People and Culture | 27 Jan, 2025

Building Security Culture in AI Teams

Generic security training fails AI teams. Security culture for AI requires champions, cross-functional collaboration, and specific threat awareness.

Every organisation claims to take security seriously. Most of them prove it with an annual compliance training module that employees click through in fifteen minutes. For traditional IT teams, this approach was already inadequate. For AI teams, it is actively dangerous.

AI development introduces threat categories that generic security awareness programmes simply do not cover. Data poisoning, model theft, adversarial manipulation, prompt injection: these are not edge cases. They are foreseeable risks that AI teams encounter in their daily work. If security culture does not reach these teams in a meaningful way, technical controls alone will not close the gap.

Why Generic Training Fails AI Teams

Standard security awareness training is built around a familiar set of topics: phishing recognition, password hygiene, social engineering, acceptable use policies. These remain important. But they miss the specific risks that AI practitioners face.

A data engineer managing training pipelines needs to understand data provenance and integrity threats. A machine learning engineer deploying models needs to recognise adversarial input patterns. A product manager defining AI features needs to understand the security implications of model exposure through APIs.

Generic training addresses none of these. Worse, it can create a false sense of security by implying that completing the standard programme is sufficient. When an organisation’s security culture does not speak the language of AI risks, AI teams learn to see security as someone else’s problem.

The Security Champion Model

One of the most effective approaches to embedding security culture in specialist teams is the security champion model. Rather than relying solely on a centralised security function, this approach identifies and empowers individuals within AI teams to serve as security advocates.

What a Security Champion Does

A security champion is not a part-time security analyst. The role is about influence, awareness, and connection. Effective champions:

  • Translate security concerns into terms that resonate with their team’s daily work. For an ML engineering team, this means framing threats in terms of model integrity and pipeline reliability, not abstract compliance requirements.
  • Identify risks early by participating in design reviews, data pipeline discussions, and model deployment decisions. A champion who understands both the AI work and the security implications can spot issues that a centralised review might miss.
  • Serve as a bridge between the AI team and the security function, ensuring that questions flow in both directions. When security policies seem impractical for AI workflows, the champion can articulate why and help negotiate workable alternatives.

Selecting and Supporting Champions

The best security champions are respected within their teams and genuinely curious about security. Volunteerism matters more than seniority. Organisations should provide champions with dedicated training (covering AI-specific threats, not just generic security), regular access to the security team, and visible recognition for their contributions.

Critically, the champion role must come with protected time. Asking someone to take on security advocacy while maintaining a full delivery workload guarantees that security becomes the first thing dropped under pressure.

Cross-Functional Collaboration as a Cultural Foundation

The Bletchley Park model offers a powerful lens for thinking about security culture. During the Second World War, Bletchley Park’s success depended on bringing together mathematicians, linguists, engineers, and military personnel: people who thought differently and challenged each other’s assumptions. The result was an organisation that solved problems none of its individual disciplines could have tackled alone.

AI security demands the same cross-functional approach. When security teams, data scientists, ML engineers, ethicists, and product managers work in isolation, blind spots multiply. Each group sees part of the risk landscape but not the whole.

Practical Steps for Cross-Functional Security Culture

Joint threat modelling sessions. Bring AI practitioners and security professionals together to map threats specific to the organisation’s AI systems. Use frameworks like STRIDE or MITRE ATLAS, but ensure that the people who build and operate the models are in the room. They will identify realistic attack paths that security teams alone might overlook.
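
One concrete output of such a session is a shared threat register that both the AI team and the security function can work from. The sketch below is a minimal, illustrative example in Python: the ThreatEntry structure, its field names, and the example entry are assumptions for the sake of illustration rather than a prescribed format, and any MITRE ATLAS references should be checked against the current catalogue.

```python
from dataclasses import dataclass, field


@dataclass
class ThreatEntry:
    """One row of a joint threat-modelling session's output.

    Field names are illustrative; adapt them to whatever register
    format the organisation already uses.
    """
    asset: str              # what is at risk (model, dataset, pipeline, API)
    threat: str             # plain-language description agreed in the session
    framework_ref: str      # e.g. a STRIDE category or a MITRE ATLAS technique
    attack_path: str        # realistic path identified by the practitioners
    owner: str              # who follows up (often the security champion)
    mitigations: list[str] = field(default_factory=list)


# Example entry from a hypothetical session on a customer-facing model API.
example = ThreatEntry(
    asset="sentiment-model inference API",
    threat="Model extraction via high-volume, systematically varied queries",
    framework_ref="MITRE ATLAS (exfiltration via inference API)",  # verify against the current catalogue
    attack_path="Unauthenticated bulk queries through the public endpoint",
    owner="ml-platform security champion",
    mitigations=["rate limiting per API key", "query logging and anomaly review"],
)
```

The value is less in the tooling than in the conversation: practitioners name the realistic attack paths, the security team maps them to a framework, and the champion owns the follow-up.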

Shared incident reviews. When AI-related security events occur (even near-misses), conduct reviews that include both security and AI team members. These reviews build shared understanding and reinforce the message that AI security is a collective responsibility.

Rotational shadowing. Allow AI team members to spend time with the security operations team, and vice versa. Even a few days of observation builds empathy and shared vocabulary that pays dividends when collaboration is needed under pressure.

Collaborative policy development. Security policies for AI systems should be co-authored with the teams who will implement them. Policies developed in isolation tend to be either too vague to be actionable or too prescriptive to accommodate the realities of AI development workflows.

Building AI-Specific Threat Awareness

Security culture starts with awareness, but awareness must be specific and practical to drive behaviour change. For AI teams, this means going beyond high-level threat briefings to provide concrete, scenario-based learning.

Scenario-Based Training

Replace passive slide decks with interactive scenarios that reflect real AI security challenges:

  • A data scientist discovers that a third-party dataset used for training contains systematically altered labels. What steps should they take? Who should they notify?
  • An ML engineer notices unusual query patterns against a model API that suggest extraction attempts. How do they escalate this? What evidence should they preserve? (A minimal detection sketch follows this list.)
  • A product team wants to fine-tune a large language model using customer interaction data. What data governance and security considerations apply?
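
For the second scenario, it helps to make “unusual query patterns” concrete. The following is a minimal, assumed sketch of how an ML engineering team might flag clients whose request volume and input diversity resemble extraction probing; the log format, the thresholds, and the flag_possible_extraction function are hypothetical, not a reference implementation.

```python
from collections import defaultdict

# Hypothetical log records: one dict per API call, e.g.
# {"client_id": "abc", "input_text": "...", "timestamp": 1706350000}

def flag_possible_extraction(records, volume_threshold=5000, diversity_threshold=0.95):
    """Flag clients whose query behaviour resembles model extraction.

    Heuristics (illustrative only): very high request volume combined with
    almost no repeated inputs suggests systematic probing rather than
    normal product usage. Thresholds are placeholders to be tuned.
    """
    volume = defaultdict(int)
    unique_inputs = defaultdict(set)

    for rec in records:
        client = rec["client_id"]
        volume[client] += 1
        unique_inputs[client].add(rec["input_text"])

    flagged = []
    for client, count in volume.items():
        diversity = len(unique_inputs[client]) / count
        if count >= volume_threshold and diversity >= diversity_threshold:
            flagged.append({
                "client_id": client,
                "requests": count,
                "input_diversity": round(diversity, 3),
            })
    return flagged
```

Culturally, the point of the scenario is not the detection logic itself but what happens next: the engineer should know that the flagged records and the underlying request logs are the evidence to preserve, and should already be familiar with the escalation path to the security team.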

These scenarios ground abstract threats in daily decision-making. They also create opportunities for discussion, which is where culture is actually built.

Threat Intelligence Sharing

Keep AI teams informed about emerging threats and real-world incidents. Regular, brief updates on attacks targeting AI systems (model poisoning campaigns, adversarial examples in the wild, supply chain compromises in ML libraries) reinforce that these are not theoretical concerns.

The format matters. A monthly fifteen-minute briefing tailored to the AI team’s context will have far more impact than a quarterly email newsletter that nobody reads.

Measuring Security Culture

Culture is difficult to measure, but not impossible. Useful indicators for AI team security culture include:

  • Engagement metrics. How many AI team members participate in threat modelling sessions, incident reviews, or security training beyond the mandatory minimum?
  • Reporting rates. Are AI team members reporting potential security concerns proactively? An increase in reports (especially near-misses and questions) is a strong positive signal.
  • Champion activity. Are security champions actively participating in design reviews and raising security considerations? Track their involvement, not as a performance metric, but as a health indicator.
  • Time to engage security. When AI projects encounter security-relevant decisions, how early does the security function become involved? Earlier engagement indicates stronger cultural integration.
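
None of these indicators requires heavyweight tooling. As a purely illustrative sketch, a quarterly snapshot can be as simple as the record below; the CultureSnapshot fields, the trend comparison, and any figures are assumptions, not a recommended schema.

```python
from dataclasses import dataclass


@dataclass
class CultureSnapshot:
    """One quarter's security-culture indicators for an AI team (illustrative fields)."""
    quarter: str
    voluntary_training_attendees: int      # participation beyond the mandatory minimum
    proactive_reports: int                 # concerns and near-misses raised by the team
    champion_review_appearances: int       # design reviews where a champion raised security points
    median_days_to_engage_security: float  # from project kick-off to first security involvement


def trend(previous: CultureSnapshot, current: CultureSnapshot) -> dict:
    """Compare two snapshots: rising participation and reporting, and a falling
    days-to-engage figure, suggest the culture is strengthening."""
    return {
        "voluntary_training_delta": current.voluntary_training_attendees - previous.voluntary_training_attendees,
        "proactive_reports_delta": current.proactive_reports - previous.proactive_reports,
        "champion_activity_delta": current.champion_review_appearances - previous.champion_review_appearances,
        "days_to_engage_delta": current.median_days_to_engage_security - previous.median_days_to_engage_security,
    }
```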

Culture Is Infrastructure

It is tempting to treat security culture as a “soft” concern, secondary to technical controls and governance frameworks. This is a mistake. Technical controls can be bypassed by people who do not understand why they exist. Governance frameworks become checkbox exercises without genuine buy-in from the teams they govern.

For AI teams operating at the frontier of a rapidly evolving field, security culture is infrastructure. It determines whether security considerations are embedded in decisions or bolted on after the fact. It shapes whether AI practitioners see security as an enabler or an obstacle.

Building that culture requires specificity (addressing AI threats, not just generic risks), structure (champions, cross-functional collaboration, scenario-based training), and sustained investment. The organisations that treat it as a priority now will be the ones capable of moving fast and staying secure as AI capabilities continue to advance.