Navigating the EU AI Act for Security Leaders
The EU AI Act introduces risk-based obligations for AI systems. Here's what security leaders need to know and do now.
The EU AI Act represents the most comprehensive attempt by any jurisdiction to regulate artificial intelligence through legislation. For security leaders, it introduces obligations that extend well beyond traditional data protection requirements. Understanding the Act’s structure, timelines, and practical implications is essential for any organisation deploying AI systems that touch European markets or citizens.
The Risk-Based Framework
The Act classifies AI systems into four risk tiers, each carrying different obligations. This risk-based approach mirrors established principles in security governance: the higher the risk, the greater the controls required.
Unacceptable risk (prohibited)
Certain AI practices are banned outright. These include social scoring systems used by governments, real-time remote biometric identification in public spaces (with narrow exceptions for law enforcement), and AI systems that exploit vulnerabilities of specific groups such as children or people with disabilities. Manipulative AI techniques that cause harm through subliminal methods also fall under this category.
Security leaders should audit current and planned AI deployments against the prohibited list. Even if an organisation does not intend to deploy such systems, third-party tools or vendor products may include capabilities that cross these boundaries.
High risk
This is the tier with the most significant compliance burden. High-risk AI systems include those used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice.
For high-risk systems, the Act mandates:
- Risk management systems that identify, analyse, and mitigate risks throughout the AI system’s lifecycle.
- Data governance ensuring that training, validation, and testing datasets are relevant, representative, and free from errors to the extent possible.
- Technical documentation sufficient to demonstrate compliance with the Act’s requirements.
- Record-keeping and logging that enable traceability of the system’s operations.
- Transparency obligations requiring that deployers and affected persons receive clear information about the system’s purpose and functioning.
- Human oversight measures that allow natural persons to effectively supervise the system’s operation.
- Accuracy, robustness, and cybersecurity appropriate to the system’s purpose and risk level.
That final requirement is particularly noteworthy for security leaders. The Act explicitly mandates cybersecurity measures proportionate to the risk, creating a direct regulatory obligation for security controls on qualifying AI systems.
Limited risk
Systems with limited risk face transparency obligations. Chatbots, for instance, must inform users that they are interacting with an AI system. Deepfake content must be labelled as artificially generated. These requirements are narrower but still demand implementation effort.
Minimal risk
The vast majority of AI systems fall into this category and face no specific obligations under the Act. However, the European Commission encourages voluntary adoption of codes of conduct, and organisations would be wise to maintain responsible practices regardless of legal requirements.
Key Timelines
The Act entered into force on 1 August 2024, with obligations phasing in over a staggered timeline:
- February 2025: Prohibitions on unacceptable-risk AI systems apply.
- August 2025: Obligations for general-purpose AI (GPAI) models take effect, along with governance structures.
- August 2026: The majority of the Act’s provisions become applicable, including high-risk system requirements.
- August 2027: Extended deadlines for certain high-risk AI systems that are components of regulated products.
This phased approach provides lead time, but not as much as it might appear. Building compliant risk management systems, documentation frameworks, and monitoring capabilities takes considerable effort. Organisations that wait until 2026 to begin preparing for high-risk obligations will find themselves under significant pressure.
What Security Leaders Should Focus On
Classify your AI systems
The first practical step is a comprehensive inventory and classification exercise. Map every AI system the organisation develops, deploys, or procures against the Act’s risk tiers. This includes:
- Internal AI tools used for decision-making in HR, finance, or operations.
- Customer-facing AI systems such as chatbots, recommendation engines, and automated decision systems.
- Third-party AI services embedded in vendor products or cloud platforms.
- AI components in existing software that may not be immediately obvious.
Many organisations will discover AI capabilities in their technology stack that nobody formally evaluated or classified. The inventory process itself delivers security value by improving visibility.
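To make the inventory actionable, it helps to capture each system as a structured record and run a first-pass tier mapping before legal review. The sketch below is purely illustrative: the field names, the abbreviated ANNEX_III_AREAS set, and the triage logic are assumptions for this article, not a legal determination, and any real classification needs legal sign-off.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Illustrative subset of Annex III areas; the Act's full list is authoritative.
ANNEX_III_AREAS = {
    "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    owner: str                  # accountable business owner
    provider: str               # internal team or third-party vendor
    use_area: str               # domain in which the system is used
    interacts_with_users: bool  # e.g. chatbots, generated content
    prohibited_practice: bool   # flagged during legal/compliance review

def first_pass_tier(system: AISystem) -> RiskTier:
    """Rough triage only -- final classification requires legal review."""
    if system.prohibited_practice:
        return RiskTier.PROHIBITED
    if system.use_area in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if system.interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = [
    AISystem("CV screening assistant", "HR", "Vendor X", "employment", False, False),
    AISystem("Support chatbot", "Customer Ops", "internal", "customer service", True, False),
]
for s in inventory:
    print(f"{s.name}: {first_pass_tier(s).value}")
```

Even this crude triage surfaces useful questions: who owns the system, where it is used, and whether anyone has actually checked it against the prohibited list.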
Align with existing security frameworks
The Act’s requirements for high-risk systems overlap substantially with established security and governance frameworks. Organisations already implementing ISO 27001, NIST AI RMF, or similar standards will find familiar concepts in the Act’s obligations.
The AI Security Roadmap provides a structured approach to building security capabilities that satisfy both the Act’s requirements and broader organisational security needs. Rather than treating EU AI Act compliance as a standalone project, integrate it into the security programme’s existing governance structure.
Specific alignment points include:
- Risk management maps directly to existing enterprise risk frameworks. Extend current risk registers to include AI-specific risks identified in the Act.
- Documentation requirements align with security documentation practices. Technical documentation for high-risk AI systems should be incorporated into the organisation’s information security management system.
- Logging and monitoring obligations echo existing security operations practices. Ensure that AI system logs feed into the organisation’s SIEM or security monitoring platform (a minimal event sketch follows this list).
- Incident reporting under the Act can be integrated with existing security incident response processes.
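To make the logging point concrete, the sketch below emits a structured, traceable event for each AI decision so it can be picked up by whatever log forwarder and SIEM the organisation already runs. The event schema and field names are assumptions for illustration, not a standard format; align them with the Act’s record-keeping obligations and your own monitoring pipeline.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_ai_decision(system_id: str, model_version: str,
                    input_ref: str, output_summary: str,
                    human_override: bool) -> dict:
    """Build and emit a traceable record of one AI system decision.

    The schema here is an assumption for this sketch; adapt the fields to
    what your SIEM and the Act's record-keeping obligations actually need.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_ref": input_ref,          # a reference, not raw personal data
        "output_summary": output_summary,
        "human_override": human_override,
    }
    # In practice this log line would be collected by an existing forwarder
    # (syslog, agent, etc.) and routed into the SIEM like any other source.
    logger.info(json.dumps(event))
    return event

log_ai_decision("cv-screening-assistant", "2024.11.2",
                "application:12345", "shortlisted", human_override=False)
```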
Address the cybersecurity mandate
Article 15 of the Act requires high-risk AI systems to be resilient against attempts by unauthorised third parties to exploit vulnerabilities. This includes protection against:
- Data poisoning and adversarial attacks that compromise model integrity.
- Model extraction or inversion attacks that threaten confidentiality.
- Exploitation of supply chain vulnerabilities in AI components.
For security leaders, this creates a clear mandate to apply security controls specifically to AI systems, not as a discretionary best practice, but as a legal obligation. Threat modelling, penetration testing, and security monitoring must extend to cover AI-specific attack vectors.
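A simple starting point for extending testing to AI-specific attack vectors is an input-perturbation smoke test: perturb inputs slightly and check that predictions stay stable. The sketch below is only that, a sketch: the predict function is a toy stand-in for a real model interface, and random noise is far weaker than a genuine gradient-based adversarial attack, so treat it as an illustration of the idea rather than an adversarial evaluation.

```python
import numpy as np

def predict(batch: np.ndarray) -> np.ndarray:
    """Stand-in for the real model's inference call (assumption)."""
    # Toy deterministic linear scorer so the sketch runs end to end.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=batch.shape[1])
    return (batch @ weights > 0).astype(int)

def noise_robustness_rate(inputs: np.ndarray, epsilon: float = 0.05,
                          trials: int = 20) -> float:
    """Fraction of inputs whose prediction stays stable under small random
    perturbations. A crude smoke test, not a substitute for proper
    adversarial evaluation."""
    baseline = predict(inputs)
    stable = np.ones(len(inputs), dtype=bool)
    rng = np.random.default_rng(42)
    for _ in range(trials):
        noisy = inputs + rng.uniform(-epsilon, epsilon, size=inputs.shape)
        stable &= (predict(noisy) == baseline)
    return float(stable.mean())

samples = np.random.default_rng(1).normal(size=(100, 16))
print(f"stable under perturbation: {noise_robustness_rate(samples):.0%}")
```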
Manage the supply chain
The Act distinguishes between providers (those who develop AI systems) and deployers (those who use them). Both have obligations, but the nature of those obligations differs. Organisations that procure AI systems from third parties remain responsible for ensuring compliant deployment.
This means:
- Vendor assessments must evaluate AI providers’ compliance with the Act’s requirements for the relevant risk tier (a minimal assessment-record sketch follows this list).
- Contractual provisions should address data governance, documentation access, incident notification, and ongoing monitoring responsibilities.
- Due diligence on pre-trained models is essential when building systems on foundation models or fine-tuning third-party models. The Act’s general-purpose AI provisions add specific obligations for providers of these models.
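One way to operationalise the vendor assessment point is to track provider due diligence as a structured record with a simple deployment gate, as in the sketch below. The fields and the ready_for_deployment rule are assumptions for illustration; a real assessment needs input from legal, procurement, and the deploying business unit.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderAssessment:
    """Illustrative record of supply-chain due diligence for one AI provider.

    Field names are assumptions for this sketch; map them to your own
    vendor-risk process and the obligations of the relevant risk tier.
    """
    vendor: str
    system: str
    risk_tier: str                      # from the inventory classification
    technical_docs_available: bool      # provider documentation shared?
    incident_notification_sla: str      # contractual notification commitment
    gpai_provider: bool                 # subject to general-purpose AI provisions?
    open_findings: list[str] = field(default_factory=list)

    def ready_for_deployment(self) -> bool:
        """Simple gate: documentation in place and no open findings."""
        return self.technical_docs_available and not self.open_findings

assessment = ProviderAssessment(
    vendor="Vendor X", system="CV screening assistant", risk_tier="high risk",
    technical_docs_available=True, incident_notification_sla="72h",
    gpai_provider=False,
)
print(assessment.ready_for_deployment())
```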
Prepare for enforcement
The Act establishes the European AI Office to supervise general-purpose AI models and coordinate enforcement across the EU, while national competent authorities in each member state handle day-to-day supervision of most other systems. Penalties for non-compliance are significant: up to 35 million euros or 7% of global annual turnover for prohibited practices, and up to 15 million euros or 3% of turnover for other violations.
Security leaders should ensure that compliance evidence is maintained proactively. Documentation, risk assessments, testing records, and audit trails should be kept current rather than assembled reactively in response to regulatory enquiries.
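One way to keep evidence current rather than reactive is a simple register with a staleness check, as in the illustrative sketch below. The 180-day review window and the field names are assumptions to replace with your own policy and tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class EvidenceItem:
    """One piece of compliance evidence (names are illustrative)."""
    system: str
    artefact: str          # e.g. "risk assessment", "test report", "audit log export"
    last_reviewed: datetime

def stale_items(register: list[EvidenceItem],
                max_age: timedelta = timedelta(days=180)) -> list[EvidenceItem]:
    """Return evidence not reviewed within the chosen window.

    The 180-day window is an assumption; set it from your own policy.
    """
    cutoff = datetime.now(timezone.utc) - max_age
    return [item for item in register if item.last_reviewed < cutoff]

register = [
    EvidenceItem("CV screening assistant", "risk assessment",
                 datetime(2024, 1, 15, tzinfo=timezone.utc)),
    EvidenceItem("CV screening assistant", "adversarial test report",
                 datetime(2025, 6, 1, tzinfo=timezone.utc)),
]
for item in stale_items(register):
    print(f"REVIEW NEEDED: {item.system} / {item.artefact}")
```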
Practical Steps for the Next Six Months
- Complete an AI inventory across the organisation, including shadow deployments and third-party integrations.
- Classify each system against the Act’s risk tiers, prioritising those most likely to qualify as high-risk.
- Gap-assess current controls against the Act’s requirements for applicable risk tiers.
- Extend existing security processes to cover AI-specific risks, including threat modelling, testing, and monitoring.
- Engage procurement and legal teams to update vendor assessment criteria and contractual templates.
- Establish a cross-functional AI governance group that brings together security, legal, data protection, and business stakeholders.
The EU AI Act is not simply a compliance exercise. It reflects a growing global consensus that AI systems require proportionate governance and security controls. Organisations that treat it as an opportunity to strengthen their AI security posture, rather than a box-ticking burden, will be better positioned for whatever regulatory landscape follows.