Why Security Fundamentals Still Matter for AI
Security principles endure, but AI demands fresh application. Why the fundamentals matter more than ever in the age of artificial intelligence.
Every generation of technology brings the same breathless claim: “This changes everything.” And every time, the fundamentals of security prove stubbornly persistent. Confidentiality, integrity, availability. Least privilege. Defence in depth. Separation of duties. These aren’t relics; they’re bedrock.
AI is no different. The attack surface has expanded. The speed of exploitation has accelerated. The consequences of failure have grown. But the principles that protect organisations haven’t changed. What’s changed is how we apply them.
Let’s be honest about where we are. Organisations are deploying AI systems at a pace that far outstrips their ability to secure them. Models are being integrated into critical business processes with minimal threat modelling. Training data is being assembled without rigorous provenance checks. And governance frameworks are still catching up.
This isn’t a criticism; it’s a recognition of reality. We’ve been here before, with cloud computing, with mobile, with the early internet. The pattern is familiar: rapid adoption, followed by a scramble to bolt on security after the fact.
The difference with AI is the scale of potential impact. When an AI system fails, or is deliberately compromised, the consequences can cascade across an entire organisation in ways that traditional systems rarely do.
While the principles endure, AI does introduce genuinely novel challenges that security professionals must grapple with:
In traditional systems, data is something to be protected. In AI systems, data is also an input that shapes behaviour. Poisoned training data doesn’t just leak information; it fundamentally alters what the system does. This dual nature of data demands a rethinking of data governance that goes far beyond access controls.
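To make that concrete, data governance for training sets can borrow from software supply-chain practice: record exactly what went into a model before it is trained, so later investigation has something to check against. The sketch below is a minimal illustration of a provenance manifest, not a production pipeline; the directory layout, file types, and manifest format are assumptions for the example.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large datasets don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(data_dir: str, source: str) -> dict:
    """Record a provenance manifest: every file's hash, its declared source,
    and when the snapshot was taken. Anything not in the manifest at training
    time becomes a question for incident response later."""
    files = sorted(Path(data_dir).rglob("*.csv"))
    return {
        "source": source,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "files": [{"path": str(p), "sha256": sha256_of(p)} for p in files],
    }


if __name__ == "__main__":
    # Hypothetical paths and source label, for illustration only.
    manifest = build_manifest("training_data/", source="vendor-feed-2024Q4")
    Path("training_manifest.json").write_text(json.dumps(manifest, indent=2))
```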
Many AI systems, particularly deep learning models, operate as effective black boxes. You can observe inputs and outputs, but the reasoning between them resists inspection. This creates challenges for audit, compliance, and incident response that security teams haven’t faced before.
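One partial mitigation is to make the parts you can observe auditable: capture inputs, outputs, model version, and timing for every inference so that compliance reviews and incident responders have a trail even when the model's internals resist inspection. The sketch below assumes a generic `predict` callable and a JSON-lines log file; both are illustrative, not a specific product's API.

```python
import json
import time
import uuid
from datetime import datetime, timezone
from typing import Any, Callable


def audited(predict: Callable[[str], Any], model_version: str, log_path: str):
    """Wrap an inference function so every call is written to an append-only
    audit log. The model stays a black box, but the record of its behaviour
    does not."""
    def wrapper(prompt: str) -> Any:
        started = time.perf_counter()
        output = predict(prompt)
        record = {
            "request_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input": prompt,
            "output": str(output),
            "latency_ms": round((time.perf_counter() - started) * 1000, 2),
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return output
    return wrapper


# Usage: wrap whatever inference call you already have (names here are hypothetical).
# classify = audited(my_model.predict, model_version="fraud-v3.2", log_path="inference_audit.jsonl")
```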
AI systems can exhibit capabilities that weren’t explicitly programmed or anticipated. This isn’t a bug; it’s a fundamental characteristic of how these systems learn. But it means that traditional testing approaches, which verify expected behaviour, are necessary but insufficient.
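In practice that means supplementing example-based tests with checks on properties that should hold for any input, expected or not. The sketch below illustrates the idea for a hypothetical text classifier: rather than asserting specific outputs for specific inputs, it asserts invariants, such as confidence scores staying in range and trivial perturbations not flipping the decision. The classifier interface and the chosen invariants are assumptions for the example.

```python
import random
import string


def check_invariants(classify, samples, trials_per_sample: int = 5):
    """Property-style checks for a classifier `classify(text) -> (label, score)`.
    Verifies behaviour we require for *all* inputs, not just the ones we expected."""
    failures = []
    for text in samples:
        label, score = classify(text)
        # Invariant 1: confidence scores must stay in [0, 1].
        if not 0.0 <= score <= 1.0:
            failures.append(("score_out_of_range", text, score))
        # Invariant 2: harmless perturbations (extra whitespace) should not
        # flip the decision.
        for _ in range(trials_per_sample):
            noise = "".join(random.choices(string.whitespace, k=3))
            perturbed_label, _ = classify(text + noise)
            if perturbed_label != label:
                failures.append(("unstable_under_perturbation", text, perturbed_label))
    return failures
```

Checks like these won't catch every emergent behaviour, but they shift testing from "does it do what we expect?" towards "does it ever do what we've ruled out?", which is the more useful question for systems that learn.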
Modern AI systems depend on pre-trained models, open-source libraries, third-party datasets, and cloud-based inference services. Each of these represents a link in a supply chain that’s often poorly understood and rarely audited.
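The same discipline used for software artifacts applies here: pin exactly which model weights and datasets you depend on, and verify them before loading. The sketch below checks a downloaded model file against a pinned SHA-256 digest; the filename and digest are placeholders, and a real pipeline would also pin library versions and verify publisher signatures where they exist.

```python
import hashlib
import sys
from pathlib import Path

# Digests recorded when each artifact was first reviewed and approved.
# The entry below is a placeholder, not a real published hash.
PINNED_ARTIFACTS = {
    "models/sentiment-base.onnx": "0123456789abcdef" * 4,
}


def verify_artifact(path: str) -> None:
    """Refuse to proceed unless the artifact on disk matches its pinned digest."""
    expected = PINNED_ARTIFACTS.get(path)
    if expected is None:
        raise RuntimeError(f"{path} is not an approved artifact")
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"digest mismatch for {path}: got {digest}")


if __name__ == "__main__":
    try:
        verify_artifact("models/sentiment-base.onnx")
    except RuntimeError as err:
        sys.exit(f"refusing to load model: {err}")
    print("artifact verified; safe to load")
```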
The good news is that decades of security practice give us a strong foundation to build on. The key is adaptation, not reinvention.
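Least privilege is one example of how directly the fundamentals translate. An AI assistant that can invoke tools or APIs should only be offered the capabilities its task requires, with everything else denied by default. The dispatcher below is a minimal, hypothetical sketch; the tool names and task profiles are invented for illustration.

```python
from typing import Callable

# Registry of everything the platform *could* expose to a model (illustrative stubs).
TOOLS: dict[str, Callable[..., str]] = {
    "search_kb": lambda query: f"results for {query!r}",
    "send_email": lambda to, body: f"sent to {to}",
    "delete_record": lambda record_id: f"deleted {record_id}",
}

# Least privilege: each task profile is granted only the tools it needs.
TASK_PROFILES = {
    "customer_support": {"search_kb"},
    "billing_admin": {"search_kb", "send_email"},
}


def dispatch(profile: str, tool_name: str, **kwargs) -> str:
    """Execute a model-requested tool call only if the task profile allows it."""
    allowed = TASK_PROFILES.get(profile, set())
    if tool_name not in allowed:
        raise PermissionError(f"{profile!r} is not permitted to call {tool_name!r}")
    return TOOLS[tool_name](**kwargs)


# A support assistant can search the knowledge base...
print(dispatch("customer_support", "search_kb", query="refund policy"))
# ...but a request to delete a record is denied by default:
# dispatch("customer_support", "delete_record", record_id="42")  # raises PermissionError
```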
Perhaps the most important fundamental of all: security is ultimately about people. The organisations that secure AI effectively will be those that build security culture into their AI programmes from the start, not as a bureaucratic hurdle, but as an enabling discipline.
This means security professionals who understand AI. Data scientists who understand threat modelling. Leadership that recognises AI security as a strategic imperative, not just a technical concern.
The fundamentals endure. The question is whether we have the will and the wisdom to apply them.
Explore how principled AI security can protect your organisation.
View the Security Roadmap