Why Security Fundamentals Still Matter for AI

The CIA triad, least privilege, and defence in depth aren't relics. They're the foundation AI security must be built on.

Every few years, a technology arrives that tempts the security community to throw out the playbook and start from scratch. Cloud computing did it. Mobile did it. Now artificial intelligence is doing it again, with louder voices and higher stakes.

The temptation is understandable. Large language models behave in ways that feel alien compared to traditional software. They hallucinate, they can be manipulated through natural language, and their decision-making processes remain largely opaque. Faced with these novel characteristics, some organisations conclude that entirely new security frameworks are needed.

That conclusion is wrong. Not because AI presents no new challenges (it clearly does), but because the foundational principles of information security remain as relevant as ever. The task is adaptation, not reinvention.

The CIA Triad Has Not Retired

Confidentiality, integrity, and availability: three words that have anchored security thinking for decades. They apply to AI systems just as firmly as they apply to databases, networks, and endpoints.

Confidentiality in AI contexts means protecting training data, model weights, and inference outputs from unauthorised access. Models trained on sensitive data can leak that data through carefully crafted prompts or membership inference attacks. The principle is unchanged; the attack surface is new.

Integrity demands that models behave as intended and that their outputs can be trusted. Data poisoning, adversarial examples, and model tampering all threaten integrity. When a model’s training data is corrupted, the resulting outputs cannot be relied upon, just as a database with tampered records cannot be trusted.

Availability requires that AI services remain operational and resilient. Model denial-of-service attacks, resource exhaustion through complex queries, and infrastructure failures all fall under this heading. Organisations that deploy AI-powered decision systems need the same availability planning they apply to any critical service.

The AI Security Fundamentals framework explores these connections in greater depth, mapping classical principles to the specific challenges AI introduces.

Least Privilege: Still the Hardest Principle to Enforce

The principle of least privilege states that any user, process, or system should have only the minimum access necessary to perform its function. It has always been difficult to implement well. AI makes it harder, but no less important.

Consider an AI agent with access to internal systems. If that agent is granted broad permissions “to be useful,” it becomes a potent attack vector. A compromised or manipulated agent with administrative access to databases, APIs, and file systems represents a catastrophic risk.

Practical application for AI systems

  • Scope model access tightly. An AI assistant that answers HR policy questions does not need access to financial databases. Define boundaries before deployment, not after an incident.
  • Use ephemeral credentials. Where AI systems interact with APIs or data stores, issue short-lived tokens rather than persistent keys.
  • Audit agent actions continuously. Every action an AI agent takes should be logged and reviewable. This is not optional; it is a core security control.
  • Separate training and inference environments. The systems used to train models should be isolated from production inference infrastructure. Cross-environment access should require explicit justification.

These are not revolutionary ideas. They are the same principles applied to service accounts, microservices, and automated workflows for years. The difference is that AI systems often blur the boundaries between user and application, making disciplined access control even more critical.

Defence in Depth Remains the Right Strategy

No single control stops every attack. This has been true since the earliest days of network security, and it remains true for AI. Defence in depth, the practice of layering multiple independent controls so that the failure of one does not compromise the whole system, is essential.

For AI deployments, defence in depth means:

  1. Input validation and sanitisation at the point where data enters the system, whether that data comes from users, APIs, or training pipelines.
  2. Model-level controls such as output filtering, guardrails, and anomaly detection on inference behaviour.
  3. Infrastructure security covering the compute, storage, and networking layers that host AI workloads.
  4. Monitoring and response capabilities that detect unusual patterns in model behaviour, data access, or system performance.
  5. Governance and policy that define acceptable use, risk thresholds, and escalation procedures.

Each layer operates independently. If prompt injection bypasses input validation, output filtering should catch harmful responses. If output filtering fails, monitoring should detect the anomaly. No single layer is expected to be perfect; the combination provides resilience.

The Danger of “AI Exceptionalism”

There is a growing tendency to treat AI security as an entirely separate discipline, disconnected from the broader security programme. This is a mistake for several reasons.

First, it leads to duplicated effort. Organisations build parallel governance structures, risk frameworks, and incident response processes for AI when existing structures could be extended.

Second, it creates gaps. When AI security is siloed, the connections between AI risks and broader organisational risks are missed. A data breach that compromises training data is both a data protection incident and an AI security incident. It needs a unified response.

Third, it undermines institutional knowledge. Security teams have decades of hard-won experience in threat modelling, risk assessment, and control implementation. Treating AI as wholly exceptional discards that experience precisely when it is most needed.

The more productive approach is to ask: “How do our existing principles apply here, and where do we need to extend them?” This question leads to stronger, more coherent security postures than starting from a blank page.

What Needs to Change

Acknowledging that fundamentals endure does not mean nothing needs to change. Several areas demand genuine adaptation:

  • Threat modelling must expand to include AI-specific attack vectors such as prompt injection, training data poisoning, and model extraction.
  • Security testing must evolve to include adversarial testing, red-teaming of AI outputs, and evaluation of model robustness.
  • Skills must develop. Security professionals need to understand how machine learning works, not to become data scientists, but to assess risks competently.
  • Supply chain scrutiny must deepen. Pre-trained models, third-party datasets, and open-source ML libraries all introduce dependencies that require evaluation.

These adaptations build on existing foundations. Threat modelling is not new; it simply gains new inputs. Security testing is not new; it gains new techniques. Supply chain risk management is not new; it gains new scope.

Moving Forward with Confidence

The organisations that will secure AI most effectively are those that resist the urge to discard what works. The CIA triad, least privilege, defence in depth, separation of duties, continuous monitoring: these principles have survived every previous technology shift because they address enduring truths about risk, access, and trust.

AI changes the context in which these principles operate. It does not change their validity. Security teams should invest in understanding AI’s specific characteristics and risks, then apply their existing expertise to address them.

The fundamentals still matter. They always have. The real question is whether organisations have the discipline to apply them consistently, even when the technology feels unfamiliar.

Start with the principles. Extend them to the new context. Build from there.