2026-03-04
Zero Trust for AI Agents
Intended Team · Founding Team
Zero trust is not new. The concept has been part of security architecture for over a decade. Never trust, always verify. Assume breach. Authenticate and authorize every request. The principles are well understood for human users and traditional services.
But zero trust was designed for a world where principals are humans or services with predictable behavior patterns. AI agents are neither. They are autonomous systems that make decisions, chain actions together, and operate at speeds that make human oversight impossible in real time. Applying zero trust to AI agents requires extending the model in ways that the original framework did not anticipate.
The Original Zero Trust Principles
The core zero trust principles are straightforward:
**Verify explicitly.** Always authenticate and authorize based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
**Use least privilege access.** Limit user access with just-in-time and just-enough-access, risk-based adaptive policies, and data protection.
**Assume breach.** Minimize blast radius and segment access. Verify end-to-end encryption. Use analytics to detect and respond to threats.
These principles transformed enterprise security. They eliminated the idea of a trusted internal network and replaced it with continuous verification at every access point. They work well for human users accessing applications and for services calling other services.
Why AI Agents Need More
AI agents introduce three properties that the original zero trust model does not address well.
Autonomous Decision-Making
A human user requests access to a resource. The zero trust system verifies the user's identity, checks their authorization, evaluates the risk, and grants or denies access. The human then decides what to do with that access, applying their own judgment.
An AI agent does not apply judgment in the same way. It optimizes for its objective function. If it has access to a resource, it will use that access in whatever way its reasoning determines is optimal. The zero trust system verified the agent's identity and authorization, but it did not verify whether the agent's intended use of that access is appropriate.
Zero trust for AI agents must verify intent, not just identity. It is not enough to know that the agent is who it claims to be and has the required permissions. The system must also know what the agent plans to do and whether that plan is within acceptable bounds.
Action Chaining
Humans typically perform discrete actions with natural pauses between them. They open a file, read it, think about it, make a decision, and then take the next action. Each action is relatively independent in terms of timing.
AI agents chain actions in rapid succession. An agent might read a database, process the results, call three APIs, update a configuration file, and trigger a deployment, all in under a second. Each individual action might be authorized, but the chain of actions might produce an outcome that no individual action would suggest.
Zero trust for AI agents must evaluate action sequences, not just individual actions. A deployment following a security group change following a DNS update is a different risk profile than any of those actions in isolation. The trust model must track sequences and evaluate cumulative risk.
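Sequence-aware evaluation can be sketched as a sliding window over an agent's recent actions, with risk scored on the chain rather than on each step alone. This is an illustrative sketch, not Intended's implementation; the `RISKY_SEQUENCES` patterns and scoring weights are assumptions chosen for the example.

```python
# Illustrative sketch of sequence-aware risk evaluation. The pattern list and
# the scoring weights are assumptions, not a real policy set.
from collections import deque
from typing import Deque, List, Tuple

# Action chains that are riskier together than any single step suggests.
RISKY_SEQUENCES: List[Tuple[str, ...]] = [
    ("dns_update", "security_group_change", "deployment"),
]

class SequenceRiskTracker:
    def __init__(self, window: int = 20):
        self.recent: Deque[str] = deque(maxlen=window)

    def record(self, action_type: str) -> float:
        """Record an action and return cumulative risk for the current window."""
        self.recent.append(action_type)
        risk = 0.1 * len(self.recent)  # baseline: more actions, more exposure
        history = tuple(self.recent)
        for pattern in RISKY_SEQUENCES:
            if self._contains_subsequence(history, pattern):
                risk += 0.7  # the chain itself is the signal, not any one step
        return min(risk, 1.0)

    @staticmethod
    def _contains_subsequence(history: Tuple[str, ...],
                              pattern: Tuple[str, ...]) -> bool:
        # True if the pattern's steps appear in order within the history.
        it = iter(history)
        return all(step in it for step in pattern)
```

Note that each individual `record` call would still pass a per-action check; it is only the completed chain that pushes the cumulative score over a threshold.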
Operational Speed
Human zero trust verification can afford to add a few hundred milliseconds of latency. Humans do not notice the difference between a 200ms page load and a 400ms page load. The verification overhead is invisible.
AI agents make decisions in milliseconds. Adding 500ms of verification to each action in a chain of 20 actions adds 10 seconds of overhead. The agent's responsiveness degrades noticeably. Teams push back on governance that slows their agents down, and the governance gets bypassed.
Zero trust for AI agents must operate at machine speed. Verification that cannot keep up with the agent is verification that gets removed.
The Extended Model
Zero trust for AI agents extends the original three principles with three additional principles specific to autonomous systems.
Verify Intent, Not Just Identity
Every action an AI agent takes should be evaluated for intent, not just for permission. An agent that wants to "create a purchase order for $50,000 with a new vendor" is expressing an intent with specific risk characteristics that go well beyond the permission "create purchase order."
Intent verification classifies the action semantically. It considers the action type, the parameters, the target resource, and the context. It maps the raw API call to a meaningful category that can be evaluated against domain-specific policies.
In Intended, intent verification happens through the classification stage of the decision pipeline. Every action is mapped to a canonical intent category, and the risk is scored based on the specific parameters of that intent, not just the permission required to execute it.
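The classify-then-score shape described above can be sketched as follows. The intent categories, the API-call mapping, and the thresholds are all illustrative assumptions standing in for a domain pack, not Intended's actual pipeline.

```python
# Minimal sketch of intent classification and parameter-aware risk scoring.
# Category names, mappings, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Action:
    api_call: str
    params: dict = field(default_factory=dict)

# Map raw API calls onto canonical intent categories.
INTENT_MAP = {
    "POST /purchase-orders": "procurement.create_order",
    "PUT /dns/records": "infrastructure.dns_change",
}

def classify(action: Action) -> str:
    return INTENT_MAP.get(action.api_call, "unknown")

def score(intent: str, params: dict) -> float:
    """Score risk from the parameters of the intent, not just the permission."""
    risk = 0.2
    if intent == "procurement.create_order":
        if params.get("amount", 0) > 10_000:
            risk += 0.4  # large spend
        if params.get("new_vendor"):
            risk += 0.3  # unvetted counterparty
    elif intent == "unknown":
        risk = 1.0  # an unclassifiable intent is maximally suspect
    return min(risk, 1.0)
```

The point of the sketch is the separation: the permission check would pass for any purchase order, while the intent score distinguishes a $200 reorder from a $50,000 order with a new vendor.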
Prove Every Decision
In traditional zero trust, the system logs that a verification occurred and what the outcome was. The log is stored in a centralized system that the organization controls. The integrity of the log depends on access controls and operational procedures.
For AI agents, logging is not sufficient. Every authorization decision must produce cryptographic proof that can be independently verified. The proof must include what was evaluated, which policies were applied, what the risk assessment was, and what the decision was. The proof must be signed so that any party with the public key can verify it.
This is a higher bar than traditional zero trust. Traditional zero trust trusts the logging system. AI agent zero trust does not trust anything. The cryptographic proof is the verification mechanism, and it works regardless of who controls the logging infrastructure.
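A decision proof of this shape can be sketched with a canonical serialization plus a signature over it. For brevity this sketch uses an HMAC from the standard library; a real deployment would use an asymmetric scheme such as Ed25519, so that any party holding only the public key can verify. The field names and the key handling are assumptions for illustration.

```python
# Sketch of a signed authorization decision record. HMAC stands in for an
# asymmetric signature here; field names and key handling are assumptions.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a managed signing key

def issue_proof(decision: dict) -> dict:
    """Serialize what was evaluated and decided, and attach a signature."""
    payload = json.dumps(decision, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"decision": decision, "signature": sig}

def verify_proof(proof: dict) -> bool:
    """Recompute the signature; any tampering with the record breaks it."""
    payload = json.dumps(proof["decision"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["signature"])
```

Because verification depends only on the record and the key, it works regardless of who operates the logging infrastructure, which is the property the logging-based model cannot provide.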
Enforce Boundaries on Autonomy
Traditional zero trust segments the network and limits blast radius. This works because the segmentation boundaries are static or slow-changing. A database server is in a database segment. A web server is in a web segment. The boundaries are architectural.
AI agents need dynamic boundaries on their autonomy. An agent that normally processes 30 transactions per day should be bounded if it tries to process 3,000. An agent that operates in the procurement domain should be bounded if it starts making infrastructure changes. These boundaries are not network segments. They are behavioral constraints that adapt to the agent's actual behavior.
Intended implements autonomy boundaries through velocity tracking, domain confinement, and escalation thresholds. An agent that exceeds its behavioral baseline triggers automatic escalation. An agent that attempts actions outside its configured domain is denied. These boundaries are enforced in real time, at every action, with no exceptions.
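Velocity tracking and domain confinement can be sketched together as a per-agent boundary check. The baseline, the escalation factor, and the domain names are assumptions for the example, not Intended's configuration.

```python
# Sketch of behavioral autonomy boundaries: domain confinement plus velocity
# tracking against a baseline. Thresholds and names are assumptions.
import time
from collections import deque
from typing import Optional

class AutonomyBoundary:
    def __init__(self, allowed_domains: set,
                 baseline_per_hour: int, escalation_factor: int = 10):
        self.allowed_domains = set(allowed_domains)
        self.limit = baseline_per_hour * escalation_factor
        self.timestamps: deque = deque()

    def check(self, domain: str, now: Optional[float] = None) -> str:
        """Return 'deny' (outside domain), 'escalate' (velocity), or 'allow'."""
        now = time.time() if now is None else now
        if domain not in self.allowed_domains:
            return "deny"  # domain confinement: hard stop
        # Keep a one-hour sliding window of actions, then count this one.
        while self.timestamps and now - self.timestamps[0] > 3600:
            self.timestamps.popleft()
        self.timestamps.append(now)
        if len(self.timestamps) > self.limit:
            return "escalate"  # agent exceeded its behavioral baseline
        return "allow"
```

The boundary adapts to observed behavior rather than to network topology: the same agent, with the same permissions, gets escalated the moment its action rate departs from its baseline.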
Implementation
Implementing zero trust for AI agents requires three infrastructure components that most organizations do not have today.
**An authority runtime** that evaluates every action before it executes. Not a policy engine that evaluates rules. An authority runtime that classifies intent, scores risk, evaluates policies, issues cryptographic proof, and maintains an immutable audit chain. The authority runtime is the enforcement point for AI agent zero trust.
**Domain intelligence** that provides context for risk evaluation. Without domain intelligence, the authority runtime cannot distinguish between routine actions and anomalous ones. Domain packs encode the operational knowledge that makes risk scoring accurate.
**Continuous verification infrastructure** that operates at machine speed. The verification cannot add meaningful latency to agent operations. Sub-50ms evaluation is the target. Anything slower gets bypassed.
Zero trust for AI agents is not a marketing repackaging of existing zero trust. It is a genuine extension of the model to handle principals that think, decide, and act autonomously. The principles are the same: never trust, always verify. The implementation is different because the principals are different. AI agents need intent verification, cryptographic proof, and behavioral boundaries. These are not optional enhancements. They are the minimum viable zero trust for autonomous systems.