2026-03-10
What Is an Authority Runtime?
Intended Team · Founding Team
The security industry has no shortage of categories. Identity providers. Policy engines. Access control lists. Role-based authorization. Attribute-based authorization. Each solves a piece of the puzzle. None of them solves the puzzle that AI agents create.
An authority runtime is a new category of infrastructure. It sits at the execution boundary between an AI agent's decision and the real-world action that follows. It answers a question that no existing category was designed to answer: should this autonomous system be allowed to take this specific action, right now, given everything we know about the context?
What It Is Not
An authority runtime is not an identity provider. Identity providers answer "who is this?" An authority runtime already knows who the agent is. It needs to know whether that agent, with that identity, should be allowed to perform this particular action at this particular moment.
An authority runtime is not a policy engine. Policy engines like OPA or Cedar evaluate rules against data and return allow or deny decisions. They are evaluation functions. An authority runtime includes policy evaluation, but it also includes intent classification, risk scoring, evidence collection, cryptographic proof generation, and audit chain maintenance. A policy engine is a component inside an authority runtime. It is not the whole thing.
An authority runtime is not an access control layer. Access control determines what resources a principal can reach. It operates on coarse-grained permissions: this role can read this table, this service account can call this API. An authority runtime operates on fine-grained intent: this agent wants to create a purchase order for $47,000 with a vendor it has never transacted with before, during a weekend, while its cumulative daily spend is already at 80% of budget.
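To make the contrast concrete, here is a minimal sketch of the difference between what an access control layer sees and what an authority runtime evaluates. The field names are illustrative, not a real schema:

```python
from dataclasses import dataclass

# Coarse-grained permission: all an access control layer sees.
COARSE_GRANT = ("agent-7", "purchasing-api", "write")

# Fine-grained intent: the contextual record an authority runtime
# would evaluate. Every field name here is hypothetical.
@dataclass
class Intent:
    agent_id: str
    action: str                # what the agent is trying to accomplish
    amount_usd: float
    vendor_known: bool         # has the org transacted with this vendor?
    is_weekend: bool
    daily_spend_ratio: float   # cumulative daily spend / daily budget

intent = Intent(
    agent_id="agent-7",
    action="create_purchase_order",
    amount_usd=47_000,
    vendor_known=False,
    is_weekend=True,
    daily_spend_ratio=0.80,
)
```

The coarse grant says the write is allowed. The intent record carries everything that makes this particular write dangerous.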
An authority runtime is not a firewall. Firewalls filter traffic based on network-level rules. An authority runtime evaluates semantic meaning. It understands that "deploy to staging" and "deploy to production" are the same action verb targeting different resources with wildly different risk profiles.
Why the Category Exists Now
Humans are slow. That is not an insult. It is an architectural feature. When a human operator clicks a button to deploy code, they have already thought about whether they should. They checked the pull request. They looked at the test results. They considered the time of day. They remembered that the last Friday deploy caused an incident. The slowness of human cognition builds in a natural governance layer.
AI agents are fast. An agent can make a hundred tool calls in the time it takes a human to read a Slack message. Speed eliminates the natural governance layer. The gap between "can do" and "should do" that human slowness used to fill now needs to be filled by infrastructure.
Traditional authorization systems were designed for the human speed regime. They check permissions at login time or at API gateway boundaries. They assume that if a principal has a permission, every exercise of that permission is equivalent. A user with "write" access to a database can write one row or a million rows, and the authorization system treats both the same.
AI agents break that assumption. An agent with database write access might decide to update every record in a table. The authorization check passes because the agent has the permission. But the action is catastrophic. The authorization system has no concept of intent, risk, or proportionality. It was never designed to have those concepts because human operators provided them implicitly.
The authority runtime category exists because AI agents need the governance that human cognition used to provide, delivered at machine speed, with cryptographic proof.
The Five Properties
An authority runtime has five properties that distinguish it from adjacent categories:
**Intent-aware evaluation.** Every action is classified by what the agent is trying to accomplish, not just what API it is calling. A "POST /orders" request is just an HTTP method and path. The intent "create a $50,000 purchase order with an unapproved vendor" carries semantic meaning that drives evaluation.
**Contextual risk scoring.** The same action can be low-risk or high-risk depending on context. Deploying code at 2 PM on a Tuesday is different from deploying code at 2 AM on a Saturday. Sending a $100 payment is different from sending a $100,000 payment. The authority runtime scores risk dynamically, not statically.
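A toy scoring function shows the idea: the same action produces different scores in different contexts. The weights and thresholds below are illustrative, not how any production scorer is tuned:

```python
from datetime import datetime

def risk_score(amount_usd: float, when: datetime, vendor_known: bool) -> float:
    """Toy contextual risk score in [0, 1]. Weights are illustrative."""
    score = min(amount_usd / 100_000, 1.0) * 0.5    # larger amounts, more risk
    if when.weekday() >= 5 or not (9 <= when.hour < 18):
        score += 0.25                               # off-hours activity
    if not vendor_known:
        score += 0.25                               # unfamiliar counterparty
    return min(score, 1.0)

# Same $100 payment, two contexts, two very different scores.
tuesday_2pm = datetime(2026, 3, 10, 14, 0)
saturday_2am = datetime(2026, 3, 14, 2, 0)
low = risk_score(100, tuesday_2pm, vendor_known=True)
high = risk_score(100, saturday_2am, vendor_known=False)
```

The action is identical in both calls. Only the context moves the score.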
**Deterministic enforcement.** The authority runtime makes a decision before the action executes. Not after. Not eventually. Before. The action does not happen unless the authority runtime issues a token. There is no "log and allow" mode in production. The default is fail-closed.
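Fail-closed enforcement can be sketched in a few lines. The gate below is a simplification (real enforcement happens at the execution boundary, not in a wrapper function), but it shows the invariant: no explicit allow, no execution, and an evaluation error counts as a denial:

```python
def guarded_execute(action, evaluate):
    """Fail-closed gate: the action runs only after an explicit allow.
    Evaluation errors are treated as denials, never as allows."""
    try:
        decision = evaluate(action)
    except Exception:
        decision = "deny"          # fail closed: errors never default to allow
    if decision != "allow":
        raise PermissionError("action blocked before execution")
    return action()                # executes only after an explicit allow

# A denied action never runs; the side effect below never happens.
executed = []
try:
    guarded_execute(lambda: executed.append("deployed"),
                    evaluate=lambda a: "deny")
except PermissionError:
    pass
```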
**Cryptographic proof.** Every decision produces a signed token that proves what was evaluated, what policies were applied, what the risk scores were, and what the decision was. The token is independently verifiable. An auditor can confirm that a specific action was authorized without trusting the system that made the decision.
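The shape of such a token can be sketched with the standard library. This sketch uses a symmetric HMAC for brevity; a real system would use asymmetric signatures (for example Ed25519) so that verifiers never need the signing secret. The payload fields are illustrative:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"

def issue_token(decision: dict) -> dict:
    """Sign the full evaluation record so it can be re-verified later."""
    payload = json.dumps(decision, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"decision": decision, "signature": sig}

def verify_token(token: dict) -> bool:
    """Recompute the signature over the claimed decision record."""
    payload = json.dumps(token["decision"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = issue_token({
    "intent": "create_purchase_order",
    "policies": ["spend-limit-v3"],   # hypothetical policy name
    "risk": 0.5,
    "decision": "allow",
})
```

Because the signature covers the entire record, changing any field after the fact (the risk score, the policy list, the decision itself) invalidates the token.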
**Immutable audit chain.** Every decision, whether allowed, denied, or escalated, is recorded in a hash-linked chain. Each entry references the previous entry's hash. Tampering with any entry breaks the chain. The audit trail is not a log file that can be edited. It is a cryptographic structure that proves completeness and ordering.
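The hash-linking scheme described above can be sketched directly. Each entry commits to the previous entry's hash, so editing any record breaks every link after it:

```python
import hashlib
import json

GENESIS = "0" * 64  # hash placeholder for the first entry

def append_entry(chain: list, record: dict) -> None:
    """Append a record linked to the hash of the previous entry."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def chain_intact(chain: list) -> bool:
    """Recompute every hash; a tampered entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"record": entry["record"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"decision": "allow", "intent": "approve_refund"})
append_entry(chain, {"decision": "deny", "intent": "mass_update"})
```

Verification walks the chain from the start, which is also what gives the structure its ordering and completeness guarantees: an entry cannot be silently removed or reordered without breaking a hash.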
How It Works in Practice
An AI agent decides to take an action. Maybe it wants to update a customer record. Maybe it wants to approve a refund. Maybe it wants to deploy a service.
The authority runtime intercepts the action before execution. It classifies the intent: what is the agent trying to do, in which domain, affecting which resources? It scores the risk: how dangerous is this action given the current context, the agent's history, and the organizational policies? It evaluates policies: is this agent authorized for this intent at this risk level?
The result is one of three outcomes. Allow: the action proceeds, a signed token is issued as proof, and the decision is recorded. Deny: the action is blocked, the agent receives a structured explanation, and the decision is recorded. Escalate: the action is held pending human review, a notification is sent, and the decision is recorded when the human responds.
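The three-outcome decision step can be sketched as a single function. The thresholds and field names here are illustrative, standing in for the policy evaluation the text describes:

```python
def decide(intent: str, risk: float,
           allow_below: float = 0.3, escalate_below: float = 0.7) -> dict:
    """Toy decision step: thresholds are illustrative, not real policy."""
    if risk < allow_below:
        outcome = "allow"      # proceeds; signed token issued
    elif risk < escalate_below:
        outcome = "escalate"   # held pending human review
    else:
        outcome = "deny"       # blocked with a structured explanation
    # Every path is recorded, whatever the outcome.
    return {"intent": intent, "risk": risk,
            "outcome": outcome, "recorded": True}
```

Note that all three branches produce a record: the audit chain captures denials and escalations, not just successes.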
The entire pipeline executes in under 50 milliseconds. The agent barely notices. But the organization gains something it never had before: deterministic, provable control over autonomous system behavior.
The Alternative
The alternative to an authority runtime is hope. Hope that the agent does the right thing. Hope that the prompt engineering is good enough. Hope that the guardrails in the LLM catch the edge cases. Hope that someone notices when something goes wrong.
Hope is not a control. Hope is not auditable. Hope does not satisfy your compliance team, your security team, your board, or your customers.
An authority runtime replaces hope with proof. Every action, authorized. Every decision, recorded. Every claim, verifiable.
That is the category. That is what Intended builds.