From the Team

Launch posts, architecture explainers, and operator guidance for the AI Authority Runtime.

Stay updated

Get the latest on AI agent governance, new domain packs, and platform updates.

No spam. Unsubscribe anytime.

2026-03-22

industry

The Economics of AI Agent Authorization

How much does an unauthorized AI action cost? The math makes the case for runtime authority better than any feature comparison.

Read post
#industry#economics#roi

2026-03-21

technical

LangChain + Intended: Adding Governance to Your AI Agent in 10 Minutes

A step-by-step tutorial for adding authorization and audit to any LangChain agent using the Intended Python SDK.

Read post
#technical#tutorial#langchain

2026-03-20

industry

Enterprise AI Governance in 2026: What CISOs Need to Know

AI agents are in production at the majority of Fortune 500 companies. The regulatory landscape is catching up fast. Here is what security leaders need to understand.

Read post
#industry#governance#enterprise

2026-03-20

technical

Why Permission Is Not Authority: The Gap in AI Agent Governance

AI agents are governed by permission systems designed for humans. That is a problem. Permission asks whether an identity can access a resource. Authority asks whether an action should happen. The difference matters when AI agents are making thousands of decisions per hour.

Read post
#governance#architecture#ai-agents

2026-03-19

technical

How We Scored 92/100 on Our Own Security Audit

We ran five red-team agents against our own platform. Here is what they found, and how we fixed every issue.

Read post
#security#transparency#engineering

2026-03-18

announcement

Introducing MIR: An Open Standard for Classifying AI Agent Actions

There is no common language for describing what AI agents do. MIR changes that. The Intended Intent Reference is an open taxonomy of 14 domains and 100+ categories for classifying AI agent actions. Apache 2.0 licensed.

Read post
#open-source#mir#taxonomy

2026-03-18

industry

The AI Agent Trust Problem

Every company deploying AI agents faces the same question — how do you trust autonomous software to act on your behalf?

Read post
#industry#ai-agents#trust

2026-03-17

technical

Fail-Closed vs. Fail-Open: Why Your AI Authorization Model Matters

When your authorization system goes down, do AI agents keep executing? The choice between fail-open and fail-closed is the most important architectural decision in AI agent governance. Intended is fail-closed at every boundary.

Read post
#architecture#security#governance
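
The fail-closed principle this post describes can be illustrated with a minimal sketch. This is hypothetical code, not the Intended SDK: the function names and the stand-in authority check are invented for illustration. The point is that any failure in the decision path denies the action rather than letting it through.

```python
# Hypothetical fail-closed authorization wrapper -- illustrative only,
# not the Intended SDK. Any failure in the decision path denies the action.

def check_authority(intent):
    """Stand-in for a remote authority decision; raises when the service is down."""
    raise ConnectionError("authority service unreachable")

def execute_if_authorized(intent, action):
    try:
        allowed = check_authority(intent)
    except Exception:
        allowed = False  # fail CLOSED: an unreachable engine means "deny"
    if not allowed:
        return "denied"
    return action()

result = execute_if_authorized("payments.refund", lambda: "executed")
# With the authority service unreachable, the action is denied, not executed.
```

A fail-open version would treat the exception as "allow" and keep executing during an outage, which is exactly the failure mode the post warns against.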

2026-03-15

technical

MCP + Intended: Governing Every Tool Call in 5 Lines of Code

The Model Context Protocol is becoming the standard for AI agent tool use. But MCP has no built-in authorization. Here is how to add policy-based governance to every MCP tool call with Intended's MCP Gateway in five lines of code.

Read post
#mcp#integration#tutorial

2026-03-14

technical

Cryptographic Proof-of-Authority: Why Audit Logs Are Not Enough

Audit logs tell you what happened. Cryptographic proof tells you what happened and proves it mathematically. Intended produces RS256-signed authority tokens, HMAC evidence bundles, and a hash-chained ledger for every decision.

Read post
#security#cryptography#compliance

2026-03-12

product

Building an AI-Native Company: How Intended Runs on Its Own Product

Intended is an AI-native company. Our AI agents handle support triage, billing operations, platform health, and more -- all governed by our own Authority Engine. Here is how we built it and what we learned.

Read post
#company#ai-native#operations

2026-03-10

technical

From OPA to Intended: When Policy Engines Are Not Enough for AI Agents

OPA is a great policy engine for infrastructure. But AI agents need more than policy evaluation. They need intent understanding, risk scoring, domain intelligence, and cryptographic proof. Here is the migration path.

Read post
#migration#opa#policy

2026-03-10

product

What Is an Authority Runtime?

Defining the authority runtime category and what makes it different from authorization, policy engines, and access control.

Read post
#concepts#product#authority-runtime

2026-03-09

product

AI Agents in Production: What Could Go Wrong?

Real failure scenarios when AI agents operate without governance -- unauthorized purchases, data leaks, infrastructure damage, and why governance matters.

Read post
#security#ai-agents#risk

2026-03-08

technical

The 14 Domains of AI Agent Actions

A complete walkthrough of the 14 MIR domains that classify every action an AI agent can take in an enterprise. From software development to executive operations, here is the taxonomy that makes AI governance possible.

Read post
#mir#taxonomy#domains

2026-03-08

technical

RBAC Is Not Enough for AI Agents

RBAC was designed for humans clicking buttons. AI agents need intent-aware authorization that understands context, velocity, and risk.

Read post
#security#rbac#ai-agents

2026-03-07

technical

How Intended Processes a Decision in Under 50ms

Technical deep-dive on the Intended decision pipeline -- how we achieve sub-50ms p99 latency for authority decisions.

Read post
#technical#performance#architecture

2026-03-06

product

Domain Intelligence: Why Context Matters for AI Governance

A $5,000 payment in FinOps vs. a test payment in sandbox -- same action, completely different risk. How domain intelligence makes governance accurate.

Read post
#domain-packs#intelligence#risk-scoring

2026-03-06

product

Why We Open-Sourced Our Intent Taxonomy

Open-sourcing the MIR taxonomy was a strategic decision. We studied Databricks, Confluent, and HashiCorp to understand when open-sourcing a foundational technology creates more value than keeping it proprietary. Here is what is open, what is commercial, and why.

Read post
#open-source#strategy#business-model

2026-03-05

announcement

Introducing Intended: The AI Authority Runtime

Category-defining launch post for deterministic AI execution authority.

Read post
#announcement#product

2026-03-05

product

The Audit Trail Your Compliance Team Actually Wants

What auditors look for, what most systems provide, and what Intended provides -- hash chains, evidence bundles, and independent verification.

Read post
#compliance#audit#enterprise

2026-03-04

technical

How Authority Decision Tokens Work

Deep dive into RS256 signing, nonce policy, and adapter verification flow.

Read post
#technical#security

2026-03-04

technical

Governing AI Agent Operations in Kubernetes

Kubernetes RBAC controls who can do what in a cluster. But AI agents need governance that goes beyond RBAC -- intent classification, risk scoring, and cryptographic proof for every operation. Here is how Intended's admission controller closes the gap.

Read post
#kubernetes#infrastructure#integration

2026-03-04

technical

Zero Trust for AI Agents

Applying zero-trust principles to AI agent operations -- never trust, always verify, always prove.

Read post
#security#zero-trust#ai-agents

2026-03-03

technical

PydanticAI + Intended: Governing AI Agent Tools Step by Step

Step-by-step tutorial with full code showing how to protect PydanticAI agent tools with Intended authority checks.

Read post
#tutorial#pydantic-ai#python

2026-03-03

industry

Why AI Governance Is Not Enough

Governance dashboards report controls, while runtime authority enforces them.

Read post
#industry#positioning

2026-03-02

technical

Building a Connector in Under 200 Lines

Build a connector that verifies tokens and emits audit metadata.

Read post
#technical#tutorial

2026-03-02

technical

OpenAI Agents SDK + Intended: Adding Authority to Every Tool Call

Tutorial walkthrough for wrapping OpenAI Agents SDK tools with Intended authority checks for production-grade governance.

Read post
#tutorial#openai#agents-sdk

2026-03-01

technical

The 8-Factor Risk Scoring Model

Transparent factor-level scoring for every authority decision.

Read post
#technical#product

2026-03-01

product

The CTO Guide to Evaluating AI Governance Solutions

A 10-point evaluation framework for CTOs comparing AI governance solutions -- what to look for, what to avoid, and what questions to ask.

Read post
#enterprise#evaluation#leadership

2026-02-28

technical

Why Fail-Open Authorization Is Dangerous for AI Agents

Case studies of fail-open disasters and why Intended chose fail-closed as the only safe default for AI agent authorization.

Read post
#security#fail-closed#architecture

2026-02-27

product

Multi-Party Approvals for High-Risk AI Actions

How Intended handles escalation workflows -- single approver, multi-party approval, delegation chains, and time-bounded authorization.

Read post
#escalation#approvals#enterprise

2026-02-26

technical

Securing MCP Servers: The Complete Guide

Comprehensive guide to MCP security -- what MCP lacks in authorization, why it matters, and how Intended fills the gap.

Read post
#mcp#security#guide

2026-02-25

industry

AI Governance for Financial Services

Industry-specific governance for financial services -- payment approvals, trading operations, regulatory compliance, and Intended's FinTech domain pack.

Read post
#industry#fintech#compliance

2026-02-24

industry

AI Governance for Healthcare

Industry-specific governance for healthcare -- patient data access, clinical decision support, HIPAA considerations, and the healthcare domain pack.

Read post
#industry#healthcare#hipaa

2026-02-23

industry

AI Governance for DevOps

Industry-specific governance for DevOps -- deployment gates, infrastructure changes, incident response automation, and the infrastructure domain pack.

Read post
#industry#devops#infrastructure

2026-02-22

technical

Building Custom Domain Packs for Intended

Developer guide for creating organization-specific governance models with Intended domain packs -- from intent mappings to risk models.

Read post
#technical#domain-packs#developer-guide

2026-02-21

technical

Terraform + Intended: Infrastructure as Authority

Manage Intended policies as code with Terraform -- full HCL examples for provisioning policies, domain packs, and escalation workflows.

Read post
#tutorial#terraform#infrastructure-as-code

2026-02-20

technical

GitHub Actions + Intended: CI/CD Pipeline Governance

Protect your CI/CD pipeline with Intended authority checks -- a complete GitHub Action walkthrough for governed deployments.

Read post
#tutorial#github-actions#ci-cd

2026-02-19

product

The Hidden Cost of Building AI Authorization In-House

Engineering hours, maintenance burden, compliance gaps, and why buying AI agent authorization beats building it in-house.

Read post
#enterprise#build-vs-buy#cost-analysis

2026-02-18

industry

What Enterprise Buyers Look for in AI Governance

Enterprise procurement teams evaluate AI governance platforms against a specific checklist. SOC 2, DPA, SLA, uptime guarantees, data residency, and support tiers are table stakes. Here is what you need to pass the procurement gauntlet.

Read post
#enterprise#procurement#compliance

2026-02-17

technical

Intended vs Building with OPA and Cedar

OPA and Cedar are excellent policy engines. But building an AI governance platform on top of them requires solving the other 80 percent yourself. Here is an honest comparison of what they give you and what is missing.

Read post
#technical#comparison#opa

2026-02-16

technical

Intent Classification Explained: How Natural Language Becomes Structured Authority

When an AI agent says it wants to do something, that request is natural language. Before governance can happen, that language must become structured data. Here is how Intended's intent compiler works, from raw text to classified intent.

Read post
#technical#intent-classification#compiler

2026-02-15

technical

Token Replay Protection: How It Works

Authority tokens are cryptographic proof that an AI agent was authorized to take an action. But what stops an agent from using the same token twice? Nonces, TTLs, and single-use enforcement. Here is how Intended prevents token replay attacks.

Read post
#security#tokens#replay-protection
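
The nonce-and-TTL mechanism this teaser names can be sketched in a few lines. This is an illustrative toy, not Intended's implementation; the class and method names are invented. A nonce is accepted exactly once within its TTL, so presenting the same token again is rejected as a replay.

```python
# Hypothetical single-use nonce store with TTL -- illustrates the replay-
# protection idea the post describes, not Intended's actual enforcement.
import time

class NonceStore:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.seen = {}  # nonce -> expiry timestamp

    def accept(self, nonce, now=None):
        """Return True the first time a nonce is presented within its TTL."""
        now = time.monotonic() if now is None else now
        # Drop expired nonces so the store does not grow without bound.
        self.seen = {n: exp for n, exp in self.seen.items() if exp > now}
        if nonce in self.seen:
            return False  # replay: token already used
        self.seen[nonce] = now + self.ttl
        return True

store = NonceStore(ttl_seconds=60)
first = store.accept("tok-abc123", now=0.0)   # first use: accepted
replay = store.accept("tok-abc123", now=1.0)  # same nonce again: rejected
```

The TTL bounds both the replay window and the memory the store needs, which is why single-use enforcement is typically paired with short token lifetimes.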

2026-02-14

industry

The Four Perimeters of AI Agent Security

Most AI security tools protect one perimeter. But AI agents operate across four distinct perimeters -- ingestion, evaluation, execution, and audit. If you only secure one, you have three gaps. Here is why you need all four.

Read post
#security#architecture#ai-agents

2026-02-13

industry

How to Convince Your CISO to Adopt AI Governance

You know your organization needs AI governance. Your CISO is skeptical. Here is the internal champion playbook -- what CISOs care about, how to frame the conversation, and how to build the case that gets budget approved.

Read post
#enterprise#ciso#adoption

2026-02-12

technical

Risk Scoring: Beyond Binary Allow/Deny

Binary allow/deny decisions are insufficient for AI agent governance. Real-world actions exist on a risk continuum. Here is how Intended calculates risk dynamically using eight dimensions of context.

Read post
#technical#risk-scoring#policy
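
The continuum the post describes can be sketched as a weighted multi-factor score mapped onto decision bands. The factor names, weights, and thresholds below are invented for illustration; the post only says that eight dimensions of context feed the score.

```python
# Hypothetical weighted risk scorer -- factor names and weights are invented
# for illustration; only the "eight dimensions" count comes from the post.

WEIGHTS = {
    "monetary_value": 0.25,
    "blast_radius": 0.20,
    "reversibility": 0.15,
    "data_sensitivity": 0.15,
    "environment": 0.10,
    "actor_history": 0.05,
    "velocity": 0.05,
    "time_of_day": 0.05,
}

def risk_score(factors):
    """Weighted sum of per-factor scores in [0, 1] -> overall risk in [0, 1]."""
    return sum(WEIGHTS[name] * min(max(v, 0.0), 1.0) for name, v in factors.items())

def decide(score, allow_below=0.3, deny_above=0.7):
    """Map a continuous risk score onto allow / escalate / deny bands."""
    if score < allow_below:
        return "allow"
    if score > deny_above:
        return "deny"
    return "escalate"  # mid-band actions go to human approval

low = {name: 0.1 for name in WEIGHTS}   # e.g. a sandbox test payment
high = {name: 0.9 for name in WEIGHTS}  # e.g. a large production payment
```

The escalate band is what makes this more than binary allow/deny: mid-risk actions are neither silently executed nor flatly blocked, but routed to a human.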

2026-02-11

product

The Compliance Engineer's Guide to Intended

For compliance engineers managing SOC 2, ISO 27001, or industry-specific frameworks, Intended provides automated evidence collection, chain verification, and auditor-ready exports. Here is how to map Intended to your compliance controls.

Read post
#compliance#soc-2#audit

2026-02-10

product

Air-Gapped Deployments: Running Intended On-Premise

Not every organization can send AI governance data to the cloud. Defense, financial services, healthcare, and critical infrastructure often require air-gapped or on-premise deployment. Here is how Intended supports every deployment model.

Read post
#enterprise#deployment#on-premise

2026-02-09

technical

Webhook Normalization: One Format for Every System

GitHub, Jira, Salesforce, and ServiceNow all send webhooks in different formats. Intended normalizes them into a single unified intent format so your policies work across every system without system-specific rules.

Read post
#technical#connectors#webhooks

2026-02-08

technical

Scaling to One Million Decisions per Month

When your AI agents are making a million governance decisions per month, every millisecond of latency and every bottleneck in the pipeline matters. Here is how Intended's architecture scales horizontally to handle enterprise-grade throughput.

Read post
#technical#architecture#scaling

2026-02-07

company

The SOC 2 Journey: What We Learned

We went through SOC 2 Type II preparation ourselves. Here is a transparent account of what was harder than expected, what was easier, and what we would do differently if we started over.

Read post
#company#compliance#soc-2

2026-02-06

company

Open-Source Strategy: Lessons from Databricks and HashiCorp

Deciding what to open-source and what to keep commercial is one of the hardest strategic decisions a platform company makes. Here is how we made that decision, and what we learned from Databricks, HashiCorp, and the broader industry.

Read post
#company#open-source#strategy

2026-02-05

education

AI Governance Glossary: 40+ Terms Defined

AI governance has its own vocabulary. Intent, authority token, domain pack, MIR, fail-closed, risk score, evidence bundle -- here are 40-plus terms defined clearly and precisely so everyone speaks the same language.

Read post
#education#glossary#reference

2026-02-04

industry

The Future of AI Agent Governance

The AI governance landscape is shifting fast. The EU AI Act is entering enforcement, autonomous agents are proliferating, and multi-agent systems are moving into production. Here is where the industry is headed and what it means for governance.

Read post
#industry#future#eu-ai-act

2026-02-03

technical

Intended Architecture Deep Dive

A full technical architecture post for CTOs and senior engineers. Every component of the Intended platform explained -- from the intent compiler to the hash-chained audit ledger, with data flows, scaling characteristics, and design rationale.

Read post
#technical#architecture#deep-dive

2026-02-02

industry

Incident Response for AI Agent Failures

When an AI agent does something wrong -- an unauthorized action, a misconfiguration, a data leak -- you need a playbook. Detection, containment, investigation, and remediation for AI agent incidents.

Read post
#operations#incident-response#security

2026-02-01

technical

Policy as Code with Intended

Version-controlled policies, Git-based review workflows, and CI/CD for governance. Here is how to treat your AI governance policies with the same rigor as your application code.

Read post
#technical#policy-as-code#git

2026-01-31

technical

The MIR Taxonomy: Design Principles

The Intended Intent Reference taxonomy classifies AI agent actions into 14 domains and 300-plus categories. Here are the design principles that guided its creation and why those principles matter for governance at scale.

Read post
#technical#mir#taxonomy

2026-01-30

industry

Why Every AI Framework Needs an Authority Layer

LangChain, PydanticAI, CrewAI, OpenAI Agents SDK -- none of them have built-in governance. They all provide tool calling without authority. Here is why every AI framework needs an authority layer, and why that layer should be external.

Read post
#industry#langchain#pydantic-ai

2026-01-29

technical

Connector SDK: Build Your Own Integration

Intended ships connectors for major platforms, but your organization has custom systems too. Here is a developer tutorial for building a custom connector from scratch using the Intended Connector SDK.

Read post
#technical#tutorial#connector-sdk

2026-01-28

product

Monitoring AI Agent Decisions in Real Time

Governance without observability is governance in the dark. Here is how to monitor AI agent decisions in real time -- metrics, dashboards, alerts, and the signals that matter most.

Read post
#operations#monitoring#observability

2026-01-27

industry

Data Residency and AI Governance

Where your governance data lives matters more than ever. GDPR, data sovereignty laws, and enterprise requirements demand control over data location. Here is how Intended handles multi-region deployment and data residency.

Read post
#compliance#data-residency#gdpr

2026-01-26

industry

The Business Case for AI Agent Governance

Building the ROI case for AI agent governance. Risk reduction, time savings, compliance value, and the cost of doing nothing. A framework for executive presentations.

Read post
#business#roi#executive

2026-01-25

product

Intended Product Update: March 2026

A roundup of what we shipped in early 2026. MCP Gateway for model context protocol governance, Python SDK, Kubernetes admission controller, new pricing tiers, and 15 new blog posts for the community.

Read post
#product#update#release

2026-01-24

technical

API Key Management Best Practices

API keys are the credentials your AI agents use to interact with Intended. Rotation, scoping, grace periods, and monitoring. Here are the best practices for managing API keys in a governance-critical system.

Read post
#security#api-keys#best-practices

2026-01-23

industry

The Death of Manual AI Review

Manual review of AI agent actions does not scale: at 50 agents making 500 decisions a day, you need a dedicated team just to keep up. Automated governance replaces manual review without sacrificing control.

Read post
#industry#automation#manual-review

2026-01-22

industry

AI Governance for SaaS Platforms

SaaS platforms deploying AI agents face unique governance challenges. Per-tenant policies, data isolation, usage metering, and cross-tenant security. Here is how to implement AI governance in a multi-tenant architecture.

Read post
#saas#multi-tenant#governance

2026-01-21

technical

Hash-Chained Audit Trails Explained

A technical deep-dive into hash-chained audit trails. SHA-256 chains, serializable transactions, tamper detection, and why traditional logging is insufficient for AI governance compliance.

Read post
#technical#audit#cryptography

2026-01-20

technical

Getting Started with Intended in 5 Minutes

The quickest possible path from zero to governed AI agent. Sign up, install the SDK, submit your first intent, and see the governance decision. Five minutes, no infrastructure required.

Read post
#tutorial#getting-started#quickstart
