2026-03-21
LangChain + Intended: Adding Governance to Your AI Agent in 10 Minutes
Developer Relations · Developer Experience
LangChain is one of the most popular frameworks for building AI agents. It makes it easy to give an LLM access to tools, databases, APIs, and other systems. But there is a problem that LangChain does not solve: authorization.
When a LangChain agent decides to call a tool, nothing verifies whether that call should be allowed. The agent has access to the tool, so it can call the tool. There is no policy evaluation, no risk assessment, and no audit trail beyond whatever logging you have manually wired up.
This tutorial shows you how to add full authority governance to any LangChain agent in under 10 minutes using the Intended Python SDK. By the end, every tool call your agent makes will be authorized against explicit policies, risk-scored, and recorded in an immutable audit trail with cryptographic proof.
What You Will Build
You will take a standard LangChain agent with tools and wrap each tool with an Intended authority guard. The guard intercepts every tool call, submits the intent to Intended for evaluation, and either allows the call to proceed, blocks it, or escalates it for human review, all based on the policies you define.
The agent's behavior does not change. It still reasons, plans, and selects tools the same way. But now every action is governed.
Prerequisites
You need an Intended account with an API key. Sign up at meritt.run if you do not have one. You also need Python 3.10 or later and a working LangChain installation.
Step 1: Install the Intended Python SDK
```shell
pip install meritt-sdk
```

The SDK is lightweight with minimal dependencies. It communicates with the Intended API over HTTPS and handles token verification locally using the public key from your account.
Step 2: Create an IntendedToolGuard
The ToolGuard is the central integration point. It manages the connection to Intended, caches policies locally for low-latency evaluation, and provides the wrapper function for your tools.
```python
from meritt_sdk import IntendedToolGuard

guard = IntendedToolGuard(
    api_key="mrt_your_api_key_here",
    org_id="org_your_org_id",
    agent_id="langchain-procurement-agent",
    domain_pack="saas-ops",
)
```

The `agent_id` identifies this specific agent in your audit trail. The `domain_pack` tells Intended which policy set to evaluate against. Intended ships with domain packs for common verticals: saas-ops, fintech, healthcare, infrastructure, and others. You can also create custom packs.
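The guard's local policy caching is an internal detail of the SDK, but the general idea of a time-bounded cache is worth seeing. The sketch below is illustrative only; the `PolicyCache` class, its TTL, and the fetch callable are assumptions, not part of the real SDK:

```python
import time

class PolicyCache:
    """Illustrative TTL cache: serve policies from memory, refetch when stale."""

    def __init__(self, fetch_fn, ttl_seconds=60.0):
        self._fetch = fetch_fn      # callable that pulls policies over HTTPS
        self._ttl = ttl_seconds
        self._policies = None
        self._fetched_at = 0.0

    def get(self):
        # Refetch only when there is no cached copy or it has expired.
        if self._policies is None or time.time() - self._fetched_at > self._ttl:
            self._policies = self._fetch()
            self._fetched_at = time.time()
        return self._policies
```

With a cache like this, most tool calls evaluate against policies already in memory, and only a periodic refresh touches the network.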
Step 3: Wrap Your Tools with guard.protect()
Here is a standard LangChain agent with three tools: one to search a vendor database, one to create a purchase order, and one to send a notification.
```python
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate

@tool
def search_vendors(query: str) -> str:
    """Search the approved vendor database."""
    # Your vendor search logic here
    return f"Found 3 vendors matching '{query}'"

@tool
def create_purchase_order(vendor_id: str, amount: float, description: str) -> str:
    """Create a purchase order for a vendor."""
    # Your PO creation logic here
    return f"PO created: {vendor_id} for ${amount}"

@tool
def send_notification(recipient: str, message: str) -> str:
    """Send a notification to a team member."""
    # Your notification logic here
    return f"Notification sent to {recipient}"
```

Without Intended, the agent can call any of these tools with any arguments at any time. Now wrap them:
```python
protected_tools = [
    guard.protect(search_vendors, risk_category="data:read"),
    guard.protect(create_purchase_order, risk_category="financial:write"),
    guard.protect(send_notification, risk_category="comms:write"),
]
```

The `risk_category` parameter tells Intended how to classify the tool call within its risk scoring model. A data read operation is evaluated differently from a financial write operation. The policy engine uses this classification, along with the actual arguments, to compute risk scores across eight dimensions.
Step 4: Build and Run Your Agent
Now create the agent using the protected tools instead of the originals:
```python
llm = ChatOpenAI(model="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a procurement assistant. Help users find vendors and create purchase orders."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_openai_tools_agent(llm, protected_tools, prompt)
executor = AgentExecutor(agent=agent, tools=protected_tools, verbose=True)

result = executor.invoke({
    "input": "Find cloud hosting vendors and create a PO for $15,000 with CloudCorp"
})
```

That is it. Every tool call the agent makes now goes through Intended authority evaluation. The agent does not know or care that governance is happening. It calls tools the same way it always did. The guard handles everything transparently.
What Happens Under the Hood
When the agent calls create_purchase_order with vendor_id="cloudcorp" and amount=15000, the guard intercepts the call and performs the following sequence:
First, it constructs an intent object containing the tool name, arguments, agent identity, and risk category. This intent is submitted to Intended's evaluation endpoint.
Second, Intended classifies the intent using its canonical intent taxonomy. A purchase order creation maps to the financial.procurement.create intent type, which activates the relevant policy rules from the saas-ops domain pack.
Third, the policy engine evaluates the intent against all applicable rules. For a $15,000 purchase order, this might include rules like: agent spending authority must cover the amount, vendor must be on the approved list, single-transaction limit must not be exceeded, and daily cumulative spend must be within budget.
Fourth, Intended computes risk scores across eight dimensions: financial impact, data sensitivity, operational risk, compliance exposure, reversibility, blast radius, velocity, and privilege level. These scores feed into the final decision.
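Intended's actual scoring and aggregation logic is not public, but the shape of the final step can be illustrated. The sketch below maps the eight per-dimension scores to a decision with a naive worst-dimension rule; the `decide` function, its thresholds, and the max() aggregation are all made up for illustration:

```python
RISK_DIMENSIONS = [
    "financial_impact", "data_sensitivity", "operational_risk",
    "compliance_exposure", "reversibility", "blast_radius",
    "velocity", "privilege_level",
]

def decide(scores: dict, deny_above: float = 0.9, escalate_above: float = 0.75) -> str:
    """Map eight per-dimension risk scores to allow/escalate/deny.

    The thresholds and worst-dimension aggregation are illustrative only;
    the real policy engine combines scores with the applicable policy rules.
    """
    worst = max(scores[d] for d in RISK_DIMENSIONS)
    if worst > deny_above:
        return "deny"
    if worst > escalate_above:
        return "escalate"
    return "allow"
```

In a real evaluation, the per-dimension scores also interact with policy rules, so a single high score does not mechanically determine the outcome.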
Fifth, Intended returns an Authority Decision Token containing the decision (allow, deny, or escalate), the risk scores, the policies that were evaluated, any conditions attached to the approval, and a cryptographic signature over the entire payload.
If the decision is allow, the guard passes the call through to the original tool function. If the decision is deny, the guard returns an error message to the agent explaining why the action was blocked. If the decision is escalate, the guard can either block the call and notify a human reviewer, or pause execution until approval is received, depending on your configuration.
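The branching behavior above can be sketched as a plain wrapper. Here `evaluate` is a stand-in for the SDK's call to the Intended API, and the whole `protect` function is a conceptual sketch, not the real `guard.protect`:

```python
from functools import wraps

def protect(tool_fn, evaluate):
    """Illustrative guard: check the authority decision before running the tool.

    `evaluate(tool_name, kwargs)` stands in for the real API call and returns
    a dict with a "decision" key: "allow", "deny", or "escalate".
    """
    @wraps(tool_fn)
    def guarded(**kwargs):
        verdict = evaluate(tool_fn.__name__, kwargs)
        if verdict["decision"] == "allow":
            return tool_fn(**kwargs)  # pass through to the original tool
        if verdict["decision"] == "deny":
            return f"Blocked by policy: {verdict.get('reason', 'not permitted')}"
        # "escalate": block for now and surface the pending review to the agent
        return "Escalated: awaiting human approval"
    return guarded
```

Returning a readable string on deny, rather than raising an exception, lets the LLM see why the call was blocked and adjust its plan.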
This entire sequence completes in under 50 milliseconds.
What the Audit Trail Looks Like
Every decision is recorded in your Intended audit ledger. Here is an example of what a single decision record contains:
```json
{
  "decision_id": "dec_8f3a2b1c",
  "timestamp": "2026-03-21T14:32:07.103Z",
  "agent_id": "langchain-procurement-agent",
  "intent": "financial.procurement.create",
  "action": "create_purchase_order",
  "arguments": {
    "vendor_id": "cloudcorp",
    "amount": 15000,
    "description": "Cloud hosting services"
  },
  "decision": "allow",
  "risk_scores": {
    "financial_impact": 0.62,
    "data_sensitivity": 0.1,
    "operational_risk": 0.25,
    "compliance_exposure": 0.3,
    "reversibility": 0.7,
    "blast_radius": 0.2,
    "velocity": 0.15,
    "privilege_level": 0.5
  },
  "policies_evaluated": ["saas-ops.financial.spending-limits", "saas-ops.financial.vendor-approval"],
  "conditions": ["po_expires_in_72h", "requires_manager_countersign_above_10000"],
  "token_signature": "eyJhbGciOiJFZDI1NTE5..."
}
```

This record is immutable and cryptographically signed. You can query it through the Intended API, export it for compliance reporting, or stream it to your SIEM. When an auditor asks what your procurement agent did last quarter, you have a complete, verifiable answer.
Going Further
This tutorial covers the basic integration pattern. In production, you will likely want to:
- Define custom policies for your organization's specific rules
- Set up escalation webhooks that route to Slack, PagerDuty, or your ticketing system
- Configure different domain packs for different agents
- Use the audit API to build compliance dashboards
- Set up alerts for high-risk decisions or policy violations
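As a taste of the escalation-webhook pattern, the sketch below turns a decision record into a Slack-style webhook payload. The field names mirror the audit record shown earlier; the message format is illustrative, and the actual HTTP POST to the webhook URL is left out:

```python
def escalation_message(record: dict) -> dict:
    """Build a Slack-compatible webhook payload from a decision record."""
    # Surface the single highest-scoring risk dimension for the reviewer.
    worst_dim, worst_score = max(record["risk_scores"].items(), key=lambda kv: kv[1])
    text = (
        f":warning: Agent `{record['agent_id']}` escalated `{record['action']}` "
        f"(intent {record['intent']}). "
        f"Highest risk: {worst_dim} = {worst_score:.2f}. "
        f"Decision ID: {record['decision_id']}"
    )
    return {"text": text}
```

You would POST this JSON to a Slack incoming-webhook URL with any HTTP client, or adapt the same shape for PagerDuty or a ticketing system.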
The Intended documentation covers all of these scenarios with working examples.
Try It Free
Intended's free tier includes 1,000 authority decisions per month, enough to build and test your integration. Sign up at meritt.run, grab your API key, and add governance to your LangChain agent in the time it takes to finish your coffee.
Your agents are already making decisions. Now you can prove they are making the right ones.