2026-03-02

OpenAI Agents SDK + Intended: Adding Authority to Every Tool Call

Developer Relations · Developer Experience

The OpenAI Agents SDK provides a streamlined way to build AI agents that use tools, follow instructions, and hand off tasks between specialized agents. It handles orchestration, tool calling, and multi-agent workflows out of the box. What it does not handle is authorization.

When an agent in the OpenAI SDK calls a tool, the call executes immediately. There is no policy check, no risk assessment, and no audit trail. The agent has a tool, so it uses the tool. For development and testing, that is fine. For production, where agents handle real data and real operations, you need governance.

This tutorial shows you how to add Intended authority governance to an OpenAI Agents SDK application with minimal code changes.

What You Will Build

You will build a multi-agent customer operations system with three specialized agents: a lookup agent that retrieves customer information, an actions agent that can modify accounts and process refunds, and a router agent that delegates to the appropriate specialist. Intended will govern the actions agent, ensuring that account modifications and refunds are authorized against your policies.

Prerequisites

You need Node.js 20 or later, an OpenAI API key, and an Intended API key. Install the dependencies:

```bash
npm install @openai/agents @intended/sdk
```

Step 1: Define Your Tools

Start with the standard OpenAI Agents SDK tool definitions:

```typescript
import { tool } from "@openai/agents";
import { z } from "zod";

// `db` and `billing` stand in for your application's own data-access
// and billing modules.
const lookupCustomer = tool({
  name: "lookup_customer",
  description: "Look up customer information by ID or email",
  parameters: z.object({
    identifier: z.string().describe("Customer ID or email address"),
  }),
  execute: async ({ identifier }) => {
    const customer = await db.customers.findByIdOrEmail(identifier);
    return JSON.stringify(customer);
  },
});

const processRefund = tool({
  name: "process_refund",
  description: "Process a refund for a customer order",
  parameters: z.object({
    orderId: z.string(),
    amount: z.number(),
    reason: z.string(),
  }),
  execute: async ({ orderId, amount, reason }) => {
    const result = await billing.processRefund(orderId, amount, reason);
    return JSON.stringify(result);
  },
});

const updateAccount = tool({
  name: "update_account",
  description: "Update customer account details",
  parameters: z.object({
    customerId: z.string(),
    updates: z.record(z.string()),
  }),
  execute: async ({ customerId, updates }) => {
    const result = await db.customers.update(customerId, updates);
    return JSON.stringify(result);
  },
});
```

Step 2: Create Intended-Protected Tools

The Intended SDK provides a `wrapTool` function that takes an OpenAI Agents SDK tool and returns a governed version:

```typescript
import { IntendedGuard } from "@intended/sdk";

const guard = new IntendedGuard({
  apiKey: process.env.Intended_API_KEY!,
  orgId: process.env.Intended_ORG_ID!,
  agentId: "customer-ops-agent",
  domainPack: "saas-ops",
});

const protectedLookup = guard.wrapTool(lookupCustomer, {
  intent: "data.customer.read",
});

const protectedRefund = guard.wrapTool(processRefund, {
  intent: "financial.refund.process",
  riskParams: (args) => ({
    amount: args.amount,
    currency: "USD",
  }),
});

const protectedUpdate = guard.wrapTool(updateAccount, {
  intent: "data.customer.update",
  riskParams: (args) => ({
    fields: Object.keys(args.updates),
    customerId: args.customerId,
  }),
});
```

The `wrapTool` function preserves the tool's name, description, and parameter schema. The `intent` parameter maps the tool to an Intended intent category for policy evaluation. The optional `riskParams` function extracts risk-relevant data from the tool arguments for contextual risk scoring.

When the agent calls a wrapped tool, the wrapper submits the intent to Intended before executing the original function. If the decision is "allow," the original function executes normally. If the decision is "deny," the wrapper returns a structured error message that the agent can understand. If the decision is "escalate," the wrapper returns a message indicating that human approval is required.
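The wrapper pattern described above can be sketched in plain TypeScript. Everything below is illustrative: the `Decision` type, the `checkIntent` callback, and the message shapes are assumptions for the sake of the sketch, not the actual `@intended/sdk` API.

```typescript
// Illustrative sketch of a governance wrapper. The Decision shape and
// checkIntent callback are hypothetical stand-ins, not the real SDK API.
type Decision = { outcome: "allow" | "deny" | "escalate"; reason?: string };

type ToolFn<A, R> = (args: A) => Promise<R>;

function wrapWithGuard<A, R>(
  execute: ToolFn<A, R>,
  checkIntent: (args: A) => Promise<Decision>,
): ToolFn<A, R | string> {
  return async (args: A) => {
    // Submit the intent for evaluation before running the tool.
    const decision = await checkIntent(args);
    switch (decision.outcome) {
      case "allow":
        // Policy passed: run the original tool unchanged.
        return execute(args);
      case "deny":
        // Return a structured message the agent can relay to the user.
        return JSON.stringify({ error: "denied", reason: decision.reason });
      case "escalate":
        // Signal that a human must approve before the action proceeds.
        return JSON.stringify({
          status: "pending_approval",
          reason: decision.reason,
        });
    }
  };
}
```

Because the wrapper returns a structured string on deny and escalate, the agent sees a normal tool result it can reason about, rather than an exception that would abort the run.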

Step 3: Build the Multi-Agent System

Now create the agents using the protected tools:

```typescript
import { Agent, run } from "@openai/agents";

const lookupAgent = new Agent({
  name: "Customer Lookup",
  instructions:
    "You look up customer information. Use the lookup tool to find " +
    "customers by ID or email. Return the information clearly.",
  tools: [protectedLookup],
});

const actionsAgent = new Agent({
  name: "Customer Actions",
  instructions:
    "You process customer account changes and refunds. Always confirm " +
    "the details before taking action. If an action is blocked or " +
    "escalated, explain why to the user.",
  tools: [protectedRefund, protectedUpdate],
});

const routerAgent = new Agent({
  name: "Customer Ops Router",
  instructions:
    "You are a customer operations assistant. Route lookup requests " +
    "to the lookup agent and action requests to the actions agent.",
  handoffs: [lookupAgent, actionsAgent],
});
```

Step 4: Configure Policies

Set up policies that reflect your operational rules. You can do this through the Intended console UI or programmatically:

```typescript
import { PolicyClient } from "@intended/sdk";

const policies = new PolicyClient({
  apiKey: process.env.Intended_API_KEY!,
  orgId: process.env.Intended_ORG_ID!,
});

// Customer lookups are always allowed
await policies.upsert({
  id: "customer-read-allow",
  intent: "data.customer.read",
  decision: "allow",
  description: "Allow customer data lookups",
});

// Refunds under $100 auto-approve, $100-$1000 escalate,
// over $1000 deny (requires manual processing)
await policies.upsert({
  id: "refund-tiered",
  intent: "financial.refund.process",
  rules: [
    { condition: "params.amount < 100", decision: "allow" },
    { condition: "params.amount <= 1000", decision: "escalate" },
    { condition: "params.amount > 1000", decision: "deny",
      reason: "Refunds over $1000 require manual processing" },
  ],
});

// Account updates to sensitive fields require approval
await policies.upsert({
  id: "account-update-sensitive",
  intent: "data.customer.update",
  rules: [
    {
      condition: "params.fields intersects ['email', 'billing_address', 'payment_method']",
      decision: "escalate",
      reason: "Changes to sensitive account fields require approval",
    },
    { condition: "true", decision: "allow" },
  ],
});
```
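The tiered refund policy reads top to bottom and the first matching condition wins, which is why the `allow` rule for small amounts must come before the broader `escalate` rule. A first-match evaluator can be sketched as follows; the rule shape mirrors the policy objects above, but the evaluation logic is an assumption about how Intended resolves rules, with predicates standing in for its condition language.

```typescript
// Sketch of first-match policy evaluation. The real Intended engine
// evaluates a string condition language; plain predicates stand in here.
type Outcome = "allow" | "deny" | "escalate";

interface Rule {
  condition: (params: Record<string, unknown>) => boolean;
  decision: Outcome;
  reason?: string;
}

function evaluate(
  rules: Rule[],
  params: Record<string, unknown>,
): { decision: Outcome; reason?: string } {
  for (const rule of rules) {
    // First rule whose condition matches determines the decision.
    if (rule.condition(params)) {
      return { decision: rule.decision, reason: rule.reason };
    }
  }
  // No rule matched: fail closed rather than open.
  return { decision: "deny", reason: "no matching rule" };
}

// Mirror of the tiered refund policy from Step 4.
const refundRules: Rule[] = [
  { condition: (p) => (p.amount as number) < 100, decision: "allow" },
  { condition: (p) => (p.amount as number) <= 1000, decision: "escalate" },
  {
    condition: () => true,
    decision: "deny",
    reason: "Refunds over $1000 require manual processing",
  },
];
```

With this ordering, a $75 refund hits the first rule and auto-approves, a $500 refund falls through to the escalation tier, and anything larger is denied.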

Step 5: Run the Agent

```typescript
async function handleCustomerRequest(userMessage: string) {
  const result = await run(routerAgent, userMessage);
  return result.finalOutput;
}

// Example interactions
await handleCustomerRequest(
  "Look up the account for customer cust_12345"
);
// -> Allowed. Returns customer info.

await handleCustomerRequest(
  "Process a $75 refund for order ord_98765, reason: item damaged"
);
// -> Allowed. Under $100 threshold. Refund processed.

await handleCustomerRequest(
  "Process a $500 refund for order ord_44321, reason: service outage"
);
// -> Escalated. Between $100 and $1000. Held for human approval.

await handleCustomerRequest(
  "Update the billing email for cust_12345 to newemail@example.com"
);
// -> Escalated. Email is a sensitive field. Held for human approval.
```

Handling Escalations

When a tool call is escalated, the agent receives a structured response indicating the escalation. You can configure how your application handles this:

```typescript
const guard = new IntendedGuard({
  apiKey: process.env.Intended_API_KEY!,
  orgId: process.env.Intended_ORG_ID!,
  agentId: "customer-ops-agent",
  domainPack: "saas-ops",
  onEscalation: async (escalation) => {
    // Send to Slack for human review
    await slack.postMessage({
      channel: "#customer-ops-approvals",
      text: `Action requires approval: ${escalation.intent}\n` +
            `Agent: ${escalation.agentId}\n` +
            `Details: ${JSON.stringify(escalation.params)}\n` +
            `Risk score: ${escalation.riskScore}\n` +
            `Approve: ${escalation.approveUrl}\n` +
            `Deny: ${escalation.denyUrl}`,
    });
  },
});
```

When a human clicks the approve or deny link, Intended records the decision, and the action either proceeds or is permanently blocked. The full chain of events, from the original intent through the escalation and the human decision to the final outcome, is recorded in the audit trail.

The Audit Trail

Every tool call, whether allowed, denied, or escalated, is recorded. You can query the audit trail through the SDK:

```typescript
import { AuditClient } from "@intended/sdk";

const audit = new AuditClient({
  apiKey: process.env.Intended_API_KEY!,
  orgId: process.env.Intended_ORG_ID!,
});

const decisions = await audit.query({
  agentId: "customer-ops-agent",
  dateRange: { from: "2026-03-01", to: "2026-03-02" },
  intentPattern: "financial.refund.*",
});

for (const decision of decisions) {
  console.log(
    `${decision.timestamp} | ${decision.intent} | ` +
    `${decision.decision} | risk: ${decision.compositeRisk}`
  );
}
```

Each record includes the full evaluation context: the intent classification, the risk scores across all eight dimensions, the policies that were evaluated, and a cryptographic signature that can be verified independently.
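The signature scheme itself is not documented in this tutorial, but independent verification of a signed record generally means recomputing a canonical digest of the record and comparing it against the stored signature. The sketch below uses Node's built-in `crypto` module and assumes an HMAC-style scheme with illustrative field names; the actual Intended audit schema and signing algorithm may differ.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical record shape: field names are illustrative, not the
// actual Intended audit schema.
interface AuditRecord {
  timestamp: string;
  intent: string;
  decision: string;
  compositeRisk: number;
  signature: string; // hex-encoded HMAC over the canonical payload
}

function verifyRecord(record: AuditRecord, secret: string): boolean {
  const { signature, ...payload } = record;
  // Canonicalize with sorted keys so verification is order-independent.
  const canonical = JSON.stringify(payload, Object.keys(payload).sort());
  const expected = createHmac("sha256", secret).update(canonical).digest();
  const actual = Buffer.from(signature, "hex");
  // Constant-time comparison avoids leaking signature bytes via timing.
  return actual.length === expected.length && timingSafeEqual(actual, expected);
}
```

Canonicalizing before hashing matters: two JSON serializations of the same record can differ in key order, so verification must not depend on how the record was serialized when it was signed.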

What You Gain

Your OpenAI Agents SDK application now has production-grade governance. Customer lookups flow through without friction. Low-risk refunds auto-approve. High-risk refunds and sensitive account changes require human approval. Every decision is recorded and cryptographically signed.

The integration required wrapping your tools with `guard.wrapTool` and configuring policies. Your agent logic, your multi-agent routing, and your tool implementations are unchanged. The governance layer is transparent to the agents and visible to your security and compliance teams.

Your agents are making decisions that affect real customers and real money. Now every one of those decisions is authorized, recorded, and provable.