2026-03-12
Building an AI-Native Company: How Intended Runs on Its Own Product
Intended Team · Founding Team
What AI-Native Actually Means
"AI-native" is one of those terms that gets thrown around without precision. For some companies, it means they use ChatGPT for writing marketing copy. For others, it means they have a chatbot on their website. Neither qualifies.
At Intended, AI-native means something specific: AI agents are primary operators of our business systems, and humans are overseers. Our agents do not assist with tasks -- they own them. They triage support tickets, process billing events, monitor platform health, manage incident response, handle compliance checks, generate audit reports, review security alerts, and coordinate deployments.
Humans set the policies, review escalations, and handle the edge cases that require judgment. But the default operator for most of our internal processes is an AI agent, not a person.
This is not a philosophical choice. It is a practical one. We are building the governance layer for AI agent operations. If we cannot run our own company on AI agents governed by our own product, we have no business selling it to anyone else.
The Eight Operations Agents
We run eight AI agents across our internal operations. Each agent has a defined domain, a set of authorized actions, and policies that govern its behavior. Every action every agent takes is evaluated by the same Intended Authority Engine that our customers use.
1. Support Triage Agent
This agent monitors incoming support tickets across email, chat, and our support portal. It classifies each ticket by urgency, domain, and likely resolution path. For common issues -- API key rotation, connector configuration, policy syntax errors -- it resolves the ticket directly, sending the customer a response with step-by-step instructions and relevant documentation links.
For complex issues, it escalates to the human support team with a pre-built context package: the customer's configuration, recent decision logs, and a suggested resolution path. The human picks up where the agent left off, with full context.
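The resolve-or-escalate split described above can be sketched in a few lines. This is a minimal illustration, not Intended's actual implementation: the ticket fields, the `AUTO_RESOLVABLE` category list, and the `triage` function are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical set of ticket categories the agent can resolve on its own.
AUTO_RESOLVABLE = {"api_key_rotation", "connector_config", "policy_syntax"}

@dataclass
class Ticket:
    id: str
    category: str
    urgency: str        # "low" | "normal" | "urgent"
    customer_tier: str  # "standard" | "enterprise"

@dataclass
class TriageDecision:
    action: str                       # "auto_resolve" or "escalate"
    context: dict = field(default_factory=dict)

def triage(ticket: Ticket) -> TriageDecision:
    """Resolve common issues directly; escalate everything else with context."""
    if ticket.category in AUTO_RESOLVABLE and ticket.urgency != "urgent":
        return TriageDecision(action="auto_resolve")
    # Escalations carry a pre-built context package for the human agent.
    return TriageDecision(
        action="escalate",
        context={
            "ticket_id": ticket.id,
            "category": ticket.category,
            "suggested_path": f"runbook/{ticket.category}",
        },
    )
```

The key design point is that escalation is a first-class outcome with its own payload, not a fallback error path.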
2. Billing Operations Agent
This agent handles the billing lifecycle. It processes subscription changes, calculates usage-based charges, generates invoices, handles payment failures, and manages dunning sequences. It integrates with Stripe for payment processing and with our own usage metering for decision counting.
When a customer approaches their decision limit, the billing agent sends a proactive notification. If a payment fails, it initiates a retry sequence with customer communication. If a subscription change affects pricing (upgrade, downgrade, cancellation), the agent calculates prorated amounts and updates the billing system.
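Proration is the part of this lifecycle that is easiest to get subtly wrong. A minimal sketch of the calculation, assuming day-level granularity (the function name and its signature are illustrative, not Intended's billing code):

```python
from decimal import Decimal, ROUND_HALF_UP

def prorated_charge(old_price: Decimal, new_price: Decimal,
                    days_remaining: int, days_in_cycle: int) -> Decimal:
    """Charge (positive) or credit (negative) the price difference
    for the unused fraction of the billing cycle."""
    fraction = Decimal(days_remaining) / Decimal(days_in_cycle)
    delta = (new_price - old_price) * fraction
    # Round to cents at the end, not per intermediate step.
    return delta.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

For example, upgrading from a $100 to a $160 plan with 15 of 30 days remaining yields a $30.00 prorated charge; the reverse downgrade yields a $30.00 credit. Using `Decimal` rather than floats is what keeps reconciliation checks clean.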
3. Platform Health Agent
This agent monitors the Intended platform itself. It watches latency metrics, error rates, throughput, and resource utilization across all services. It correlates anomalies across services to detect cascading issues before they become incidents.
When it detects a potential issue, it classifies the severity and either auto-remediates (scaling a service, restarting a container, clearing a cache) or escalates to the on-call engineer with a diagnostic package. The escalation includes the anomaly data, correlated events, and recommended actions.
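The auto-remediate-or-escalate decision can be reduced to a small lookup plus a severity gate. The thresholds, anomaly names, and remediation actions below are illustrative assumptions, not Intended's production values:

```python
# Known anomalies the agent may fix on its own (hypothetical mapping).
AUTO_REMEDIATIONS = {
    "high_latency": "scale_service",
    "memory_pressure": "restart_container",
    "stale_cache": "clear_cache",
}

def handle_anomaly(kind: str, severity: int) -> dict:
    """Auto-remediate low-severity, known anomalies; escalate the rest
    with a diagnostic package for the on-call engineer."""
    if severity <= 2 and kind in AUTO_REMEDIATIONS:
        return {"action": AUTO_REMEDIATIONS[kind]}
    return {
        "action": "escalate",
        "package": {
            "kind": kind,
            "severity": severity,
            "recommended": AUTO_REMEDIATIONS.get(kind, "investigate"),
        },
    }
```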
4. Incident Response Agent
When an incident is declared, this agent coordinates the response. It creates the incident channel, pages the relevant engineers, collects diagnostic data, and maintains the incident timeline. It monitors resolution progress and sends stakeholder updates at configurable intervals.
After resolution, it generates the incident report: timeline, root cause, impact assessment, and action items. The report is drafted by the agent and reviewed by the incident commander before publication.
5. Compliance Check Agent
This agent runs continuous compliance checks against our SOC 2 control framework. It verifies that access reviews are current, encryption is enforced, audit trails are continuous, and configuration drift has not introduced compliance gaps. It generates weekly compliance snapshots and flags any controls that are out of tolerance.
6. Audit Report Agent
This agent generates customer-facing audit reports. When a customer requests an audit export, the agent queries the audit chain, assembles the evidence bundles, verifies chain continuity, and produces a formatted report with cryptographic verification metadata. The report is ready for the customer's compliance team to review.
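Chain continuity verification is the cryptographic core of that export. One common construction, shown here as a sketch under the assumption that each audit record stores a hash of its predecessor (the record layout and function names are hypothetical):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Stable SHA-256 hash over a record's canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    """Each record must carry the hash of its predecessor;
    any insertion, deletion, or edit breaks the chain."""
    for prev, curr in zip(records, records[1:]):
        if curr["prev_hash"] != record_hash(prev):
            return False
    return True
```

Because the hash covers the predecessor's own `prev_hash` field, tampering with any record invalidates every record after it.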
7. Security Alert Agent
This agent monitors our security event stream. It classifies alerts by severity, correlates related events, and filters false positives. For genuine alerts, it initiates the appropriate response -- blocking a suspicious IP, rotating a compromised credential, or escalating to the security team with a threat assessment.
8. Deployment Coordinator Agent
This agent manages our release process. It monitors the CI pipeline, validates build artifacts, runs pre-deployment checks, coordinates staged rollouts, and monitors post-deployment metrics. If a deployment causes a metric regression, it initiates an automatic rollback.
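The post-deployment check amounts to comparing current metrics against a pre-deploy baseline with a tolerance band. A minimal sketch, assuming lower-is-better metrics such as error rate and p99 latency (the 5% tolerance and function name are assumptions):

```python
def check_regression(baseline: dict, current: dict,
                     tolerance: float = 0.05) -> str:
    """Return a rollback verdict if any post-deploy metric regresses
    beyond tolerance; metrics here are 'lower is better'."""
    for name, base in baseline.items():
        # A metric missing from the current snapshot is treated as unchanged.
        if current.get(name, base) > base * (1 + tolerance):
            return f"rollback: {name} regressed"
    return "proceed"
```

A real pipeline would also wait out a soak period and require several consecutive bad samples before rolling back, to avoid reacting to a single noisy reading.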
Self-Governance: The Recursive Loop
Here is the part that makes this interesting: every action taken by these eight agents is governed by Intended. Our agents submit intents through the same Authority Engine that our customers use. Their actions are classified against the MIR taxonomy, evaluated against our internal policies, risk-scored, and recorded in our audit chain.
This creates a recursive loop. The platform governs the agents that operate the platform. If the deployment coordinator tries to push a release that violates our change management policy, Intended blocks it. If the billing agent tries to process a refund above the automatic threshold, Intended escalates it. If the security agent tries to block an IP range that includes a customer's known addresses, Intended flags the conflict.
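The shape of that evaluation can be sketched as a three-way verdict. The intent fields, policy thresholds, and `evaluate` function below are illustrative stand-ins, not the Authority Engine's actual API:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    agent: str
    action: str
    risk_score: float  # 0.0 (routine) to 1.0 (high risk)

# Hypothetical internal policies: action -> max risk allowed autonomously.
POLICIES = {
    "issue_refund": 0.3,
    "deploy_release": 0.5,
    "block_ip_range": 0.4,
}

def evaluate(intent: Intent) -> str:
    """Allow, escalate, or block an agent's intent."""
    limit = POLICIES.get(intent.action)
    if limit is None:
        return "block"      # unrecognized actions are denied by default
    if intent.risk_score <= limit:
        return "allow"
    return "escalate"       # above threshold: a human reviews
```

Deny-by-default for unrecognized actions is what makes the loop safe: an agent cannot acquire new capabilities simply by attempting them.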
The recursive loop is not just elegant -- it is practical. It means we discover governance gaps in our own product before our customers do. When an agent hits an edge case that our policies do not handle well, we experience it first. We fix it, improve the product, and ship the improvement to everyone.
Results: What the Numbers Show
After six months of operating as an AI-native company, the metrics tell a clear story:
- 91% auto-resolution rate for support tickets. Nine out of ten customer issues are resolved by the support agent without human intervention. The remaining 9% are escalated with full context, so human resolution time is shorter too.
- 99.7% billing accuracy. The billing agent processes thousands of events per month with a 0.3% error rate, and those errors are caught by reconciliation checks before they affect customers.
- 4.2-minute mean time to detect for platform issues. The health agent catches anomalies faster than any human-monitored dashboard because it correlates signals across all services simultaneously.
- Zero compliance gaps in our last SOC 2 audit. The compliance agent maintains continuous verification, so there are no surprises at audit time.
These are not theoretical projections. They are production numbers from a real company running real operations on AI agents governed by Intended.
What We Learned
Running an AI-native company taught us things we could not have learned any other way.
First, policies need to be granular. Early on, we wrote broad policies -- "the support agent can respond to tickets." We quickly learned that we needed specificity: which ticket categories, which response types, which customer tiers, which escalation triggers. The granularity of the policy determines the quality of the governance.
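The difference between the broad and granular versions is easiest to see side by side. These policy shapes are hypothetical, written as plain Python data for illustration rather than in Intended's actual policy syntax:

```python
# Broad policy (too coarse): one switch for the whole domain.
broad = {"support_agent": {"respond_to_tickets": True}}

# Granular policy: scoped by category, response type, tier, and trigger.
granular = {
    "support_agent": {
        "respond_to_tickets": {
            "categories": ["api_key_rotation", "connector_config"],
            "response_types": ["documented_fix"],
            "customer_tiers": ["standard"],
            "escalate_if": ["enterprise_tier", "urgent", "repeat_issue"],
        }
    }
}

def permitted(policy: dict, category: str, tier: str) -> bool:
    """Check whether the agent may respond autonomously."""
    rule = policy["support_agent"]["respond_to_tickets"]
    if rule is True:
        return True  # the broad policy permits everything
    return category in rule["categories"] and tier in rule["customer_tiers"]
```

Under the broad policy every request passes; under the granular one, the same request for an enterprise customer is refused and falls through to escalation.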
Second, escalation is not failure. We initially designed our agents to resolve everything autonomously. That was wrong. The right design is agents that resolve what they can and escalate what they should. A well-calibrated escalation rate is a sign of good governance, not bad automation.
Third, audit data is a product feature. When we started reviewing our own audit chain, we realized how valuable the decision history is. It shows patterns: which actions are routine, which are risky, which policies trigger most often, where the edge cases are. That insight is as valuable as the governance itself.
Why This Matters for Customers
When you deploy Intended, you are not using a product built on theory. You are using a product that we run our own company on. Every feature, every policy pattern, every escalation workflow -- they all work because we use them every day.
We are the proof that AI-native operations work at scale when governed properly. And we are the proof that Intended is the governance layer that makes it possible.
See the Intended backoffice in action. Request a demo to see how our AI agents operate under authority -- and how yours can too.