2026-03-06
Why We Open-Sourced Our Intent Taxonomy
Intended Team · Founding Team
The Decision
When we built the MIR taxonomy -- the classification system for AI agent actions -- we had a choice. Keep it proprietary, making it a competitive moat that locks customers into our platform. Or open-source it, making it a standard that anyone can adopt, contribute to, and build on.
We chose open-source. Not because we are altruistic. Because it is the better business strategy.
Lessons from the Companies That Built Categories
Three companies shaped our thinking. Each built a market-defining category by open-sourcing a foundational technology and building a commercial platform on top of it.
Databricks and Apache Spark
Databricks did not invent big data processing. But its founders made it accessible by creating Spark at UC Berkeley's AMPLab and open-sourcing it. Spark became the standard for distributed data processing because anyone could use it, and because it was genuinely better than the alternatives. The open-source project attracted thousands of contributors, built a massive ecosystem, and established the mental model that the industry uses to think about data processing.
Databricks built their commercial platform -- the Databricks Lakehouse -- on top of Spark. They did not compete with Spark. They competed with the operational complexity of running Spark yourself. The commercial platform adds managed infrastructure, enterprise security, governance, and collaboration features that Spark alone does not provide.
Result: Spark is everywhere. Databricks is a $43 billion company.
Confluent and Apache Kafka
Confluent followed the same playbook with Kafka. The open-source project defined event streaming. The commercial platform -- Confluent Cloud and Platform -- handles the operational burden of running Kafka in production: multi-region replication, schema governance, exactly-once processing, and enterprise security.
Confluent did not need to convince enterprises that event streaming was valuable. Kafka already proved that. They needed to convince enterprises that running Kafka themselves was harder than paying Confluent to do it. The open-source project created the demand. The commercial platform captured it.
Result: Kafka processes trillions of events per day across the industry. Confluent is a public company.
HashiCorp and Terraform
HashiCorp open-sourced Terraform, and it became the standard for infrastructure as code. The commercial platform -- Terraform Cloud and Enterprise -- adds collaboration, policy enforcement (via Sentinel), state management, and governance features that the open-source Terraform CLI alone does not provide.
The open-source project established the workflow. The commercial platform made it enterprise-ready.
Result: Terraform manages infrastructure at most large enterprises. HashiCorp was acquired by IBM for $6.4 billion.
The Pattern
The pattern across all three companies is the same:
- Open-source the foundational layer that defines the category
- Let the open-source project establish the standard and build the ecosystem
- Build a commercial platform that adds enterprise-grade operations, governance, and management on top of the open-source foundation
- The open-source project creates demand. The commercial platform captures it.
The key insight is that the foundational layer is more valuable as a standard than as a proprietary advantage. A proprietary Spark would have been just another data processing tool. An open Spark became the industry standard, and Databricks became the company that enterprises trust to run it.
What We Open-Sourced
MIR Taxonomy (@intended/open-intent-layer)
The complete classification system for AI agent actions. Fourteen domains, 80+ categories, structured metadata. Apache 2.0 licensed. Anyone can install it, use it, extend it, and contribute to it.
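To make the idea concrete, here is a minimal sketch of what classifying an agent action against a MIR-style vocabulary could look like. The type shape, field names, and category strings are illustrative assumptions, not the published @intended/open-intent-layer API.

```typescript
// Hypothetical sketch of a MIR-style taxonomy entry and lookup.
// Names and fields are illustrative; the real package's API may differ.

interface MirCategory {
  domain: string;      // one of the top-level domains, e.g. "data-access"
  category: string;    // a category within that domain, e.g. "bulk-export"
  metadata: {
    riskHint?: "low" | "medium" | "high";
    description: string;
  };
}

// Match an action label of the form "<domain>.<category>" against the taxonomy.
function classify(action: string, taxonomy: MirCategory[]): MirCategory | undefined {
  return taxonomy.find((c) => action.startsWith(`${c.domain}.${c.category}`));
}

const taxonomy: MirCategory[] = [
  { domain: "data-access", category: "read-record",
    metadata: { riskHint: "low", description: "Read a single record" } },
  { domain: "data-access", category: "bulk-export",
    metadata: { riskHint: "high", description: "Export many records" } },
];

const match = classify("data-access.bulk-export", taxonomy);
console.log(match?.metadata.riskHint); // "high"
```

The point of a shared vocabulary is exactly this: once actions carry structured domain/category labels, any tool downstream (logging, alerting, governance) can reason about them without bespoke parsing.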
This is our Spark. It is the foundational layer that defines how the industry classifies AI agent actions. If MIR becomes the standard vocabulary, every team that classifies AI agent actions is a step closer to needing the governance platform that evaluates them.
SDKs and CLI
The TypeScript, Python, and Go SDKs for interacting with Intended are open-source. The CLI is open-source. These are the developer tools that make it easy to integrate Intended into any environment. Open-sourcing them ensures that developers can inspect the code, contribute improvements, and build confidence in the platform.
MCP Gateway
The gateway that wraps MCP tool calls with Intended governance is open-source. It is the most common entry point for new users -- install the gateway, wrap your tools, and every call is governed. Making it open-source lowers the adoption barrier to zero.
Verification SDK (@intended/verify)
The SDK for verifying authority tokens, evidence bundles, and audit chain segments is open-source. This is critical for trust -- if the verification tools were proprietary, customers would have to trust Intended's claims about cryptographic proof. With open-source verification, they can inspect the code and verify independently.
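As a rough illustration of why independent verification matters, here is a self-contained sketch of HMAC-based evidence verification using Node's built-in crypto module. It shows the shape of the check, not @intended/verify itself; the serialization and key handling are assumptions.

```typescript
// Conceptual sketch of evidence-bundle verification with HMAC-SHA256.
// The actual @intended/verify bundle format and APIs are not shown here.
import { createHmac, timingSafeEqual } from "node:crypto";

function signBundle(bundle: object, key: string): string {
  return createHmac("sha256", key).update(JSON.stringify(bundle)).digest("hex");
}

function verifyBundle(bundle: object, signature: string, key: string): boolean {
  const expected = signBundle(bundle, key);
  // Constant-time comparison avoids leaking information via timing.
  return expected.length === signature.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}

const bundle = { intent: "data-access.bulk-export", decision: "escalate" };
const sig = signBundle(bundle, "shared-secret");
console.log(verifyBundle(bundle, sig, "shared-secret"));                    // true
console.log(verifyBundle({ ...bundle, decision: "allow" }, sig, "shared-secret")); // false
```

Because the verification logic is open, a customer can run this kind of check in their own environment and confirm a bundle was not altered, without trusting the platform's word for it.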
What Is Commercial
Authority Engine
The core decision engine that evaluates intents against policies, scores risk, and issues authority tokens. This is the computational heart of Intended -- the part that turns classification into governance. It requires significant infrastructure (low-latency evaluation, key management, high-availability deployment) and represents the bulk of our engineering investment.
Domain Intelligence Packs
The 14 pre-built governance models for enterprise domains. Each pack encodes domain-specific intents, risk rules, escalation policies, and compliance mappings. Building a Domain Intelligence Pack requires deep domain expertise and continuous maintenance as enterprise practices evolve. The packs are the "knowledge" layer that makes Intended's governance domain-aware.
Audit Chain
The hash-chained audit ledger with RS256-signed tokens and HMAC evidence bundles. The chain is stored, managed, and served by the Intended platform. Customers can export segments and verify them independently (using the open-source Verification SDK), but the chain itself is a managed service.
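The hash-chaining idea itself is simple enough to sketch: each entry commits to the hash of the previous one, so tampering with any record breaks every later link. The record format below is invented for illustration and is not the platform's actual schema.

```typescript
// Illustrative hash chain: each entry's hash covers its payload plus the
// previous entry's hash, so edits anywhere invalidate the rest of the chain.
import { createHash } from "node:crypto";

interface ChainEntry {
  payload: string;   // the audit record (decision, intent, timestamp, ...)
  prevHash: string;  // hash of the previous entry (zeros for the first entry)
  hash: string;      // SHA-256 over prevHash + payload
}

const GENESIS = "0".repeat(64);

function entryHash(payload: string, prevHash: string): string {
  return createHash("sha256").update(prevHash + payload).digest("hex");
}

function append(chain: ChainEntry[], payload: string): ChainEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  return [...chain, { payload, prevHash, hash: entryHash(payload, prevHash) }];
}

function verifyChain(chain: ChainEntry[]): boolean {
  return chain.every((e, i) => {
    const prev = i === 0 ? GENESIS : chain[i - 1].hash;
    return e.prevHash === prev && e.hash === entryHash(e.payload, e.prevHash);
  });
}

let chain: ChainEntry[] = [];
chain = append(chain, '{"intent":"payments.refund","decision":"allow"}');
chain = append(chain, '{"intent":"data-access.bulk-export","decision":"deny"}');
console.log(verifyChain(chain)); // true
chain[0].payload = '{"intent":"payments.refund","decision":"deny"}'; // tamper
console.log(verifyChain(chain)); // false
```

This is why an exported chain segment can be checked independently: the verifier only needs the records and the hash function, not access to the platform.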
Operator Console
The web console for managing policies, reviewing escalations, exploring decisions, and monitoring AI agent operations. The console is the management interface for Intended -- the place where operators define how their AI agents are governed.
The Flywheel Economics
Here is how the economics work:
A developer finds MIR while searching for a way to classify AI agent actions. They install @intended/open-intent-layer and start using it in their logging and monitoring. No Intended account required.
Their team adopts MIR as the standard vocabulary. They build dashboards that show AI agent activity by MIR domain and category. They write alerts based on MIR classifications. The taxonomy becomes part of their infrastructure.
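The monitoring step described above can be sketched in a few lines: tag each agent event with MIR-style labels, then roll events up by domain for a dashboard or alert threshold. The label strings and event shape here are illustrative, not taken from the published taxonomy.

```typescript
// Sketch of the monitoring use case: aggregate agent events by MIR-style
// domain labels. Event fields and domain names are illustrative assumptions.

interface AgentEvent {
  agent: string;
  mirDomain: string;
  mirCategory: string;
}

function countByDomain(events: AgentEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    counts.set(e.mirDomain, (counts.get(e.mirDomain) ?? 0) + 1);
  }
  return counts;
}

const events: AgentEvent[] = [
  { agent: "billing-bot", mirDomain: "payments", mirCategory: "refund" },
  { agent: "support-bot", mirDomain: "data-access", mirCategory: "read-record" },
  { agent: "billing-bot", mirDomain: "payments", mirCategory: "invoice" },
];

console.log(countByDomain(events).get("payments")); // 2
```

Once activity is visible per domain, an alert like "page on-call if payments events spike" is a one-line threshold check, which is exactly how the taxonomy becomes load-bearing infrastructure.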
Eventually, someone asks: "We know what our AI agents are doing. Can we control whether they should be doing it?" That question is the bridge from classification to governance. The answer is Intended -- the Authority Engine, Domain Intelligence Packs, and Audit Chain that turn classification into policy-enforced, risk-scored, cryptographically provable authorization.
We do not need to convince that team to adopt MIR. They already did. We do not need to convince them that classification matters. They already know. We need to convince them that governance is the logical next step -- and that Intended is the best way to do it.
The open-source project creates the demand. The commercial platform captures it. That is the flywheel.
Contributing to MIR
MIR is on GitHub at github.com/meritt-ai. We welcome contributions:
- New categories for existing domains
- New sub-domains for industry-specific use cases
- Language-specific packages (Python and Go are in progress)
- Documentation improvements and examples
- Bug reports and feature requests
The taxonomy follows semantic versioning. Non-breaking additions ship as minor releases. Breaking changes (category removals, domain restructuring) only ship in major releases with a deprecation period.
Star us on GitHub. Install the taxonomy. Build on it. The AI agent era needs a standard classification system, and MIR is better when the community builds it together.