§01 The M×N Problem
Before MCP, every AI integration was a bespoke engineering project. You wanted your agent to read from Notion? Build a connector. Query a database? Build a connector. Search Slack? Build a connector. If you had M applications and N data sources, you had an M×N surface area of custom connectors — each one a different shape, a different auth pattern, a different error contract.
The math was unsustainable. MCP collapses M×N into M+N. Build one MCP server, and any MCP-compatible AI can use it. Build one MCP client, and it connects to thousands of existing servers. The integration problem becomes additive instead of multiplicative.
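The arithmetic is easy to check. A minimal sketch with hypothetical numbers (M = 10 applications, N = 50 data sources):

```python
def bespoke_connectors(m_apps: int, n_sources: int) -> int:
    # Before MCP: every application needs its own custom
    # connector to every data source.
    return m_apps * n_sources

def mcp_connectors(m_apps: int, n_sources: int) -> int:
    # With MCP: each application implements one client,
    # each data source exposes one server.
    return m_apps + n_sources

m, n = 10, 50  # a hypothetical mid-size stack
print(bespoke_connectors(m, n))  # 500 custom integrations to build and maintain
print(mcp_connectors(m, n))      # 60 protocol implementations, shared by everyone
```

The gap widens with every application or source you add: the first number grows multiplicatively, the second linearly.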
This is not a subtle architectural improvement. It's the difference between every team building their own USB standards for every device versus the world agreeing on one port. The Language Server Protocol did this for code editors and language support. REST did it for web service interactions. MCP is doing it for AI-to-tool communication.
§02 How It Actually Happened
The origin story matters. MCP didn't emerge from a standards committee. It came from a developer's frustration — constantly copying context between Claude Desktop and an IDE. That origin shapes what it is: a practical spec built for builders, not a theoretical framework built for committees.
Anthropic releases MCP as an open standard with Python and TypeScript SDKs. Most write it off as "another standard that would die in committee." The community starts building servers immediately.
OpenAI adopts the protocol. This is the signal: when the largest AI company in the world adopts a competitor's protocol rather than building its own, the protocol has won.
Google follows, with Demis Hassabis confirming Gemini integration. The three largest frontier AI labs are now on one protocol.
The protocol grows up. Asynchronous operations enable long-running tasks. Official server discovery registry. These are production-grade features, not alpha improvements.
Anthropic hands the standard to neutral governance, with OpenAI and Block as co-founders and AWS, Google, Microsoft, Cloudflare, and Bloomberg as supporting members. This is the move that locks in long-term governance and prevents any single company from owning the standard.
§03 What It Actually Is
MCP is a client-server protocol built on JSON-RPC 2.0. Three architectural components: the Host (your AI application — Claude, ChatGPT, your custom agent), the Client (the protocol layer that manages connections), and the Server (a lightweight program exposing specific capabilities). Each server exposes its capabilities through three primitive types.
| Primitive | What It Is | Description | Example |
|---|---|---|---|
| Tools | Actions AI can execute | Arbitrary code execution — functions the model can call to do things in the world. Must be treated with care; these have real-world effects. | `create_github_issue`, `send_slack_message`, `run_sql_query` |
| Resources | Data AI can read | Structured data sources providing context. Files, database rows, logs, documentation. Read-only. No side effects. | File contents, calendar events, CRM records, system logs |
| Prompts | Templates shaping behavior | Pre-defined instructions or templates that configure how the AI approaches a task. Baked into the server, not the client. | Bug report template, code review checklist, deal analysis framework |
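On the wire, all three primitives ride the same JSON-RPC 2.0 envelope. A minimal sketch of the request shapes, using the spec's method names (`tools/call`, `resources/read`, `prompts/get`); the tool names, URI, and argument values are hypothetical:

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> dict:
    # Every MCP message is a JSON-RPC 2.0 object: version tag,
    # an id for matching responses, a method, and structured params.
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Tool: an action with side effects; arguments are validated
# against the tool's declared input schema.
call = jsonrpc_request(1, "tools/call", {
    "name": "create_github_issue",
    "arguments": {"title": "Crash on startup", "repo": "acme/app"},
})

# Resource: read-only data addressed by URI. No side effects.
read = jsonrpc_request(2, "resources/read", {
    "uri": "file:///var/log/app.log",
})

# Prompt: a server-defined template, optionally parameterized.
get = jsonrpc_request(3, "prompts/get", {
    "name": "bug_report", "arguments": {"severity": "high"},
})

wire = "\n".join(json.dumps(m) for m in (call, read, get))
```

The uniform envelope is the point: a client that can speak JSON-RPC to one server can speak it to all of them.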
Transport is flexible: local stdio (for servers running on the same machine) or HTTP with Server-Sent Events for remote/cloud deployments. Cloudflare has first-class MCP server support — which means any Cloudflare Worker can be an MCP server. For builders already on Cloudflare's infrastructure, this isn't a migration. It's a configuration.
§04 The Problem That Emerges at Scale
MCP solves the integration problem brilliantly. But it creates a new problem as adoption scales: when you have hundreds of tools loaded into context, the tool definitions themselves become a significant token cost. Every tool definition loaded upfront is tokens the model has to process. At 100+ tools across a dozen MCP servers, you're burning context before the task even starts.
The emerging pattern: instead of loading all tool definitions at session start, build agents that discover tools on demand — a search_tools function that finds relevant tools by keyword, then loads only those definitions. The model reads tool definitions from a virtual filesystem the way it would read code, rather than having them injected into context wholesale.
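A sketch of that pattern with an in-memory registry. The `search_tools` name, the registry contents, and the keyword scoring are illustrative, not from any SDK; in a real agent the definitions would come from the connected servers' `tools/list` responses:

```python
# Hypothetical registry: tool name -> full definition. Only names
# and one-line descriptions are cheap; the full definitions are
# what you want to keep out of context until needed.
REGISTRY = {
    "create_github_issue": {
        "description": "Open an issue in a GitHub repository",
        "inputSchema": {"type": "object",
                        "properties": {"title": {"type": "string"}}},
    },
    "run_sql_query": {
        "description": "Execute a read-only SQL query against the warehouse",
        "inputSchema": {"type": "object",
                        "properties": {"sql": {"type": "string"}}},
    },
    "send_slack_message": {
        "description": "Post a message to a Slack channel",
        "inputSchema": {"type": "object",
                        "properties": {"channel": {"type": "string"}}},
    },
}

def search_tools(query: str, limit: int = 3) -> list[str]:
    # Cheap keyword match over names and descriptions. Only the
    # matching names go back to the model, not full definitions.
    terms = query.lower().split()
    hits = [name for name, spec in REGISTRY.items()
            if any(t in name or t in spec["description"].lower()
                   for t in terms)]
    return hits[:limit]

def load_definitions(names: list[str]) -> dict:
    # Pay the token cost only for the tools the task actually needs.
    return {n: REGISTRY[n] for n in names if n in REGISTRY}

defs = load_definitions(search_tools("github issue"))
```

With three tools the savings are trivial; with three hundred, deferring definition loading is the difference between a usable context window and an exhausted one.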
It's the plumbing.
The thing that lets agents do things.
Before MCP, every team wrote their own connector glue. After MCP, you declare what your agent can access and any compatible model can use it. The bottleneck moves from integration to intelligence — which is exactly where it should be.
§05 What This Means If You're Building
The practical read for anyone running an AI stack in 2026: MCP is not optional infrastructure. It's the standard interface. When your tools speak MCP, any model upgrade — Claude, GPT, Gemini, whatever comes next — can use them without rewriting your integrations. Your tool layer becomes model-agnostic. That's a genuine architectural advantage in a market where the frontier model changes every few months.
The network effects compound: more AI clients supporting MCP makes it more valuable to build MCP servers. More MCP servers makes it more valuable for AI clients to support MCP. This is the same dynamic that locked in USB-C, locked in REST, locked in TCP/IP. The standard that everyone builds around becomes the standard that nobody leaves.
One year from a frustrated developer's clipboard to the Linux Foundation, with OpenAI and Google as co-signers. The committee didn't kill it. The builders adopted it first, and the committee followed.