
MCP. The Protocol. The tool-use standard that went from internal experiment to critical infrastructure in 12 months.

Anthropic open-sourced it in November 2024. By December 2025 it was donated to the Linux Foundation — with OpenAI, Google, Microsoft, and Cloudflare signing on. 97 million monthly SDK downloads. 10,000+ servers. This is not a niche protocol anymore. This is the USB-C port for AI agents.

- **97M** monthly SDK downloads
- **10K+** active servers
- **12mo** from experiment to standard
- **M+N** replaces M×N integrations
// MCP Architecture

§01 The M×N Problem

Before MCP, every AI integration was a bespoke engineering project. You wanted your agent to read from Notion? Build a connector. Query a database? Build a connector. Search Slack? Build a connector. If you had M applications and N data sources, you had an M×N surface area of custom connectors — each one a different shape, a different auth pattern, a different error contract.

The math was unsustainable. MCP collapses M×N into M+N. Build one MCP server, and any MCP-compatible AI can use it. Build one MCP client, and it connects to thousands of existing servers. The integration problem becomes additive instead of multiplicative.
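The arithmetic is worth making concrete. A toy sketch (the counts are illustrative, not from any survey):

```python
# Illustrative only: how connector counts scale with and without a shared protocol.
def connectors_without_mcp(m_apps, n_sources):
    # Every application needs a bespoke connector to every data source.
    return m_apps * n_sources

def connectors_with_mcp(m_apps, n_sources):
    # Each application implements one MCP client; each source exposes one MCP server.
    return m_apps + n_sources

# 20 AI applications against 50 data sources:
print(connectors_without_mcp(20, 50))  # 1000 bespoke integrations
print(connectors_with_mcp(20, 50))     # 70 protocol implementations
```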

// The Integration Problem — Before and After MCP

This is not a subtle architectural improvement. It's the difference between every team building their own USB standards for every device versus the world agreeing on one port. The Language Server Protocol did this for code editors and language support. REST did it for web service interactions. MCP is doing it for AI-to-tool communication.

§02 How It Actually Happened

The origin story matters. MCP didn't emerge from a standards committee. It came from a developer's frustration — constantly copying context between Claude Desktop and an IDE. That origin shapes what it is: a practical spec built for builders, not a theoretical framework built for committees.

Nov 2024
Anthropic open-sources MCP

Released as an open standard with Python and TypeScript SDKs. Written off by most as "another standard that would die in committee." The community starts building servers immediately.

Mar 2025
OpenAI adopts MCP across Agents SDK, Responses API, and ChatGPT Desktop

This is the signal. When the largest AI company in the world adopts a competitor's protocol rather than building their own, the protocol has won.

Apr 2025
Google DeepMind confirms MCP support in Gemini

Demis Hassabis confirms integration. The three largest frontier AI labs are now on one protocol.

Nov 2025
Major spec update: async operations, statelessness, server identity, community registry

The protocol grows up. Asynchronous operations enable long-running tasks; an official registry makes servers discoverable. These are production-grade features, not alpha improvements.

Dec 2025
Anthropic donates MCP to Linux Foundation Agentic AI Foundation (AAIF)

Co-founded with OpenAI and Block. AWS, Google, Microsoft, Cloudflare, and Bloomberg as supporting members. This is the move that locks in long-term governance and prevents any single company from owning the standard.

§03 What It Actually Is

MCP is a client-server protocol built on JSON-RPC 2.0. Three architectural components: the Host (your AI application — Claude, ChatGPT, your custom agent), the Client (the protocol layer that manages connections), and the Server (a lightweight program exposing specific capabilities). Each server exposes one of three primitive types.
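Every exchange between client and server is a JSON-RPC 2.0 message. A minimal sketch of that framing, using method names from the MCP spec (`tools/list`, `tools/call`) but a hypothetical tool and arguments:

```python
import json

# Minimal sketch of MCP's JSON-RPC 2.0 framing. The method names
# ("tools/list", "tools/call") come from the MCP spec; the tool name
# and its arguments here are hypothetical.
def make_request(req_id, method, params=None):
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# First ask the server what it exposes, then invoke one tool.
list_req = make_request(1, "tools/list")
call_req = make_request(2, "tools/call", {
    "name": "run_sql_query",             # hypothetical tool
    "arguments": {"query": "SELECT 1"},
})
print(list_req)
print(call_req)
```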

// MCP Three Primitives — What Servers Expose
| Primitive | What It Is | Description | Example |
| --- | --- | --- | --- |
| Tools | Actions AI can execute | Arbitrary code execution: functions the model can call to do things in the world. Must be treated with care; these have real-world effects. | `create_github_issue`, `send_slack_message`, `run_sql_query` |
| Resources | Data AI can read | Structured data sources providing context: files, database rows, logs, documentation. Read-only, no side effects. | File contents, calendar events, CRM records, system logs |
| Prompts | Templates shaping behavior | Pre-defined instructions or templates that configure how the AI approaches a task. Baked into the server, not the client. | Bug report template, code review checklist, deal analysis framework |
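On the wire, a tool is just a named, described JSON Schema. A sketch of the shape, with a hypothetical schema for the `create_github_issue` example above:

```python
import json

# Sketch of a Tools primitive on the wire: a name, a description, and a
# JSON Schema ("inputSchema") describing its arguments. The schema shown
# for create_github_issue is hypothetical.
tool_definition = {
    "name": "create_github_issue",
    "description": "Open an issue in a GitHub repository.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string"},
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["repo", "title"],
    },
}
print(json.dumps(tool_definition, indent=2))
```

The model decides whether and how to call the tool entirely from this definition, which is why the token-cost problem in §04 exists: every one of these schemas occupies context.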

Transport is flexible: local stdio for servers running on the same machine, or streamable HTTP (with Server-Sent Events for streamed responses) for remote and cloud deployments. Cloudflare has first-class MCP server support — which means any Cloudflare Worker can be an MCP server. For builders already on Cloudflare's infrastructure, this isn't a migration. It's a configuration.
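The stdio transport's framing is as simple as it sounds: one JSON-RPC message per line on the server's stdin/stdout. A minimal sketch (`write_message` is our helper name, not an SDK function):

```python
import json
import sys

# Sketch of the stdio transport's framing: one JSON-RPC message per line,
# written to the server's stdin and read back from its stdout.
# write_message is a hypothetical helper, not an SDK function.
def write_message(stream, msg):
    stream.write(json.dumps(msg) + "\n")  # newline-delimited JSON
    stream.flush()

write_message(sys.stdout, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
```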

The production implication: Every Cloudflare Worker in your stack is a potential MCP server. The workers you've already built for valuation, leads, sentiment analysis, and Slack intelligence can be exposed as MCP tools to any MCP-compatible AI. Your existing architecture is already MCP-ready. It just needs to be declared.

§04 The Problem That Emerges at Scale

MCP solves the integration problem brilliantly. It creates a new problem as adoption scales: when you have hundreds of tools loaded into context, the tool definitions themselves become a significant token cost. Every tool definition loaded upfront is tokens the model has to process. At 100+ tools across a dozen MCP servers, you're burning context before the task even starts.

The emerging pattern: instead of loading all tool definitions at session start, build agents that discover tools on demand — a search_tools function that finds relevant tools by keyword, then loads only those definitions. The model reads tool definitions from a virtual filesystem the way it would read code, rather than having them injected into context wholesale.
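A minimal sketch of on-demand discovery, assuming a local registry keyed by tool name (the registry layout and the `search_tools` signature are illustrative, not from any SDK):

```python
# Hypothetical registry: full tool definitions stay out of the model's
# context until a keyword search pulls them in.
TOOL_REGISTRY = {
    "run_sql_query": {"description": "Run a read-only SQL query against the warehouse."},
    "send_slack_message": {"description": "Post a message to a Slack channel."},
    "create_github_issue": {"description": "Open an issue in a GitHub repository."},
}

def search_tools(keyword):
    """Return names of tools whose name or description matches the keyword."""
    kw = keyword.lower()
    return [
        name
        for name, spec in TOOL_REGISTRY.items()
        if kw in name or kw in spec["description"].lower()
    ]

def load_definitions(names):
    # Only the matched definitions get injected into context.
    return [TOOL_REGISTRY[name] for name in names]

context_tools = load_definitions(search_tools("sql"))
print(context_tools)
```

With three tools the savings are trivial; with a hundred, the model pays only for the definitions the task actually needs.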

The security surface is real. April 2025 research identified key MCP attack vectors: prompt injection via tool output (a malicious server can instruct the model through what looks like tool results), tool permissions that allow data exfiltration through combined tool calls, and lookalike tools that silently replace trusted ones. MCP cannot enforce security at the protocol level — that's the host's responsibility. Build consent flows. Audit tool invocations. Treat tool descriptions from unverified servers as untrusted.
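A host-side guardrail can start as small as an allowlist plus an audit trail. A sketch with hypothetical tool names; a real host would add user consent prompts on top:

```python
import datetime

# Hypothetical host-side guardrail: MCP leaves enforcement to the host,
# so the host allowlists tools and records every invocation for audit.
ALLOWED_TOOLS = {"run_sql_query", "send_slack_message"}
audit_log = []

def invoke_tool(name, arguments):
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not on the allowlist")
    audit_log.append({
        "tool": name,
        "arguments": arguments,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # ...dispatch to the real MCP client here (omitted in this sketch)...
    return {"status": "dispatched"}

print(invoke_tool("run_sql_query", {"query": "SELECT 1"}))
```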
MCP is not an agent framework.
It's the plumbing.
The thing that lets agents do things.

Before MCP, every team wrote their own connector glue. After MCP, you declare what your agent can access and any compatible model can use it. The bottleneck moves from integration to intelligence — which is exactly where it should be.

§05 What This Means If You're Building

The practical read for anyone running an AI stack in 2026: MCP is not optional infrastructure. It's the standard interface. When your tools speak MCP, any model upgrade — Claude, GPT, Gemini, whatever comes next — can use them without rewriting your integrations. Your tool layer becomes model-agnostic. That's a genuine architectural advantage in a market where the frontier model changes every few months.

The network effects compound: more AI clients supporting MCP makes it more valuable to build MCP servers. More MCP servers makes it more valuable for AI clients to support MCP. This is the same dynamic that locked in USB-C, locked in REST, locked in TCP/IP. The standard that everyone builds around becomes the standard that nobody leaves.

One year from a frustrated developer's clipboard to the Linux Foundation, with OpenAI and Google as co-signers. The committee didn't kill it. The builders adopted it first, and the committee followed.

Justin Erickson — PropTechUSA.ai
87 Cloudflare Workers · Named AI executives · Building on the protocol · March 2026