Every AI assistant in 2023 had its own proprietary way of connecting to external tools. OpenAI called them "functions." Anthropic called them "tools." Every integration was a custom build, and developers wasted time writing the same glue code over and over. MCP (the Model Context Protocol) is the attempt to fix that: one open standard that works everywhere.
The problem MCP solves
Before MCP, building an AI integration meant writing custom code for each AI model, each tool, and each connection. Want Claude to read files? Write an Anthropic-specific tool definition. Want the same feature in Cursor? Write it again, differently. Want to add GitHub? Write it a third time.
It's the same problem that HTTP solved for the web. Before HTTP, every system used its own protocol for transferring documents. HTTP standardized it so any browser could talk to any server. MCP does the same thing for AI-to-tool communication.
MCP in one paragraph
MCP defines a standard language — called a protocol — for an AI application (the "client") to discover what a tool server (the "server") can do, and then call those capabilities. The server announces its tools, resources, and prompts. The client decides which ones to use. The whole exchange happens over a defined message format using JSON-RPC 2.0.
That's it. No magic. No AI. Just a well-defined communication standard that both sides agree to speak.
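The JSON-RPC 2.0 framing is plain enough to sketch by hand. Here is a minimal discovery exchange in Python; the tool name and description are illustrative, not copied from a real server, and the exact wire format is defined by the MCP spec:

```python
import json

# A JSON-RPC 2.0 request: the client asks the server what tools it offers.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A JSON-RPC 2.0 response: the server announces its capabilities.
# (Tool name and description here are illustrative.)
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {
        "tools": [
            {"name": "read_file", "description": "Read a file from disk"},
        ]
    },
}

# Both sides serialize messages as JSON text on the wire.
wire = json.dumps(request)
print(wire)
```

The `id` field is what lets the client match each response to the request that caused it — that's the core of JSON-RPC.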
The three things an MCP server can expose
The MCP spec defines three capability types:
- Tools — functions the AI can call, like `read_file`, `search_web`, or `execute_query`. These are the most commonly used.
- Resources — data the AI can read or subscribe to, like a live metrics feed or a configuration store. Think of these as readable URIs.
- Prompts — templated prompts the server provides, which the AI can use to invoke common patterns (like "summarize this document" with the document already filled in).
In practice, the large majority of MCP servers today expose only tools. Resources and prompts appear mostly in more advanced server implementations.
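Concretely, each entry a server returns from `tools/list` is a name, a description, and a JSON Schema describing its arguments. A sketch, with a hypothetical `read_file` tool and a toy argument check (real clients validate against the full schema):

```python
# Sketch of a single MCP tool definition as it would appear in a
# tools/list result. The inputSchema field is standard JSON Schema.
tool = {
    "name": "read_file",
    "description": "Read the contents of a file at the given path.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Absolute file path"},
        },
        "required": ["path"],
    },
}

def has_required_args(tool_def: dict, args: dict) -> bool:
    """Minimal check: does a call supply every required argument?"""
    return all(k in args for k in tool_def["inputSchema"].get("required", []))

print(has_required_args(tool, {"path": "/project/config.json"}))  # True
print(has_required_args(tool, {}))                                # False
```

The schema is what lets the AI model construct valid arguments without ever having seen the server's source code.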
How an MCP conversation actually works
Here's the simplified message flow when you ask Claude to read a file using the filesystem MCP server:
- Claude Desktop starts the MCP server process (via stdio) when it launches.
- After a brief initialization handshake, Claude sends a `tools/list` request; the server's response announces its capabilities.
- You ask Claude: "What's in my config.json file?"
- Claude decides it needs to call `read_file` with path `/project/config.json`.
- Claude sends a `tools/call` request to the server.
- The server reads the file and sends back the content.
- Claude incorporates the content into its response to you.
The whole exchange happens in milliseconds. You see it as Claude "knowing" what's in your file — but it actually just called a tool to fetch it on demand.
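The steps above can be sketched end to end. This is a toy in-process simulation of the `tools/call` round trip — not the real SDK — and the fake server returns canned content instead of touching disk; the result shape (`content` as a list of typed parts) follows the MCP spec:

```python
import json

def fake_server_handle(raw: str) -> str:
    """Stand-in for an MCP server: handle one tools/call request."""
    req = json.loads(raw)
    assert req["method"] == "tools/call"
    # A real server would read req["params"]["arguments"]["path"] from
    # disk; this sketch fakes the file content.
    content = '{"debug": true}'
    result = {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": content}]},
    }
    return json.dumps(result)

# The client sends a tools/call request...
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/project/config.json"},
    },
})

# ...and the server replies with the file content, which the model then
# folds into its answer.
reply = json.loads(fake_server_handle(request))
print(reply["result"]["content"][0]["text"])
```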
Why MCP is important beyond just Claude
The real significance of MCP is that it's vendor-neutral and open. Cursor — the AI code editor — adopted MCP in early 2025. Zed adopted it. Windsurf followed. Any tool built as an MCP server now works across all of these clients without any changes.
That's the "write once, use everywhere" promise — and it's actually being delivered. The filesystem MCP server you install for Claude Desktop also works in Cursor, with the same config format.
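For reference, that shared config format looks like this — a sketch using the official `@modelcontextprotocol/server-filesystem` package, with an example directory path you would replace with your own:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects"
      ]
    }
  }
}
```

The client reads the `mcpServers` map, spawns each `command` as a child process, and speaks MCP to it over stdio.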
Turns out, standardization matters. The AI tooling ecosystem is growing faster because of MCP, not despite it.
Is MCP related to the Model Context Window?
No — the names are confusingly similar but unrelated. "Context window" refers to the amount of text an AI can process at once (Claude 3.5 Sonnet's is 200,000 tokens). "Model Context Protocol" is a communication protocol for tool integration. They're completely different things.
What version of MCP are we on in 2026?
The initial MCP specification was published in November 2024; spec revisions are identified by date (for example, 2024-11-05) rather than version numbers. By mid-2026, the spec has had several revisions adding features like streamable HTTP transport, OAuth support in the protocol layer, and expanded resource types. Check spec.modelcontextprotocol.io for the current version and changelog.
Frequently Asked Questions
Who created MCP?
Anthropic created and open-sourced MCP, announcing it in November 2024. The spec and reference implementations are hosted at github.com/modelcontextprotocol under the Apache 2.0 license.
Does MCP only work with Claude?
No. MCP is an open protocol that any AI system can implement. As of 2026, Claude Desktop, Cursor, Zed, and Windsurf all support MCP. Other vendors can adopt it freely — the spec is public and Apache-licensed.
What transports does MCP support?
MCP supports two transports: stdio (standard input/output, for local servers that Claude Desktop spawns as child processes) and HTTP+SSE (Server-Sent Events, for remote servers accessed over the network). Stdio is used for most local MCP servers today.
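The stdio transport is simple to picture: each JSON-RPC message travels as one line of JSON on the child process's stdin or stdout, with no embedded newlines. A minimal framing sketch, simulating the pipe with an in-memory buffer (the message bodies are illustrative):

```python
import json
from io import StringIO

def write_message(stream, msg: dict) -> None:
    """Frame one JSON-RPC message for stdio transport: one line of JSON."""
    stream.write(json.dumps(msg) + "\n")

def read_messages(stream):
    """Parse newline-delimited JSON-RPC messages from a stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# Simulate the pipe between client and server with an in-memory buffer.
buf = StringIO()
write_message(buf, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
write_message(buf, {"jsonrpc": "2.0", "id": 1, "result": {"tools": []}})
buf.seek(0)

messages = list(read_messages(buf))
print(len(messages))  # 2
```

For real servers the same framing runs over `subprocess` pipes; the HTTP+SSE transport carries the identical JSON-RPC messages, just over the network instead.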
Can I build my own MCP server?
Yes, and it's surprisingly approachable. Anthropic provides official SDKs for TypeScript and Python at github.com/modelcontextprotocol/typescript-sdk and github.com/modelcontextprotocol/python-sdk. See our custom MCP server build guide for a step-by-step walkthrough.