Before November 2024, building an AI assistant that could actually use your tools meant writing a different integration for every AI model you wanted to support. GitHub wrote a connector for Claude. Then rewrote it for GPT-4. Then again for Gemini. The AI teams did the same from their side. Everyone was duplicating work, and every integration was a maintenance liability. This is the N×M problem — and MCP was built specifically to kill it.

What the N×M Problem Actually Means

Let's be concrete. Imagine the AI ecosystem has:

  • N = 10 AI models: Claude, GPT-4, Gemini, Mistral, Llama, Command R, Grok, Phi, Qwen, Falcon
  • M = 50 tools/data sources: GitHub, Slack, Notion, Jira, Postgres, Salesforce, filesystem, web search, and 42 more

Without a standard protocol, connecting all AI models to all tools means building N × M = 10 × 50 = 500 separate integrations. Each one is custom code. Each one breaks when either the AI model or the tool updates its API. Each one must be maintained indefinitely by someone.

Scale this to a realistic ecosystem of 50 models and 500 tools and you're at 25,000 integrations. The cost grows multiplicatively; no ecosystem can sustain that.

What the Pre-MCP World Looked Like

This wasn't a hypothetical. Before MCP, this was reality. Consider a simple example: giving an AI model access to the local filesystem.

  • Anthropic built their own file access implementation for Claude.
  • OpenAI built their own, differently structured, for GPT-4.
  • Google built their own for Gemini.
  • Cursor, the AI code editor, built its own.

Now a developer building a filesystem tool had to choose: support Claude only? GPT-4 only? Maintain four separate implementations? The tool vendor was stuck. The AI vendor was stuck. And users suffered because their AI of choice couldn't connect to most of the tools they needed.

Plugin systems tried to help. OpenAI's ChatGPT Plugins (2023) created a de facto standard — but it was controlled by one company, worked only with ChatGPT, and shut down in April 2024. The problem was temporarily papered over, not solved.

The N+M Solution: One Protocol, Everything Works

MCP's solution is elegant in the way good engineering always is: introduce a standard protocol that separates the concerns.

  • N MCP clients: one per AI model or AI-powered application. Claude Desktop has one. Cursor has one. Any future AI tool can add one.
  • M MCP servers: one per tool or data source. GitHub's MCP server works with every client. The filesystem MCP server works with every client.

Total implementations: N + M = 10 + 50 = 60. That's an 88% reduction from 500. At ecosystem scale (50 models, 500 tools), you go from 25,000 to 550. The math is unambiguously better for everyone.
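The arithmetic above is simple enough to sketch in a few lines (a trivial Python illustration of the counting argument, not part of any MCP SDK):

```python
def integrations_without_mcp(n_models: int, m_tools: int) -> int:
    """Every model needs a custom integration with every tool."""
    return n_models * m_tools

def implementations_with_mcp(n_models: int, m_tools: int) -> int:
    """One MCP client per model, one MCP server per tool."""
    return n_models + m_tools

# The example from this section: 10 models, 50 tools
print(integrations_without_mcp(10, 50))   # 500 custom integrations
print(implementations_with_mcp(10, 50))   # 60 implementations

# Ecosystem scale: 50 models, 500 tools
print(integrations_without_mcp(50, 500))  # 25000
print(implementations_with_mcp(50, 500))  # 550
```

The gap widens as either side of the ecosystem grows, which is why the savings compound rather than stay constant.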

The Architecture: Before and After

Here's the integration topology, before and after MCP:

Before MCP (N×M):

Claude ──── Filesystem integration A
Claude ──── GitHub integration B
Claude ──── Slack integration C
GPT-4  ──── Filesystem integration D
GPT-4  ──── GitHub integration E
GPT-4  ──── Slack integration F
Gemini ──── Filesystem integration G
Gemini ──── GitHub integration H
Gemini ──── Slack integration I
...
(N×M total integrations — each custom, each breaking independently)


After MCP (N+M):

Claude ─┐
GPT-4  ─┤── MCP Protocol ──┬── Filesystem MCP Server
Gemini ─┘                  ├── GitHub MCP Server
                           └── Slack MCP Server

(N clients + M servers = N+M total implementations)
(Every client works with every server automatically)

(The N×M integration explosion vs. the N+M hub-and-spoke model that MCP enables)

In the N+M world, when a new AI model launches, it adds one MCP client implementation and immediately gains access to all existing MCP servers. When a new tool ships an MCP server, it immediately works with all existing MCP clients. The network effect compounds positively instead of creating maintenance debt.

The USB Analogy

If you find the abstract math unconvincing, here's a physical analogy that makes it tangible: USB solved exactly this problem for hardware peripherals in the 1990s.

Before USB, every peripheral needed its own port. Mice and keyboards each used their own PS/2 port (same connector shape, not interchangeable). Printers used parallel ports. Modems used serial ports. Scanners used proprietary connections. Connecting a new peripheral to a new computer was genuinely uncertain: you needed to match not just the connector but the electrical protocol.

USB introduced a single standard connector and protocol. Now a mouse, keyboard, printer, camera, and hard drive all use the same port. Peripheral makers write one driver. Computer makers include one port type. The number of required "integrations" collapsed from N×M to N+M — exactly as MCP does for AI tools.

MCP is often described as "USB-C for AI" in the developer community, and the analogy is structurally precise, not just a marketing metaphor.

Real-World Evidence: 10,000+ Servers

The best evidence that the N+M model works is what happened after MCP launched. Within roughly a year, over 10,000 MCP servers had been created by the community and by major software companies.

This would be impossible under an N×M model. A developer building a Notion integration wouldn't write it five times for five AI models. But writing it once as an MCP server — knowing it works with Claude, Cursor, Zed, and every future MCP client — is worth the effort. The single-standard guarantee is what unlocks the ecosystem.

You can see this in practice: once you write an MCP server and install it in Claude Desktop, the exact same server can be configured in Cursor or any other MCP-compatible application without modification. Learn more about what MCP servers actually are if you want to understand the full architecture.
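As a sketch of what that portability looks like in practice, here is a server entry in the `mcpServers` format that Claude Desktop's `claude_desktop_config.json` and Cursor's MCP settings both read (the filesystem server package is the official one; the local path is a placeholder, and you should check each client's docs for the exact config file location):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```

The same block, copied verbatim between clients, yields the same tools in each one; that is the "write once, run in every client" guarantee in concrete form.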

The Governance Angle: Why Linux Foundation Matters

Here's a subtle point that's easy to miss: the N+M math only holds if the standard stays genuinely open and neutral.

If Anthropic controlled the MCP spec forever, OpenAI and Google would be reluctant to adopt it — giving a competitor control over the protocol you depend on is strategically dangerous. They'd build competing standards, and the ecosystem would fragment back toward N×M.

Anthropic solved this by donating MCP to the Linux Foundation in November 2025. The Linux Foundation is a neutral non-profit that governs critical open-source infrastructure (Linux kernel, Kubernetes, and hundreds of other projects). No single company controls the spec. Read the full governance story here.

OpenAI announced MCP support in March 2025, and Google followed. The N+M math is better for everyone, but only if everyone can trust the standard, and neutral governance is what makes that trust durable.

Why This Should Matter to You as a Builder

If you're building AI-powered tools today, the N×M problem has a direct practical implication for your work.

Under the old model, you had to make a bet: which AI model will win? Build your integration for Claude and you might miss GPT-4 users. Build for both and you're maintaining two codebases. Build for all of them and you're maintaining N codebases with no leverage.

Under MCP, you build one server. You choose the protocol, not the AI model. Understanding the client/server/host distinction is the first step to building correctly within this model.

The investment you make in an MCP server today compounds as more MCP clients appear. You're not betting on one AI model — you're building for the protocol that all AI models are converging on.

Frequently Asked Questions

What is the N×M problem?

The N×M problem refers to the combinatorial explosion of custom integrations required when N AI models each need to connect to M tools or data sources individually. With 10 AI models and 50 tools, you end up needing 500 separate integrations — each one custom-built, each one breaking independently when either side updates.

How does MCP solve the N×M problem?

MCP introduces a single standard protocol so each AI model only needs one MCP client implementation, and each tool only needs one MCP server implementation. Instead of N×M integrations, you need N+M implementations. With 10 models and 50 tools, that's 60 total instead of 500 — and every combination works automatically.

Did the N×M problem really exist before MCP?

Yes. Before MCP (released November 2024), every AI provider — Anthropic, OpenAI, Google — built its own proprietary integration format for connecting to external tools. A tool vendor like GitHub had to build separate integrations for Claude, GPT-4, Gemini, and every other model they wanted to support. Each integration was maintained separately and broke independently.

Why does governance matter for the N+M solution?

The N+M solution only works if the shared standard stays truly neutral. If one company controls the MCP spec, competitors may refuse to adopt it or fork it — reverting to an N×M situation. Anthropic donated MCP governance to the Linux Foundation in November 2025 precisely to prevent this: a neutral home means every AI company can adopt MCP without giving a competitor control over the protocol.