Both OpenAI function calling and MCP let an AI model use external tools. So why do they exist as separate things, and which should you actually use? The honest answer is that they solve overlapping problems at different levels of abstraction — and since OpenAI adopted MCP in 2025, the boundary between them has blurred further. Here's the complete breakdown.
What OpenAI Function Calling Actually Is
OpenAI function calling (now called "tools" in the API) is a feature of the OpenAI Chat Completions API. You include a tools array in your API request that describes available functions using JSON Schema. The model reads that schema, decides which function to call and with what arguments, and returns a structured JSON object — it does not call the function itself.
Your application code receives that JSON, executes the actual function, and feeds the result back into the conversation as a tool message. The model then generates its final response incorporating that result. The whole exchange runs over ordinary HTTP request/response — typically two round trips: the first call returns the tool request, a second returns the final answer.
```json
{
  "model": "gpt-4o",
  "messages": [...],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string"}
          },
          "required": ["location"]
        }
      }
    }
  ]
}
```
The key architecture point: the function implementation lives in your application. You define it, you execute it, you return the result. The API just coordinates the JSON handshake.
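That application-side handshake can be sketched in a few lines of Python. This is a hedged sketch, not the official SDK: it operates on plain dicts shaped like the Chat Completions wire format (where tool arguments arrive as a JSON-encoded string), and `get_weather` is a stand-in implementation.

```python
import json

# Stand-in local implementation of the function described in the schema.
def get_weather(location: str) -> str:
    return f"Sunny, 22°C in {location}"

# Registry mapping tool names to local implementations.
TOOL_REGISTRY = {"get_weather": get_weather}

def handle_tool_calls(assistant_message: dict) -> list[dict]:
    """Execute each tool call the model requested and build the
    'tool' role messages to append back into the conversation."""
    tool_messages = []
    for call in assistant_message.get("tool_calls", []):
        name = call["function"]["name"]
        # Arguments arrive as a JSON-encoded string on the wire.
        args = json.loads(call["function"]["arguments"])
        result = TOOL_REGISTRY[name](**args)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": result,
        })
    return tool_messages

# An assistant message shaped like the Chat Completions response format.
message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": '{"location": "Paris"}'},
    }],
}
print(handle_tool_calls(message))
```

Note that the model never runs `get_weather` itself — your code does, and the `tool` message you append is how the result gets back into the conversation.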
What MCP Actually Is
MCP (Model Context Protocol) is an open protocol — not an API feature, but a full communication standard. An MCP server is a standalone process that exposes tools, resources, and prompts over a defined wire format. An MCP client (Claude Desktop, Cursor, or a custom agent) connects to that server, discovers what it offers, and routes tool calls to it during the AI session.
Critically, MCP is model-agnostic and client-agnostic. The server doesn't know or care whether Claude, GPT-4, or some future model is on the other end. Any MCP-compatible client can connect and use the same server. You can learn more about the fundamentals in our guide to what an MCP server is.
```
# An MCP server exposes tools via the protocol
# Any MCP-compatible client can connect
Client (Claude Desktop) ──── MCP Protocol ────▶ Your MCP Server
                                                ├── tool: get_weather
                                                ├── tool: create_ticket
                                                └── resource: config_file
```
The Five Core Differences
| Dimension | OpenAI Function Calling | MCP |
|---|---|---|
| Scope | Single API feature | Full ecosystem protocol |
| Portability | OpenAI API only (or clones) | Any MCP client: Claude, Cursor, GPT-4 via MCP |
| State | Stateless — each API call is independent | Stateful — persistent connection across tool calls |
| Server independence | You execute functions inside your app | Standalone server process handles its own execution |
| Who builds what | You define functions in your app; model returns structured calls | You write a server once; any MCP client can use it unchanged |
1. Scope: Feature vs. Protocol
Function calling is a feature of the OpenAI API — one parameter in a request. MCP is an entire communication protocol with defined message types, transport options, lifecycle events, and schema conventions. This is not a criticism of either; they're designed for different things. Function calling is intentionally simple and self-contained. MCP is intentionally comprehensive and reusable.
2. Portability: Locked vs. Universal
A function definition you write for the OpenAI API works with OpenAI models (and models that clone the OpenAI API format). If you want to switch to Claude, you rewrite your tool definitions in Anthropic's API format. With MCP, you write one server. Any MCP-compatible client uses it — Claude Desktop, Cursor, VS Code extensions with MCP support, and now GPT-4 via OpenAI's MCP client. This is the core of MCP's value proposition; our article on the N×M problem MCP solves covers it in detail.
3. State: Ephemeral vs. Persistent
Each OpenAI API call is independent. If a tool call on turn 3 modifies some state, the API doesn't inherently know about it on turn 4 — you have to manage that in your application logic. MCP maintains a connection for the duration of a session. The server can hold state internally: authenticated sessions, open file handles, accumulated context. This matters for tools that involve multiple sequential steps (like a multi-step database transaction or an OAuth flow).
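The difference is easiest to see with a long-lived server object. The sketch below is illustrative only — it uses no MCP SDK — but it shows what a server process that survives across tool calls can do that a stateless API call cannot: each method corresponds to one tool call, and the accumulated state lives in the server between them.

```python
# Illustrative only — not the MCP SDK. A server process that lives for the
# whole session can hold state between tool calls; with a stateless API,
# your application would have to re-create or re-send this state each turn.
class SessionServer:
    def __init__(self):
        self.pending: list[str] = []   # state held across tool calls

    def tool_begin_transaction(self) -> str:
        self.pending.clear()
        return "transaction started"

    def tool_add_row(self, row: str) -> str:
        self.pending.append(row)
        return f"{len(self.pending)} row(s) pending"

    def tool_commit(self) -> str:
        committed = len(self.pending)
        self.pending.clear()
        return f"committed {committed} row(s)"

server = SessionServer()
server.tool_begin_transaction()   # tool call on turn 1
server.tool_add_row("alice")      # turn 2
server.tool_add_row("bob")        # turn 3
print(server.tool_commit())       # turn 4: the server remembered both rows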
4. Transport: Baked in vs. Flexible
Function calling lives inside the HTTP API. There's no other transport option. MCP supports stdio transport (the server runs as a subprocess) and HTTP with SSE for remote servers (newer protocol revisions also define a streamable HTTP transport). This means MCP servers can run locally on a developer's machine, inside a Docker container, or as a remote service — without changing how clients connect to them.
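For the stdio transport, the client simply launches the server as a subprocess. Claude Desktop, for example, registers stdio servers in its config file roughly like this (the server name and path below are placeholders):

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_server.py"]
    }
  }
}
```

Switching that same server to a remote HTTP deployment changes only this client-side configuration, not the server's tool logic.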
5. Who Executes What
With function calling, the model returns JSON and your app runs the code. The model and the execution are tightly coupled in your application. With MCP, the server is a separate process with its own runtime. The model (via the client) sends a tool call, the server executes it and returns the result. The model never directly touches the execution environment — the server is the execution environment.
When to Use Function Calling
OpenAI function calling is the right choice when:
- You're building an application directly against the OpenAI API and don't need cross-client portability.
- You want the simplest possible implementation — no extra server, no extra process, just an array in your API request.
- Your tool logic is tightly coupled to your application and doesn't make sense as a standalone service.
- You're prototyping and want to iterate quickly without infrastructure overhead.
When to Use MCP
MCP is the right choice when:
- You want to build a tool once and use it across Claude Desktop, Cursor, and other MCP clients.
- You want other people (teammates, the open-source community) to install and use your tool without modifying their client applications.
- You need persistent state across tool calls within a session.
- You're using Claude Desktop or Cursor as your primary AI interface and want to extend it with custom capabilities.
- You're building for enterprise environments where a standalone server process integrates better with existing infrastructure.
The Convergence: OpenAI Adopted MCP in 2025
In 2025, OpenAI announced native MCP support in their products — the ChatGPT desktop app and the Agents SDK. This was a significant industry signal: OpenAI, whose function calling spec defined the category, adopted the cross-vendor MCP protocol for tool interoperability.
What this means practically: an MCP server you build today can work with Claude Desktop, Cursor, and GPT-4 via OpenAI's MCP client — without any modification to your server code. The portability argument for MCP got substantially stronger.
What it doesn't mean: function calling is going away. It remains a simple, effective API primitive for applications built directly on the OpenAI API. The two layers coexist — function calling at the API primitive level, MCP at the ecosystem interoperability level. You can learn about the governance implications of this convergence in our article on MCP's Linux Foundation governance.
Is One Replacing the Other?
No. Function calling is a low-level API primitive — the mechanism by which a model signals "I want to call this function with these arguments." MCP is an ecosystem layer built on top of that concept. MCP clients internally use something like function calling when they relay tool requests to the underlying model API. They're at different levels of the stack.
The analogy: HTTP is a low-level protocol. REST APIs are built on HTTP. REST didn't replace HTTP — it used HTTP. MCP didn't replace function calling — it standardized the ecosystem layer above it.
Frequently Asked Questions
Can I use OpenAI function calling with MCP?
Not directly. OpenAI function calling is a proprietary API pattern where you define functions in the request payload and the model returns structured JSON. MCP is a separate protocol with its own transport, session management, and tool schema. However, since OpenAI adopted MCP in 2025, you can now use MCP servers with GPT-4 through OpenAI's MCP client support — meaning the ecosystems can interoperate at the MCP protocol level.
Does OpenAI support MCP?
Yes. In 2025, OpenAI announced support for MCP in their products, including the ChatGPT desktop app and the Agents SDK. This means developers can build MCP servers and have them work with both Claude and GPT-4 without modification. It was a significant industry milestone that validated MCP as a cross-vendor standard rather than an Anthropic-specific feature.
Can I convert my OpenAI function definitions into MCP tools?
Yes, and it's relatively straightforward. An OpenAI function definition has a name, description, and JSON Schema parameters — the same core fields an MCP tool definition requires. The main work is wrapping your function execution logic in an MCP server (using an MCP SDK) and handling the transport layer. The schema itself translates almost directly.
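The schema translation really is mechanical. A minimal sketch, assuming the MCP tool shape with a `name`, `description`, and an `inputSchema` field holding the JSON Schema:

```python
def openai_tool_to_mcp(openai_tool: dict) -> dict:
    """Translate an entry from an OpenAI 'tools' array into the shape
    MCP uses for tool definitions (MCP's schema field is 'inputSchema')."""
    fn = openai_tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "inputSchema": fn["parameters"],   # JSON Schema carries over as-is
    }

# The weather tool from earlier in this article, in OpenAI format.
openai_def = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}
print(openai_tool_to_mcp(openai_def)["name"])
```

The execution logic and transport wiring are where the actual porting effort goes; the definitions themselves are nearly a field rename.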
Which is easier to use?
OpenAI function calling is easier to start with — you add a 'tools' array to your API request and handle the response in your existing app code. MCP requires running a separate server process and configuring the client to connect to it. However, MCP is easier to scale and share: once your MCP server exists, any compatible client can use it without you changing your app code.