If you're still using SSE for your MCP server, you're on a deprecated transport. The March 2025 spec update replaced Server-Sent Events with Streamable HTTP — a change that fixes real infrastructure problems SSE couldn't solve. This guide explains what changed, why it matters, and exactly which transport you should use depending on your setup.
A Quick Primer: What Are MCP Transports?
An MCP transport is the communication channel between an MCP client (like Claude Desktop) and an MCP server (the thing that provides tools). The MCP protocol defines what messages get exchanged; the transport defines how those messages physically move between the two parties.
Think of MCP as a language and the transport as the medium — you can speak the same language over a phone call, over email, or in person. The words are the same; the delivery mechanism is different.
MCP has always supported multiple transports because different deployment scenarios have different needs. A tool running on your laptop doesn't need the same infrastructure as a tool running in a cloud function serving thousands of users.
SSE: MCP's Original Remote Transport
Server-Sent Events (SSE) is a web standard that lets a server push a stream of events to a client over a single, long-lived HTTP connection. The client opens one connection and keeps it open; the server sends data down that connection as events occur.
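On the wire, an SSE stream is plain text sent with a `text/event-stream` content type: each event is a block of `field: value` lines terminated by a blank line. As a minimal sketch (the helper name and payload are illustrative, not part of any SDK):

```typescript
// Format one Server-Sent Event frame: an optional "event:" line,
// one "data:" line per payload line, and a blank-line terminator.
function formatSSEEvent(data: string, eventName?: string): string {
  const lines: string[] = [];
  if (eventName) lines.push(`event: ${eventName}`);
  for (const part of data.split("\n")) lines.push(`data: ${part}`);
  return lines.join("\n") + "\n\n";
}

// A JSON-RPC result pushed down the stream would look like:
console.log(formatSSEEvent('{"jsonrpc":"2.0","id":1,"result":{}}', "message"));
```

The blank line between frames is what lets the browser-side `EventSource` API (and any other SSE parser) know where one event ends and the next begins.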
SSE was MCP's original remote transport, introduced in the first public spec release on November 5, 2024 (MCP 2024-11-05). The design made sense at the time: MCP needed a way to stream responses from a remote server to a client, and SSE is a well-understood HTTP standard that's simpler than WebSockets.
In the original MCP SSE setup, the client connected to the server's SSE endpoint and kept that connection open. Requests, including tool calls, were sent to the server via a separate HTTP POST endpoint. The server then pushed results back down the SSE stream. Two channels: one for pushing events, one for sending requests.
```
// Original MCP SSE architecture (deprecated)
Client → POST /message (sends tool call request)
Client ← GET /sse (receives streaming response)
// Two separate HTTP channels — persistent SSE connection required
```
Why SSE Was Deprecated: The Serverless Problem
SSE worked fine for servers running as persistent processes — a Node.js server on a VPS, a Python service in a Docker container. But the modern cloud runs on serverless functions, and SSE breaks there fundamentally.
Here's the core issue: serverless functions like Cloudflare Workers and AWS Lambda are designed to handle a single request and then terminate. They don't stay alive between requests. SSE requires a persistent TCP connection that stays open for the duration of the session — sometimes minutes or hours. A serverless function cannot hold that connection.
The practical consequences were significant:
- No serverless deployment: You couldn't put an MCP server behind Cloudflare Workers, AWS Lambda, Google Cloud Functions, or Vercel Edge Functions.
- Load balancer incompatibility: Many load balancers have aggressive connection timeouts. A persistent SSE connection gets killed before tool calls complete.
- Reverse proxy headaches: Nginx, Caddy, and other proxies often buffer HTTP responses. SSE requires non-buffered streaming, which needs special configuration that's easy to get wrong.
- Scaling complexity: Sticky sessions are required when you have multiple server instances, because the SSE connection must go to the same instance that receives the POST requests.
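As an illustration of the proxy problem, here is the kind of nginx configuration an SSE endpoint typically required (the directives are standard nginx; the upstream address is a placeholder):

```nginx
location /sse {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;         # HTTP/1.1 needed for long-lived streaming
    proxy_set_header Connection ""; # clear Connection header for upstream keep-alive
    proxy_buffering off;            # deliver events immediately instead of buffering
    proxy_read_timeout 1h;          # don't kill the idle stream mid-session
}
```

Forget any one of these and the stream silently stalls or drops, which is exactly the "easy to get wrong" configuration burden described above.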
These weren't edge cases — they were blockers for a large fraction of how cloud infrastructure actually works today. The MCP team recognized this and designed a replacement. For the full history of what changed in the March 2025 update, see the MCP 2024 vs March 2025 spec comparison.
Streamable HTTP: How It Solves SSE's Problems
Streamable HTTP, introduced in MCP 2025-03-26, keeps the benefits of SSE (streaming responses, real-time progress updates) while eliminating the requirement for a persistent connection.
The core insight is simple: instead of requiring a dedicated long-lived SSE channel, Streamable HTTP sends everything over normal HTTP POST requests. The server's response to a POST can itself be a streaming body — either a single JSON response or a stream of SSE-formatted events delivered within the response body of a single HTTP request.
```
// Streamable HTTP architecture (current standard)
Client → POST /mcp (sends request)
Client ← HTTP response (can be: single JSON OR streaming SSE body)
// One channel — standard HTTP request/response
// No persistent connection required
```
When the server has a simple, non-streaming response, it replies with a plain JSON body and closes the connection immediately. When the server needs to stream progress or push intermediate events, it uses HTTP streaming (chunked transfer encoding) within the same response. The connection is only held open as long as that single request is being served.
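That decision can be sketched as a pure function (the helper name and frame shape here are illustrative, not the SDK's API): a complete result goes out as one JSON document, while a streamed result goes out as SSE frames inside the same HTTP response body.

```typescript
type HttpBody = { contentType: string; body: string };

// Build the response body for one Streamable HTTP POST.
// Non-streaming: a single application/json document.
// Streaming: SSE frames within the same response, one per chunk.
function buildResponse(chunks: string[], streaming: boolean): HttpBody {
  if (!streaming) {
    return { contentType: "application/json", body: chunks.join("") };
  }
  const frames = chunks.map((c) => `data: ${c}\n\n`).join("");
  return { contentType: "text/event-stream", body: frames };
}
```

Either way, the exchange begins and ends within a single request/response cycle, which is what makes it compatible with the infrastructure listed below.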
This approach is compatible with every layer of modern HTTP infrastructure:
- Serverless functions: Each MCP request is an independent HTTP POST. Cloudflare Workers, Lambda, and Vercel all handle these natively.
- Load balancers: No sticky sessions needed. Each request can go to any instance.
- Reverse proxies: Standard HTTP proxying works without special streaming configuration.
- CDNs: Requests can pass through CDN edge nodes without special handling.
Transport Comparison Table
| Feature | SSE | Streamable HTTP | stdio |
|---|---|---|---|
| Direction | Server→Client only | Bidirectional | Bidirectional |
| Persistent connection | Required | Not required | N/A (process pipe) |
| Serverless compatible | No | Yes | N/A |
| Load balancer friendly | Needs sticky sessions | Yes, stateless | N/A |
| Spec status | Deprecated (2025-03-26) | Current standard | Active (local) |
| Best use case | Legacy servers only | Remote servers | Local servers |
stdio: Still the Right Answer for Local Servers
The SSE-to-Streamable-HTTP transition is entirely about remote MCP servers — servers running somewhere on the internet that clients connect to over HTTP. It has nothing to do with local MCP servers.
For a server running on the same machine as the client (like most MCP servers used with Claude Desktop), stdio is still the standard transport. stdio uses standard input and output streams — the client spawns the server as a child process and pipes MCP messages through stdin/stdout.
stdio works like this:
```
// stdio transport (local servers)
Client spawns server process
Client → server via stdin (newline-delimited JSON)
Client ← server via stdout (newline-delimited JSON)
// Simple, fast, zero network overhead
```
stdio is ideal for local servers because it's dead simple — no networking, no authentication, no ports to configure. If you're building a tool that only runs locally, use stdio. Only reach for Streamable HTTP when you're deploying a server that remote clients will connect to over the internet. For a practical look at setting up local servers, see the MCP server installation guide.
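The newline-delimited framing is simple enough to sketch in a few lines: each JSON-RPC message occupies exactly one line on the pipe. This is an illustrative encoder/decoder, not the SDK's implementation:

```typescript
// Encode one message for the stdio transport: one JSON document per line.
function encodeMessage(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

// Decode a chunk read from stdin into complete messages, plus any
// trailing partial line to carry over into the next read.
function decodeChunk(chunk: string): { messages: object[]; rest: string } {
  const lines = chunk.split("\n");
  const rest = lines.pop() ?? ""; // last element is incomplete (or empty)
  const messages = lines.filter((l) => l.trim() !== "").map((l) => JSON.parse(l));
  return { messages, rest };
}
```

The carry-over `rest` matters in practice because stdin delivers arbitrary byte chunks, so a message boundary rarely lines up with a read boundary.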
Migration Guide: Switching from SSE to Streamable HTTP
If you have an existing SSE-based MCP server, here's how to migrate. The good news: the MCP protocol logic doesn't change. You're only swapping the transport layer.
Step 1: Update your MCP SDK
Make sure you're using an MCP SDK version that supports Streamable HTTP. For TypeScript, that means a release of @modelcontextprotocol/sdk that exports StreamableHTTPServerTransport; for Python, a release of the mcp package that includes the streamable HTTP transport. Check each SDK's changelog for the exact minimum version.
```shell
# TypeScript
npm install @modelcontextprotocol/sdk@latest

# Python
pip install mcp --upgrade
```
Step 2: Replace the SSE transport with Streamable HTTP
In the TypeScript SDK, swap SSEServerTransport for StreamableHTTPServerTransport:
```typescript
// BEFORE: SSE transport (deprecated)
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

// The transport must be shared between the two endpoints
// (a multi-client server would key transports by session ID)
let transport: SSEServerTransport;

app.get("/sse", async (req, res) => {
  transport = new SSEServerTransport("/message", res);
  await server.connect(transport);
});

app.post("/message", async (req, res) => {
  await transport.handlePostMessage(req, res);
});
```

```typescript
// AFTER: Streamable HTTP transport (current)
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

app.post("/mcp", async (req, res) => {
  // Stateless mode: no session IDs, a fresh transport per request
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});
```
Step 3: Update your client configuration
If you control the client configuration, update the transport URL from the old SSE endpoint to the new single /mcp endpoint. In Claude Desktop's config, this means updating the url field in the server entry.
Step 4: Handle backward compatibility (optional)
If you have existing clients you can't immediately update, you can serve both transports during a transition period — keep the old SSE endpoints alive while also exposing the new Streamable HTTP endpoint. Drop the SSE endpoints once all clients are updated.
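In the same diagram style as above, the transition period looks like this:

```
// Transition period: one server, both transports
Legacy clients → GET /sse + POST /message (deprecated SSE endpoints)
New clients    → POST /mcp (Streamable HTTP endpoint)
// Retire /sse and /message once every client has migrated
```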
Which Transport Should You Choose?
The decision is straightforward once you know your deployment scenario:
Use stdio when:
- Your server runs on the user's local machine
- You're building a tool for personal use with Claude Desktop or another local client
- You want the simplest possible setup with zero networking complexity
Use Streamable HTTP when:
- Your server is deployed remotely and clients connect over the internet
- You're deploying to serverless infrastructure (Cloudflare Workers, AWS Lambda, Vercel)
- You need to serve multiple clients from one server instance
- You want the server to be accessible without installing anything locally
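As a hypothetical example of how the two choices differ on the client side, here is what the common `mcpServers` configuration shape looks like for each (the names, paths, and URL are placeholders; consult your client's documentation for its exact fields):

```json
{
  "mcpServers": {
    "local-tool": { "command": "node", "args": ["./my-server/index.js"] },
    "remote-tool": { "url": "https://mcp.example.com/mcp" }
  }
}
```

A stdio server is something the client launches; a Streamable HTTP server is something the client connects to.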
Avoid SSE for new servers. It still works for backward compatibility, but you're building on a deprecated foundation. Any infrastructure investment you make in SSE is debt you'll pay back when the spec drops it entirely.
For a deeper look at how these transport changes fit into the broader spec evolution, see the complete MCP version history — it covers every spec update from the original 2024 draft through the latest releases.
Frequently Asked Questions
Has SSE been removed from MCP entirely?
No — SSE was deprecated in MCP 2025-03-26, not removed. It's still supported for backward compatibility so existing SSE-based servers continue to work with clients that implement the spec. However, new servers should use Streamable HTTP, and Anthropic has signaled SSE may be fully removed in a future spec version. Don't build new infrastructure on a deprecated transport.
Does Streamable HTTP work on serverless platforms?
Yes — that's one of the main reasons it was introduced. Cloudflare Workers, AWS Lambda, and other serverless platforms don't support persistent TCP connections, which SSE required. Streamable HTTP uses standard request/response cycles with optional streaming bodies, which work fine in serverless environments. You can now deploy an MCP server to the Cloudflare edge without any workarounds.
Should I migrate my existing SSE-based server?
If your server is actively used or maintained, yes. The migration is not dramatic — you're changing how the transport layer handles connections, not the MCP protocol logic itself. Most SDK implementations provide a drop-in Streamable HTTP transport class you can swap in. Keep the SSE endpoints temporarily if you have existing clients you can't immediately update. The longer you wait, the more likely SSE support gets removed from a future spec version.
Why didn't MCP use WebSockets instead?
WebSockets were considered but rejected for remote MCP transport because they also require persistent connections, which breaks serverless deployments. They also have inconsistent support through corporate proxies and load balancers — many enterprise networks block or interfere with WebSocket upgrades. Streamable HTTP achieves bidirectional streaming without requiring a persistent socket, making it more broadly compatible across different network environments.