When you add an MCP server to Claude Desktop, you're making a choice that matters more than it might appear: is this server running on your machine, or is it running somewhere else? The answer has real consequences for privacy, performance, security, and setup complexity. This guide breaks down exactly how local and remote MCP servers work and when to use each.
## Two Fundamentally Different Architectures
MCP supports two deployment models, and they're not just different in where the server runs — they use completely different communication mechanisms.
### Local Servers: stdio Transport
A local MCP server is a process that runs on your own machine, launched by the MCP client (Claude Desktop, Cursor, etc.) as a child process. Communication happens via standard input and output — the same stdio pipes used by command-line tools for decades. No network sockets. No ports. No authentication overhead.
When Claude Desktop reads your claude_desktop_config.json and sees a server entry with a command field, it spawns that process and connects to it via stdio. The process lives and dies with your Claude Desktop session.
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/alice/Documents"],
      "env": {}
    }
  }
}
```
Everything about this server — the process, the data it reads, the results it returns — stays on your machine.
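The mechanics of the stdio transport can be sketched with nothing but the standard library. This is an illustration, not the real protocol: a genuine MCP server performs an initialize handshake and capability negotiation, while the stand-in child process below simply echoes JSON-RPC requests back. What it does show accurately is the transport shape: the client spawns the server as a child process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout.

```python
import json
import subprocess
import sys

# Stand-in "server": echoes a JSON-RPC response for each request line.
# A real MCP server implements the full protocol; this only demonstrates
# the stdio transport itself: newline-delimited JSON over stdin/stdout.
SERVER_CODE = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echo": req["method"]}}
    sys.stdout.write(json.dumps(resp) + "\n")
    sys.stdout.flush()
"""

def call_stdio_server(method):
    # The client spawns the server as a child process, just as Claude
    # Desktop does with the "command" entry in its config.
    proc = subprocess.Popen(
        [sys.executable, "-c", SERVER_CODE],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": {}}
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    response = json.loads(proc.stdout.readline())
    proc.stdin.close()
    proc.wait()
    return response

print(call_stdio_server("tools/list"))
```

Note that no sockets are opened anywhere: the "connection" is just the OS pipes attached to the child process, which is why it dies with the client session.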
### Remote Servers: Streamable HTTP Transport
A remote MCP server is a web service. It runs on a server somewhere — a cloud VM, a Cloudflare Worker, a VPS — and listens for HTTP requests. The client connects to it via a URL and authenticates using OAuth 2.0.
```json
{
  "mcpServers": {
    "zapier": {
      "url": "https://mcp.zapier.com/api/mcp/v1",
      "transport": "http"
    }
  }
}
```
Note the difference: instead of command, you specify a url. The client doesn't spawn a process — it makes HTTP requests to an endpoint that's already running.
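The `command`-vs-`url` distinction is mechanical enough to check in code. Here's a small sketch that classifies each entry in a `claude_desktop_config.json`-style config as local or remote based on which key it carries:

```python
import json

def classify_servers(config_text):
    """Classify each mcpServers entry: 'local' if it is spawned via a
    `command` (stdio transport), 'remote' if it is reached via a `url`."""
    config = json.loads(config_text)
    kinds = {}
    for name, entry in config.get("mcpServers", {}).items():
        if "command" in entry:
            kinds[name] = "local"
        elif "url" in entry:
            kinds[name] = "remote"
        else:
            kinds[name] = "unknown"
    return kinds

sample = """
{
  "mcpServers": {
    "filesystem": {"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem"]},
    "zapier": {"url": "https://mcp.zapier.com/api/mcp/v1"}
  }
}
"""
print(classify_servers(sample))  # {'filesystem': 'local', 'zapier': 'remote'}
```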
Learn more about the transport layer in our guide to MCP Streamable HTTP vs the deprecated SSE transport.
## Head-to-Head Comparison
| Dimension | Local (stdio) | Remote (HTTP) |
|---|---|---|
| Privacy | Gold standard — data never leaves your machine | Tool call data passes through server operator's infrastructure |
| Auth needed | No — OS process permissions only | Yes — OAuth 2.0 required |
| Serverless-compatible | No — requires persistent process | Yes — Cloudflare Workers, Lambda, etc. |
| Multi-user | No — single user per machine | Yes — many clients, one server |
| Setup effort | Low — edit config.json, done | Higher — deploy server, set up OAuth |
| Latency | Near-zero for MCP comms | Network round trip adds latency |
| Cost | Free — runs on your hardware | Compute and hosting costs |
## Privacy: The Biggest Real-World Difference
For most individuals, privacy is the deciding factor. With a local server, the data you feed to MCP tools — your documents, your code, your database queries — never leaves your machine. It passes between the client process and the server process via in-memory pipes. There is no network to intercept.
With a remote server, every tool call passes through someone else's infrastructure. The server operator can log inputs and outputs. Even if they have strong privacy policies, their servers represent an additional attack surface. For sensitive data, this is a meaningful risk.
This connects directly to the MCP trust hierarchy: remote servers sit at a lower trust level than local ones, precisely because of this data exposure.
## When to Choose Local Servers
Local servers are the right default for most personal-productivity use cases:
- Sensitive data — anything with private documents, personal notes, credentials, code you can't share externally
- Single-user setups — just you on your machine, no collaboration needed
- Low-complexity tools — filesystem access, local databases (SQLite, Postgres running locally), shell commands
- Offline-capable workflows — tools that don't need internet access at all
Well-known local servers include @modelcontextprotocol/server-filesystem, @modelcontextprotocol/server-sqlite, and the Postgres MCP server pointed at a local database instance.
## When to Choose Remote Servers
Remote servers shine in collaborative and product-oriented contexts:
- Team tools — a shared Jira or Notion MCP server that every team member connects to
- Complex infrastructure — servers that are too heavy to run locally (large ML models, enterprise data connectors)
- Services you're building — if you're creating a product where users connect their Claude to your service, remote is the only viable option
- Always-on availability — a remote server doesn't require your laptop to be open
Examples of remote servers include Zapier MCP (automation workflows), Cloudflare's MCP server (infrastructure management), and enterprise integrations built as remote MCP servers with the official SDKs. Read about how OAuth works in MCP remote servers to understand the authentication flow.
## The Hybrid Pattern: Local Server, Remote API
Here's a nuance that trips people up: many "local" MCP servers call remote APIs as part of their operation. The GitHub MCP server is the canonical example.
When you run the GitHub server locally via npx, the MCP server process lives on your machine and communicates with Claude Desktop via stdio. But when it fetches a repository's issues, it makes an HTTP request to api.github.com. Your data goes to GitHub's API — but the MCP communication layer is entirely local.
```
Claude Desktop
     │ (stdio)
     ▼
GitHub MCP Server (local process)
     │ (HTTPS to api.github.com)
     ▼
GitHub API (remote)
```
This is a distinct architecture from a remote MCP server. The MCP protocol layer is local; the tool's implementation makes outbound calls. Whether this satisfies your privacy requirements depends on whether you trust the external API with your data — which you almost certainly already do if you're using GitHub.
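To make the hybrid pattern concrete, here's a hedged sketch of what the outbound half looks like. The function name, the `list_issues` tool it stands in for, and the token handling are illustrative assumptions, not the real server's internals; the request is built but deliberately never sent. The point is that this HTTPS request is the only part that leaves your machine, while the MCP traffic stays on local stdio pipes.

```python
import urllib.request

GITHUB_API = "https://api.github.com"

def build_issues_request(owner, repo, token=None):
    """Build (but do not send) the outbound HTTPS request a locally running
    GitHub MCP server might make when a hypothetical list_issues tool is
    called. Only this request crosses the network; the MCP layer is local."""
    url = f"{GITHUB_API}/repos/{owner}/{repo}/issues"
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)

req = build_issues_request("octocat", "hello-world")
print(req.full_url)
```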
## Setup Difference in Practice
Adding a local server is a two-step process: install the package (or have the binary available) and add an entry to your client config. Claude Desktop handles the rest.
Adding a remote server involves: navigating to the server's URL, completing an OAuth flow that grants the server specific permissions, and either copying a config snippet or having the client auto-detect the server endpoint. The OAuth setup takes a few minutes but only happens once per server.
For remote servers you're building yourself, the setup burden increases significantly: you need to deploy the server, configure a domain, implement OAuth 2.0 authorization endpoints, and handle token storage. The official MCP SDKs abstract a lot of this, but it's still meaningfully more work than a local server.
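The overall shape of a remote server can be sketched with the standard library alone. This is a heavily simplified stand-in, not the real Streamable HTTP transport, which adds session management, streamed responses, and a full OAuth 2.0 flow: the endpoint path `/mcp` and the hard-coded token are placeholders. What it shows is the essential difference from stdio: an already-running HTTP service that checks credentials and answers JSON-RPC over POST.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_TOKEN = "demo-token"  # placeholder; in practice, a token issued via OAuth

class MCPHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Reject requests without the expected bearer token.
        if self.headers.get("Authorization") != f"Bearer {VALID_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        body = self.rfile.read(int(self.headers["Content-Length"]))
        req = json.loads(body)
        resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"ok": True}}
        data = json.dumps(resp).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # silence per-request logging
        pass

# Start the server on an ephemeral port, then act as the client.
server = HTTPServer(("127.0.0.1", 0), MCPHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

request = urllib.request.Request(
    f"http://127.0.0.1:{port}/mcp",
    data=json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode(),
    headers={"Authorization": f"Bearer {VALID_TOKEN}", "Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as reply:
    result = json.loads(reply.read())
server.shutdown()
print(result)
```

Notice that the client never spawns anything: it just sends authenticated HTTP requests to a URL, which is exactly why remote servers can serve many users at once.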
## Frequently Asked Questions
### Is the GitHub MCP server local or remote?

It depends on how you run it. The GitHub MCP server is typically installed as a local server — it runs as a subprocess on your machine via stdio, launched by npx or a similar command. However, it connects outward to GitHub's REST API over the internet to fetch data. So the MCP server itself is local, but the data it retrieves comes from GitHub's cloud. This is the hybrid pattern: local server, remote API.
### What data can remote MCP servers see?

Remote MCP servers can see the inputs sent to their tools and the outputs they return. They cannot see your full conversation with Claude — only the tool call parameters and results that pass through their endpoint. However, if a tool receives a document or query as input, the operator of the remote server does see that data. For sensitive conversations or data, use local MCP servers that run entirely on your machine.
### How do I convert a local MCP server to a remote one?

Converting a local server to remote involves deploying it to a server environment (VPS, cloud function, Cloudflare Worker), implementing the Streamable HTTP transport instead of stdio, and adding OAuth 2.0 authentication so clients can securely connect. The official MCP SDKs for Python and TypeScript support both transports, so the core server logic rarely needs to change — you mainly add the HTTP layer and auth middleware. The client config changes from a command entry to a URL entry.
### Are local MCP servers faster than remote ones?

Local servers are almost always faster for the MCP communication layer itself — there's no network round trip between Claude Desktop and the server. However, if both types of server are calling an external API (like GitHub or a database), that external latency dominates and the local-vs-remote distinction matters less. For operations that are purely local — reading files, querying a local database — local servers are significantly faster than remote equivalents.