Three months after adding JSON-RPC batching, the MCP team removed it. That's the headline for the June 18, 2025 spec revision — a feature reversal that forced anyone who had already implemented batching to rip it out. But 2025-06-18 wasn't just subtraction. It also added structured tool outputs, OAuth Resource Servers, and elicitation — a mechanism that lets servers ask users questions mid-conversation. Here's the complete picture of what changed and what it means if you're maintaining MCP servers built against the March spec.

Where March 2025 Left Things

The March 2025 update (2025-03-26) was the first major revision to MCP and it delivered four significant changes: Streamable HTTP transport (replacing SSE), OAuth 2.0 authentication, tool annotations, and JSON-RPC batching. By the time the June update arrived, the ecosystem had had about three months to adopt these changes.

Most of the March additions aged well. Streamable HTTP was a clear improvement. Tool annotations became immediately useful. OAuth gave remote servers a real auth story. But JSON-RPC batching generated friction almost immediately after it shipped, and the June spec responded by cutting it entirely.

The June 2025 spec is the third published version of MCP. For a full timeline of all versions, see our complete MCP version history.

The Removal: JSON-RPC Batching Is Gone

JSON-RPC 2.0 allows clients to send an array of requests in a single message and receive an array of responses. MCP's March 2025 update adopted this pattern. Three months later, June 2025 removed it.

This is the most disruptive change in 2025-06-18 for anyone who implemented against the March spec. If your server handles batch request arrays, clients built against June 2025 will not send them. If your client sends batch arrays, June 2025-compliant servers are not required to handle them.

Why It Was Removed

The official reasoning centers on implementation complexity versus real-world benefit. The MCP working group found that batching created two problems:

  1. Server complexity with limited upside. Every server had to handle partially failed batches — what do you return when 8 of 10 requests succeed and 2 fail? The spec required servers to return individual error objects for failed requests while still returning results for successful ones. This is tricky to implement correctly and even trickier to test.
  2. The use case was narrow. The primary scenario batching was meant to solve — fetching capabilities in parallel at session start — could be addressed more cleanly by improving the initialization handshake. True parallel independent requests are uncommon in MCP workflows; most tool calls depend on the context established by previous ones.

The community reaction was mixed. Developers who had shipped batching support expressed frustration at the churn. Others pointed out that three months wasn't enough time to know whether batching was actually solving a problem at scale. The spec team's position was clear: better to remove it early than to carry a poorly motivated feature indefinitely.

Migration Path

If you implemented batching on the server side, the path forward is straightforward:

  • Remove the batch-array handling code from your request router
  • Ensure your server returns a proper JSON-RPC error (invalid request, code -32600) if it receives a batch array, rather than silently ignoring it
  • Any client code that sends batch requests needs to be refactored to send sequential individual requests
// March 2025 — this was valid
POST /mcp
[
  { "jsonrpc": "2.0", "id": 1, "method": "tools/list",     "params": {} },
  { "jsonrpc": "2.0", "id": 2, "method": "resources/list", "params": {} }
]

// June 2025 — send as separate requests instead
POST /mcp
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {} }

POST /mcp
{ "jsonrpc": "2.0", "id": 2, "method": "resources/list", "params": {} }
Batch arrays must be split into individual requests when targeting the June 2025 spec.
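On the server side, the rejection step above can be sketched as a small guard at the top of the request router. This is a minimal Python sketch, not taken from any MCP SDK; `handle_single_request` is a hypothetical placeholder for your real dispatcher.

```python
import json

INVALID_REQUEST = -32600  # JSON-RPC 2.0 "Invalid Request" error code


def handle_single_request(message: dict) -> dict:
    # Placeholder: a real server dispatches on message["method"] here.
    return {"jsonrpc": "2.0", "id": message.get("id"), "result": {}}


def route_request(raw_body: str) -> dict:
    """Parse a JSON-RPC message and reject batch arrays (June 2025 behavior)."""
    message = json.loads(raw_body)

    # June 2025: batch arrays are no longer part of MCP. Return a proper
    # JSON-RPC error instead of silently dropping the request.
    if isinstance(message, list):
        return {
            "jsonrpc": "2.0",
            "id": None,  # a rejected batch has no single id to echo back
            "error": {
                "code": INVALID_REQUEST,
                "message": "Batch requests are not supported (MCP 2025-06-18)",
            },
        }

    return handle_single_request(message)
```

Many JSON-RPC libraries will produce a similar error automatically once you remove the batch handler; the sketch just makes the expected behavior explicit.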

Structured Tool Outputs: Tools Now Return Typed Data

This is the most developer-friendly addition in June 2025. Before this update, MCP tools returned content — text, images, or embedded resources. If a tool fetched stock data, it returned a text string like "AAPL: $189.43, up 2.1%". The AI model had to parse that text to extract the individual values.

Structured tool outputs change this. A tool can now declare an outputSchema in its definition — a JSON Schema that describes the exact shape of its return value. When the tool runs, it returns a structured object matching that schema.

// Tool definition with outputSchema (June 2025)
{
  "name": "get_stock_quote",
  "description": "Fetches the current price and change for a stock ticker.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "ticker": { "type": "string", "description": "Stock ticker symbol, e.g. AAPL" }
    },
    "required": ["ticker"]
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "ticker":         { "type": "string" },
      "price":          { "type": "number" },
      "change":         { "type": "number" },
      "changePercent":  { "type": "number" },
      "currency":       { "type": "string" }
    },
    "required": ["ticker", "price", "change", "changePercent", "currency"]
  }
}

// Tool call result — structured, not text
{
  "content": [],
  "structuredContent": {
    "ticker": "AAPL",
    "price": 189.43,
    "change": 3.89,
    "changePercent": 2.1,
    "currency": "USD"
  }
}
A tool that returns structured data. The model receives typed values it can reliably read, not a string it has to parse.

The practical benefits are significant. The model doesn't have to guess whether "189.43" is a price or a quantity. Downstream tools that consume another tool's output can do so reliably. Clients can display structured results in custom UI — a table, a chart, a formatted card — rather than rendering raw text.

Outputting structured data is optional. Tools that don't declare an outputSchema keep returning text content exactly as before. The feature is purely additive; no existing tools break.
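Because the result must match the declared schema, it's worth checking your `structuredContent` against the `outputSchema` before returning it. The sketch below is a deliberately minimal check written for illustration — it covers only required keys and primitive types; a production server would use a full JSON Schema validator instead.

```python
def check_structured_content(schema: dict, content: dict) -> list[str]:
    """Minimal structural check of a tool result against an outputSchema.

    Covers only required keys and primitive JSON types -- a sketch,
    not a full JSON Schema validator.
    """
    type_map = {
        "string": str,
        "number": (int, float),
        "boolean": bool,
        "object": dict,
        "array": list,
    }
    errors = []

    # Every key listed in "required" must be present.
    for key in schema.get("required", []):
        if key not in content:
            errors.append(f"missing required field: {key}")

    # Every present key with a declared type must match it.
    for key, value in content.items():
        declared = schema.get("properties", {}).get(key, {}).get("type")
        expected = type_map.get(declared)
        if expected and not isinstance(value, expected):
            errors.append(f"{key}: expected {declared}, got {type(value).__name__}")

    return errors
```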

For a full guide to implementing and consuming structured outputs, see our MCP Structured Tool Outputs explainer.

OAuth Resource Servers: More Granular Authorization

The March 2025 spec introduced OAuth 2.0 as the authentication standard for remote MCP servers. June 2025 extends this with the concept of OAuth Resource Servers, a pattern built on OAuth 2.0 Protected Resource Metadata (RFC 9728) and Resource Indicators (RFC 8707) that allows finer-grained control over what an access token can actually access.

In the March model, a token authorized access to an MCP server as a whole. In the June model, a single authorization server can issue tokens scoped to specific resources — individual databases, specific API endpoints, particular data collections — that multiple MCP servers expose.

// June 2025 — Resource Server metadata endpoint
GET /.well-known/oauth-protected-resource

// Response describes what this server offers and what auth server governs it
{
  "resource": "https://files-mcp.example.com",
  "authorization_servers": ["https://auth.example.com"],
  "scopes_supported": ["files:read", "files:write", "files:delete"],
  "bearer_methods_supported": ["header"]
}
Resource Server metadata lets clients discover the auth server and available scopes before requesting tokens.

The practical impact: an enterprise deploying multiple MCP servers can use a single OAuth authorization server to issue tokens with fine-grained scopes across all of them. A token that grants files:read access to the file server cannot be used to write to the database server. This makes MCP practical for organizations that need audit trails and least-privilege access control.
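From the client's side, the discovery step is simple: fetch the metadata document, pick an authorization server, and note which scopes the resource supports before requesting a token. A sketch of that parsing step, using the RFC 9728 field names shown above (the helper function itself is hypothetical):

```python
import json


def parse_resource_metadata(raw: str) -> tuple[str, list[str]]:
    """Extract the authorization server and supported scopes from
    protected-resource metadata (RFC 9728 field names)."""
    meta = json.loads(raw)
    auth_servers = meta.get("authorization_servers", [])
    if not auth_servers:
        raise ValueError("metadata lists no authorization server")
    # A client would next fetch this auth server's own metadata, then
    # request a token carrying the "resource" value as an RFC 8707
    # resource indicator, scoped to the operations it actually needs.
    return auth_servers[0], meta.get("scopes_supported", [])
```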

Elicitation: Servers Can Now Ask Users for Input

Elicitation is the most architecturally novel feature in June 2025. It inverts a basic assumption of how MCP servers work.

Previously, information flowed one way into a tool call: the AI model decided what arguments to pass, the tool ran, results came back. If the server needed information it didn't have — which database schema to use, what date range to apply, whether to overwrite an existing file — it had to return an error or make a guess. The AI might then ask the user and retry, but that's indirect and unreliable.

Elicitation lets the server pause execution and send a structured request directly to the client asking for specific information from the user. The client renders a form based on the request schema. The user fills it in. The data goes back to the server. Execution continues.

// Server sends an elicitation request during tool execution
// (sent as a JSON-RPC request from server to client)
{
  "jsonrpc": "2.0",
  "id": "elicit-1",
  "method": "elicitation/create",
  "params": {
    "message": "Which date range should this report cover?",
    "requestedSchema": {
      "type": "object",
      "properties": {
        "startDate": {
          "type": "string",
          "format": "date",
          "title": "Start date"
        },
        "endDate": {
          "type": "string",
          "format": "date",
          "title": "End date"
        },
        "includeWeekends": {
          "type": "boolean",
          "title": "Include weekends?",
          "default": false
        }
      },
      "required": ["startDate", "endDate"]
    }
  }
}

// Client returns the user's input
{
  "jsonrpc": "2.0",
  "id": "elicit-1",
  "result": {
    "action": "accept",
    "content": {
      "startDate": "2025-01-01",
      "endDate":   "2025-06-30",
      "includeWeekends": true
    }
  }
}
Elicitation: the server asks a structured question, the client renders a form, the user's response goes back to the server.

There are three possible responses a client can return:

  • "action": "accept" — the user filled in the form and submitted it
  • "action": "decline" — the user explicitly refused to provide the information
  • "action": "cancel" — the user dismissed the prompt without responding

The server can handle each case differently. Decline might trigger a fallback path. Cancel might abort the operation. Accept continues with the provided data.
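That three-way branch maps naturally onto a small dispatcher in the server's tool handler. A Python sketch of that pattern — the fallback and abort behaviors shown here are illustrative choices, not prescribed by the spec:

```python
def handle_elicitation_result(result: dict) -> dict:
    """Branch on the three possible elicitation outcomes (June 2025)."""
    action = result.get("action")
    if action == "accept":
        # User submitted the form: continue with the provided values.
        return {"status": "continue", "data": result.get("content", {})}
    if action == "decline":
        # User explicitly refused: take a fallback path, e.g. defaults.
        return {"status": "fallback", "data": {}}
    if action == "cancel":
        # User dismissed the prompt: abort the operation cleanly.
        return {"status": "aborted", "data": {}}
    raise ValueError(f"unexpected elicitation action: {action!r}")
```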

Elicitation is a powerful pattern but it requires thoughtful use. Servers that elicit constantly will annoy users. The intent is for servers to elicit only when genuinely needed — when a required piece of information cannot be inferred and the cost of guessing wrong is high. Read more about the design patterns and limits of this feature in our dedicated MCP Elicitation guide.

What Did Not Change in June 2025

The June 2025 update is significant but not a wholesale rewrite. The following remain unchanged from March 2025:

  • Streamable HTTP transport (still the standard; SSE still deprecated)
  • Tool annotations (readOnlyHint, destructiveHint, idempotentHint)
  • The core JSON-RPC 2.0 message format for individual requests
  • stdio transport for local servers
  • The three core capability types: tools, resources, prompts
  • Sampling (server-initiated LLM requests to the client)

Who Is Affected and What You Need to Do

Here's a clear breakdown by server type and what the June 2025 changes require:

Servers that implemented JSON-RPC batching (March spec)

This is the one mandatory action item. You must remove batch-array handling. If your server receives a batch array and has no handler, most JSON-RPC libraries will return a standard error response automatically — check that your error handling is correct rather than silently dropping the request.

Remote HTTP servers seeking tighter auth control

Implement the OAuth Resource Server metadata endpoint. This is optional but worthwhile if you're running multiple MCP servers in an organization and want centralized access control. You'll need to expose /.well-known/oauth-protected-resource with the correct metadata structure.

Servers with complex multi-step workflows

Evaluate elicitation for any workflow step where you currently return errors asking the user for more information. A well-placed elicitation request is far cleaner than an error-retry loop through the AI model. However, elicitation requires client support — check that your target clients have implemented the elicitation/create method before relying on it.

Any server that returns tool results

Consider whether structured outputs would improve your tools. If a tool currently returns a text blob containing structured information (a JSON string, a CSV row, a formatted table), switching to a proper outputSchema with structuredContent will make that data more reliable for the model to use and enable richer client-side rendering.

# Quick audit checklist for June 2025 compliance

1. Batching removed?
   grep -r "Array.isArray" ./src/handlers/    # find batch-array checks
   grep -r "jsonrpc.*batch" ./src/            # find any batch-specific logic

2. No batching on client side?
   grep -r "send.*\[" ./src/client/           # find array sends

3. Structured outputs added where useful?
   # Review tools that return text containing numbers or structured data

4. Elicitation available for complex workflows?
   # Identify tool handlers that currently return "need more info" errors
A quick audit to check your codebase for June 2025 compatibility.

Frequently Asked Questions

Why was JSON-RPC batching removed from MCP?

The MCP team found that batching created implementation complexity for server authors without a proportional benefit in practice. Most MCP operations happen in sequences where each step depends on the result of the previous one, so true parallel batching is rarely possible. The initialization use case — fetching multiple capability lists at once — was better served by improving the session handshake directly. Three months of feedback from real implementations showed more pain than gain.

What are structured tool outputs?

Structured tool outputs let a tool return typed data — objects, arrays, numbers — rather than just text strings. The tool definition includes an outputSchema (a JSON Schema) that describes the shape of the returned data, and the tool's response includes a structuredContent field containing the actual structured value. The AI model can then read and reason about this data more reliably than it could parse equivalent information from unstructured text. Existing tools that return only text are unaffected — the feature is purely additive.

What is elicitation and when should a server use it?

Elicitation allows an MCP server to pause a task and ask the user a question directly, rather than having the AI relay the question. A server uses it when it needs specific information it can't infer — like which database to connect to, what date range to use, or whether to proceed with a destructive action. The server sends an elicitation/create request with a JSON Schema describing what it needs, and the client renders a form for the user to fill in. The user can accept (providing the data), decline (refusing), or cancel (dismissing the prompt).

Does the June 2025 update break my existing server?

Mostly no. If you have a local stdio server that doesn't use JSON-RPC batching, the June 2025 update doesn't break anything you have running. Structured outputs and elicitation are additive features you adopt when ready. OAuth Resource Servers only apply to remote HTTP servers. The one action item for all server authors is to remove any batching code, since clients built against the June spec are not required to support it. If you never implemented batching, your server is already compliant.