An MCP server can expose three completely different types of capabilities to Claude: tools, resources, and prompts. Most tutorials only cover tools — and that's understandable, because tools are what most servers actually use. But if you're building a server or evaluating one, understanding all three primitives changes what you think is possible. Tools let Claude act. Resources let Claude watch. Prompts let users hand Claude a pre-loaded workflow. Here's exactly how each one works.

The Three Primitives at a Glance

Before diving deep, here's the one-sentence version of each:

  • Tools — functions Claude can call to do things. The result comes back and Claude incorporates it into its response.
  • Resources — data sources identified by a URI that Claude (or the host) can read. They can also push updates when data changes.
  • Prompts — parameterized prompt templates the server pre-defines. Users browse and invoke them; Claude executes the workflow.

The comparison table below shows the key mechanical differences:

| Capability | Who Initiates | What It Returns | Best For |
|------------|---------------|-----------------|----------|
| Tool | Claude (AI model) | Text or structured content | Taking actions, fetching data on demand |
| Resource | Client / host | Text or binary data at a URI | Live data, documents, subscriptions |
| Prompt | User (via host UI) | A pre-built message sequence | Reusable workflows, complex instructions |

Tools: Claude's Hands

Tools are the workhorse of the MCP ecosystem. When you hear "I connected Claude to GitHub" or "Claude can read my filesystem," tools are what makes that possible. A tool is essentially a function — it has a name, a description, and an input schema defining what parameters it accepts. Claude reads the description and schema, decides when to call the tool, and sends a tools/call JSON-RPC request with the appropriate arguments.

Here's what a tool definition looks like in a tools/list response:

{
  "name": "create_github_issue",
  "description": "Creates a new issue in a GitHub repository. Use this when the user wants to log a bug, feature request, or task.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "owner": {
        "type": "string",
        "description": "The GitHub username or organization that owns the repository"
      },
      "repo": {
        "type": "string",
        "description": "The repository name"
      },
      "title": {
        "type": "string",
        "description": "The issue title"
      },
      "body": {
        "type": "string",
        "description": "The issue body in Markdown"
      }
    },
    "required": ["owner", "repo", "title"]
  }
}
A tool definition from a GitHub MCP server. The description is what Claude reads to decide when to use this tool. The inputSchema is a JSON Schema object that tells Claude what parameters to supply.
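
To make the round trip concrete, here is a sketch in Python of the tools/call exchange for that definition. The message shapes follow JSON-RPC 2.0 as used by MCP; the request id and argument values are hypothetical.

```python
# Hypothetical tools/call request the client sends on Claude's behalf.
request = {
    "jsonrpc": "2.0",
    "id": 7,                       # request id chosen by the client
    "method": "tools/call",
    "params": {
        "name": "create_github_issue",
        "arguments": {             # must satisfy the tool's inputSchema
            "owner": "octocat",
            "repo": "hello-world",
            "title": "Login page throws 500 on empty password",
        },
    },
}

# A plausible success response: the result carries a list of content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 7,                       # echoes the request id
    "result": {
        "content": [
            {"type": "text", "text": "Created issue #42"}
        ],
        "isError": False,
    },
}
```

Note that the optional "body" argument is simply omitted; only the schema's required fields must be present.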

Tools that servers commonly expose include:

  • File operations: read_file, write_file, list_directory
  • Database queries: execute_query, list_tables, describe_schema
  • API calls: send_slack_message, create_github_issue, search_web
  • System operations: run_command, get_process_list, read_environment

Tools can return plain text (the most common case) or structured content. With the structured tool outputs feature added in 2025, tools can return typed JSON that Claude and the host can work with programmatically — not just as readable text.
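
As a sketch, a structured result carries typed JSON alongside the plain-text fallback. The field name structuredContent follows the 2025 spec revision; the payload here is hypothetical.

```python
# Hypothetical tool result with both a text fallback (for display)
# and structuredContent (for programmatic use by the host).
result = {
    "content": [
        {"type": "text", "text": "Issue #42 created"}
    ],
    "structuredContent": {
        "issueNumber": 42,
        "url": "https://github.com/octocat/hello-world/issues/42",
    },
}
```

Hosts that don't understand the structured payload can fall back to rendering the text block.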

One important point: tool execution can have side effects. When Claude calls send_slack_message, that message gets sent. Hosts should display tool calls to users before executing them, and the MCP spec includes tool annotations that let server authors signal whether a tool is read-only or destructive — hints a host can use to decide when to ask the user for confirmation.
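
Annotations ride along on the tool definition itself. A sketch, assuming the readOnlyHint and destructiveHint fields from the MCP tool-annotations spec (the tools and the confirmation policy are hypothetical):

```python
# Hypothetical tool definitions carrying MCP tool annotations.
# These are advisory hints, not security guarantees -- a host must
# not trust them for access control.
tools = [
    {
        "name": "get_open_prs",
        "description": "List open pull requests",
        "inputSchema": {"type": "object", "properties": {}},
        "annotations": {"readOnlyHint": True},
    },
    {
        "name": "send_slack_message",
        "description": "Post a message to a Slack channel",
        "inputSchema": {
            "type": "object",
            "properties": {
                "channel": {"type": "string"},
                "text": {"type": "string"},
            },
            "required": ["channel", "text"],
        },
        "annotations": {"readOnlyHint": False, "destructiveHint": True},
    },
]

# One policy a host might apply: confirm anything not marked read-only.
needs_confirmation = [
    t["name"] for t in tools
    if not t.get("annotations", {}).get("readOnlyHint", False)
]
```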

Resources: Claude's Eyes on Live Data

Resources are data endpoints. A resource has a URI — a unique identifier that looks like a URL or a file path — and Claude or the host can read its contents at any time. Resources can also support subscriptions, which means the server pushes a notification whenever the data changes.

Resources are identified by URIs like these:

  • file:///home/user/project/README.md — a local file
  • postgres://mydb/users — a database table
  • github://repos/octocat/hello-world/branches/main — a GitHub branch
  • metrics://cpu/usage — a live system metric

Here's what a resources/list response looks like:

{
  "resources": [
    {
      "uri": "github://repos/myorg/api/branches/main",
      "name": "main branch status",
      "description": "Live status of the main branch including latest commit and CI state",
      "mimeType": "application/json"
    },
    {
      "uri": "file:///var/log/app.log",
      "name": "application log",
      "description": "Current application log file",
      "mimeType": "text/plain"
    }
  ]
}
A resources/list response. Each resource has a URI, a human-readable name, an optional description, and a MIME type indicating what kind of data to expect.

The key difference from tools: resources are about observing data, not acting on it. If you want Claude to watch a dashboard, monitor a log file, or stay current on a database table without repeatedly asking Claude to "check again" — resources with subscriptions are the right primitive. The server fires a notifications/resources/updated message whenever data changes, and the client can re-read the resource without Claude having to invoke a tool.
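
That flow can be sketched as three JSON-RPC messages. Method names follow the MCP spec; the ids and URI are hypothetical.

```python
# 1. The client subscribes to a resource URI.
subscribe = {
    "jsonrpc": "2.0",
    "id": 11,
    "method": "resources/subscribe",
    "params": {"uri": "file:///var/log/app.log"},
}

# 2. When the data changes, the server pushes a notification.
#    Notifications carry no id -- no response is expected.
update = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "file:///var/log/app.log"},
}

# 3. The client re-reads the resource directly; Claude never
#    has to invoke a tool to get the fresh data.
reread = {
    "jsonrpc": "2.0",
    "id": 12,
    "method": "resources/read",
    "params": {"uri": update["params"]["uri"]},
}
```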

Resources are less common in practice because the subscription model adds server-side complexity. Most teams find that a simple read_* tool covers 90% of their data-access needs without the overhead of implementing subscriptions.

Prompts: Reusable Workflows

Prompts are the least-understood of the three primitives. An MCP prompt is a pre-defined message template that the server exposes. Users can browse available prompts through the host UI, fill in any arguments, and Claude receives a fully-constructed message sequence ready to execute.

Think of it like a macro or a canned procedure. A code-review prompt might look like this:

// From prompts/list:
{
  "name": "pr-review",
  "description": "Performs a structured code review on a pull request diff",
  "arguments": [
    {
      "name": "diff",
      "description": "The git diff to review",
      "required": true
    },
    {
      "name": "focus",
      "description": "Area to focus on: security, performance, or style",
      "required": false
    }
  ]
}

// From prompts/get with arguments filled in:
{
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Please review this diff focusing on security:\n\n```diff\n+  const query = `SELECT * FROM users WHERE id = ${userId}`\n```\n\nCheck for SQL injection, authentication issues, and improper data exposure."
      }
    }
  ]
}
A prompt definition (from prompts/list) and the expanded message it produces (from prompts/get). The server fills in the template; Claude receives a ready-to-execute instruction set.
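
Server-side, expanding that template is a small function. A minimal sketch, assuming a hypothetical handler shape and template text (a real server would return this from its prompts/get handler):

```python
# Hypothetical expansion logic for the pr-review prompt.
def get_prompt(name: str, arguments: dict) -> dict:
    if name != "pr-review":
        raise ValueError(f"unknown prompt: {name}")
    # "focus" is optional, so fall back to a default.
    focus = arguments.get("focus", "general quality")
    fence = "```"  # avoids a literal backtick run in this example
    text = (
        f"Please review this diff focusing on {focus}:\n\n"
        f"{fence}diff\n{arguments['diff']}\n{fence}\n\n"
        "Check for correctness, security, and maintainability."
    )
    # The prompts/get result: a ready-to-send message sequence.
    return {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": text}}
        ]
    }

result = get_prompt("pr-review", {"diff": "+ added_line()", "focus": "security"})
```

The required "diff" argument raises a KeyError if missing, which a real handler would translate into a JSON-RPC error response.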

Prompts are powerful when you have a complex, multi-step workflow that users need to invoke repeatedly with consistent structure. Instead of typing out the same detailed instruction every time, you expose it as a named prompt with slots for the variable parts.

In current host implementations, prompts are a user-facing feature — the user selects a prompt from a menu, not Claude. Claude receives the already-expanded prompt text and executes from there.

A Real Example: One GitHub Server, All Three Primitives

A well-designed GitHub MCP server could expose all three primitives:

  • Tools: create_issue, merge_pull_request, add_comment, search_code — actions Claude takes when the user asks.
  • Resources: github://repos/myorg/api/pulls/open for live open PRs, github://repos/myorg/api/actions/latest for CI status — data Claude or the host monitors passively.
  • Prompts: pr-review (takes a PR number, returns a structured review prompt), release-notes (takes a milestone, generates a changelog prompt) — reusable workflows users invoke from a menu.

This isn't hypothetical — the MCP ecosystem's more mature servers do exactly this. But most servers in production today expose only tools, because that covers the core "Claude as an agent" use case without the added complexity of resources and prompts.

Why Most Servers Only Implement Tools

Building a tool-only server is straightforward: define your functions, write their JSON schemas, hook up the handlers, done. Resources require you to implement subscription logic — tracking which clients are subscribed to which URIs and pushing notifications when data changes. Prompts require you to design good templates and test them across different argument combinations.
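
Stripped of transport details, a tool-only server is just a registry plus a dispatcher. A minimal sketch (names hypothetical; a real server would wire this to an MCP transport and SDK):

```python
# Registry mapping tool name -> {inputSchema, handler}.
TOOLS = {}

def tool(name, input_schema):
    """Register a handler function under a tool name with its JSON schema."""
    def register(fn):
        TOOLS[name] = {"inputSchema": input_schema, "handler": fn}
        return fn
    return register

@tool("get_open_prs", {
    "type": "object",
    "properties": {"repo": {"type": "string"}},
    "required": ["repo"],
})
def get_open_prs(repo: str) -> str:
    # Stand-in for a real API call.
    return f"2 open PRs in {repo}"

def call_tool(name: str, arguments: dict) -> dict:
    """What a tools/call handler boils down to: look up, invoke, wrap."""
    entry = TOOLS[name]
    text = entry["handler"](**arguments)
    return {"content": [{"type": "text", "text": text}], "isError": False}

response = call_tool("get_open_prs", {"repo": "myorg/api"})
```

tools/list falls out of the same registry: serialize each entry's name and inputSchema.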

For most use cases, a well-designed set of tools is sufficient. Claude can call a get_open_prs tool whenever it needs that data, making subscriptions redundant. And most workflows that might become prompts can be encoded in a system prompt or in Claude's context instead.

That said, if you're building a monitoring dashboard, a live analytics server, or a development environment integration where Claude needs to react to changes automatically — resources become genuinely valuable. And if you're building a specialized tool for non-technical users who need guided workflows, prompts can dramatically improve usability.

If you want to understand how the underlying messages for all three primitives are structured, see our guide on how JSON-RPC works in MCP servers — the same request/response pattern drives tools/call, resources/read, and prompts/get alike. And if you're wondering how MCP compares to other approaches to giving Claude access to data, our MCP vs RAG comparison covers the tradeoffs in detail.

Frequently Asked Questions

Do I need to implement all three primitives?

No. Tools are the only primitive most servers implement. Resources and prompts are optional — add them when they solve a real problem. A server that exposes only tools is a perfectly complete, production-ready MCP server.

What's the difference between a resource and a tool that reads data?

A resource is identified by a URI and can be subscribed to — the server pushes updates when the data changes. A tool that reads data is a one-shot function call: Claude asks, the tool fetches, done. Use resources when the data is live and Claude needs to stay current without repeatedly polling. Use a read tool when Claude just needs to fetch once.

Can Claude invoke MCP prompts on its own?

In most current host implementations (like Claude Desktop), MCP prompts appear as options the user can select — they're not automatically invoked by Claude. The user picks a prompt template, fills in any arguments, and Claude receives the expanded prompt. Future hosts may give Claude more autonomy over prompt selection, but today it's primarily a user-controlled feature.

Are MCP resources the same thing as RAG?

No, but they solve an adjacent problem. RAG (Retrieval-Augmented Generation) typically involves embedding documents into a vector database and retrieving semantically similar chunks at query time. MCP resources are structured data endpoints — identified by URI, read on demand, optionally subscribed to. You could build a RAG system on top of MCP resources, but resources themselves have no built-in semantic search. See our comparison of MCP vs RAG for the full breakdown.