MCP Servers — Model Context Protocol

The Model Context Protocol (MCP) is an open standard created by Anthropic for connecting AI applications to external systems — databases, APIs, file systems, and services. Think of it as a USB-C port for AI: one standardized connector instead of a custom integration for every tool.

What MCP Enables

  • Claude Code reading your database schema and writing queries
  • Claude Desktop browsing your file system securely
  • VS Code Copilot searching GitHub issues
  • Any AI app connecting to any MCP-compatible service

Architecture

MCP uses a client-server architecture with three participants:

MCP Host (Claude Desktop, Claude Code, VS Code, Cursor, ChatGPT...)
  |
  +-- MCP Client 1 <---JSON-RPC 2.0---> MCP Server A (local, stdio)
  +-- MCP Client 2 <---JSON-RPC 2.0---> MCP Server B (remote, HTTP)
  +-- MCP Client N <---JSON-RPC 2.0---> MCP Server N
Component  | Role
-----------|-----
MCP Host   | The AI application (Claude Desktop, Claude Code, VS Code). Creates and manages clients.
MCP Client | Component inside the host; maintains a one-to-one connection with a single server.
MCP Server | Program that provides tools, resources, and prompts. Runs locally (stdio) or remotely (HTTP).
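Every message between a client and a server is a JSON-RPC 2.0 object. A minimal sketch of what a tool call might look like on the wire (the `tools/call` method name and `result.content` shape follow the MCP spec; the id, tool name, and payload are illustrative):

```python
import json

# A JSON-RPC 2.0 request, as a client might send it to a server.
# The tool name and arguments here are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"location": "Berlin"},
    },
}

# Serialize for the transport; stdio frames one JSON object per line.
wire = json.dumps(request)

# The matching response echoes the request id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "18 C, partly cloudy"}],
    },
}
```

The id is what lets a client match responses to requests when several are in flight at once.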

Key Facts

Property         | Value
-----------------|------
Creator          | Anthropic (open-source)
Protocol         | JSON-RPC 2.0 over stdio or HTTP
Current revision | 2025-06-18
SDKs available   | 10 languages (TypeScript, Python, Java, Kotlin, C#, Go, Swift, Rust, Ruby, PHP)
Primary SDKs     | TypeScript and Python (most mature)

Core Primitives

MCP defines three primitives with distinct control models:

Primitive | Control                | Who decides                    | Description
----------|------------------------|--------------------------------|------------
Tools     | Model-controlled       | The LLM decides when to invoke | Executable functions (search, query, create)
Resources | Application-controlled | The host app decides           | Read-only data/context (files, schemas, configs)
Prompts   | User-controlled        | The user explicitly invokes    | Reusable templates (slash commands)

Start with tools: they are the only primitive supported by every MCP client. Resources and prompts have partial support. If you are building a server, implement tools first.

Tools (Most Important)

Tools are functions the LLM can call to perform actions. Each tool has a name, description, input schema, and optional output schema.

{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string" }
    },
    "required": ["location"]
  }
}

The LLM reads the tool’s description to decide when to use it. Clear, descriptive names and descriptions are critical.
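A server should also reject calls whose arguments do not match the declared inputSchema. A stdlib-only sketch of that check for the `get_weather` schema above (real servers would typically use a full JSON Schema validator; this only covers required keys and primitive types):

```python
# Sketch of argument validation against a tool's inputSchema.
# Handles only "required" and primitive "type" checks.
input_schema = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

def validate_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return a list of problems; an empty list means the call is valid."""
    errors = []
    for key in schema.get("required", []):
        if key not in arguments:
            errors.append(f"missing required argument: {key}")
    type_map = {
        "string": str,
        "number": (int, float),
        "boolean": bool,
        "object": dict,
        "array": list,
    }
    for key, spec in schema.get("properties", {}).items():
        if key in arguments and spec.get("type") in type_map:
            if not isinstance(arguments[key], type_map[spec["type"]]):
                errors.append(f"{key}: expected {spec['type']}")
    return errors
```

For example, `validate_arguments(input_schema, {"location": "Berlin"})` returns an empty list, while a call with no arguments reports the missing `location` key.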

MCP vs Function Calling

Native function calling (OpenAI's or Anthropic's tool use) requires tools to be defined statically in each API call. MCP tools are dynamic: discovered at runtime, shared across clients, and managed by external servers. The two are complementary; MCP is a standardized layer on top of function calling.

Server Lifecycle

1. Initialization: the client sends an initialize request; the server responds with its capabilities.

2. Operation: normal message exchange, including tool calls, resource reads, and notifications.

3. Shutdown: the client closes the connection. For stdio, it first closes the server's stdin, then sends SIGTERM, then SIGKILL if the server has not exited.
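The initialization phase can be sketched as a pair of newline-delimited JSON-RPC messages, which is the framing the stdio transport uses (the method names and protocolVersion follow the spec; the capability contents and client/server names are illustrative):

```python
import json

# Phase 1: the client opens the session with an initialize request.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# The server answers with the capabilities it supports.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}

# The client acknowledges, and phase 2 (operation) begins.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}

# Over stdio, each message is one JSON object terminated by a newline.
frame = json.dumps(initialize_request) + "\n"
```

Negotiating the protocol version and capabilities up front is what lets a client know, before any tool call, whether a server supports resources or prompts at all.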

Client Compatibility

Client          | Tools | Resources | Prompts
----------------|-------|-----------|--------
Claude Desktop  | Yes   | Yes       | Yes
Claude Code     | Yes   | Yes       | Yes
VS Code Copilot | Yes   | Yes       | Yes
Cursor          | Yes   | No        | Yes
ChatGPT         | Yes   | No        | No
Windsurf        | Yes   | No        | No

Transport Options

Factor   | stdio                    | HTTP
---------|--------------------------|-----
Users    | Single user              | Multi-user
Setup    | Configure command + args | Deploy a web service
Auth     | Environment variables    | OAuth 2.1
Latency  | Minimal (local IPC)      | Network overhead
Scaling  | N/A (single process)     | Horizontal
Best for | Dev tools, local files   | SaaS integrations, team tools

Start with stdio. Graduate to HTTP when you need multi-user support.
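For the stdio case, host configuration is typically just a command to launch. A sketch of a Claude Desktop entry in claude_desktop_config.json (the server name, command, path, and environment variable here are placeholders):

```json
{
  "mcpServers": {
    "my-weather-server": {
      "command": "python",
      "args": ["/path/to/server.py"],
      "env": { "WEATHER_API_KEY": "..." }
    }
  }
}
```

The host spawns the command as a child process and speaks JSON-RPC over its stdin/stdout, which is why no port or URL appears in the stdio configuration.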