# MCP Servers — Model Context Protocol
The Model Context Protocol (MCP) is an open standard created by Anthropic for connecting AI applications to external systems — databases, APIs, file systems, and services. Think of it as a USB-C port for AI: one standardized connector instead of a custom integration for every tool.
## What MCP Enables
- Claude Code reading your database schema and writing queries
- Claude Desktop browsing your file system securely
- VS Code Copilot searching GitHub issues
- Any AI app connecting to any MCP-compatible service
## Architecture

MCP uses a client-server architecture with three participants:

```
MCP Host (Claude Desktop, Claude Code, VS Code, Cursor, ChatGPT...)
 |
 +-- MCP Client 1 <---JSON-RPC 2.0---> MCP Server A (local, stdio)
 +-- MCP Client 2 <---JSON-RPC 2.0---> MCP Server B (remote, HTTP)
 +-- MCP Client N <---JSON-RPC 2.0---> MCP Server N
```
| Component | Role |
|---|---|
| MCP Host | The AI application (Claude Desktop, Claude Code, VS Code). Creates and manages clients. |
| MCP Client | Component inside the host. One-to-one connection with a single server. |
| MCP Server | Program that provides tools, resources, and prompts. Local (stdio) or remote (HTTP). |
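Concretely, every message between a client and server is a JSON-RPC 2.0 envelope. A minimal sketch in Python (`tools/list` is a real protocol method; the framing here is simplified and ignores transport details):

```python
import json

def jsonrpc_request(id_, method, params=None):
    """Build a JSON-RPC 2.0 request envelope of the kind MCP sends on the wire."""
    msg = {"jsonrpc": "2.0", "id": id_, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# A client asking a server to enumerate its tools:
wire = json.dumps(jsonrpc_request(1, "tools/list"))
```

The same envelope shape travels over both stdio and HTTP; only the transport differs.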
## Key Facts
| Property | Value |
|---|---|
| Creator | Anthropic (open-source) |
| Protocol | JSON-RPC 2.0 over stdio or HTTP |
| Current revision | 2025-06-18 |
| SDKs available | 10 languages (TypeScript, Python, Java, Kotlin, C#, Go, Swift, Rust, Ruby, PHP) |
| Primary SDKs | TypeScript and Python (most mature) |
## Core Primitives
MCP defines three primitives with distinct control models:
| Primitive | Control | Who Decides | Description |
|---|---|---|---|
| Tools | Model-controlled | The LLM decides when to invoke | Executable functions (search, query, create) |
| Resources | Application-controlled | The host app decides | Read-only data/context (files, schemas, configs) |
| Prompts | User-controlled | The user explicitly invokes | Reusable templates (slash commands) |
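To make the three control models concrete, here is a toy picture of one server's state in plain Python. All names (`search_docs`, `file://config`, `summarize`) are hypothetical; a real server exposes these through the SDK, not raw dicts:

```python
# Illustrative only: how one MCP server might organize its three primitives.
server = {
    "tools": {          # model-controlled: the LLM decides when to call these
        "search_docs": lambda query: f"results for {query!r}",
    },
    "resources": {      # application-controlled: the host attaches these as context
        "file://config": '{"env": "dev"}',
    },
    "prompts": {        # user-controlled: invoked explicitly, e.g. via a slash command
        "summarize": "Summarize the following text:\n{text}",
    },
}
```

The point is the separation: the same server can offer all three, but each primitive has a different party deciding when it is used.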
### Tools (Most Important)
Tools are functions the LLM can call to perform actions. Each tool has a name, description, input schema, and optional output schema.
```json
{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": { "type": "string" }
    },
    "required": ["location"]
  }
}
```
The LLM reads the tool’s description to decide when to use it. Clear, descriptive names and descriptions are critical.
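Server-side, handling a `tools/call` for the tool above amounts to checking the arguments against `inputSchema` and dispatching. A minimal sketch (the weather lookup is a hypothetical stub, and the validation is deliberately shallow; a real server should use a full JSON Schema validator):

```python
# Schema copied from the get_weather tool definition above.
TOOL_SCHEMA = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

def call_get_weather(arguments):
    """Validate arguments against the tool's inputSchema, then dispatch."""
    for field in TOOL_SCHEMA["required"]:
        if field not in arguments:
            raise ValueError(f"missing required argument: {field}")
    # A real server would query a weather API here; this stub is illustrative.
    text = f"Sunny in {arguments['location']}"
    return {"content": [{"type": "text", "text": text}]}
```

The `{"content": [...]}` shape mirrors how MCP tool results are returned to the client.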
## MCP vs Function Calling
Native function calling (OpenAI’s or Anthropic’s tool use) is hardcoded per API call. MCP tools are dynamic — discovered at runtime, shared across clients, and managed by external servers. They are complementary: MCP is a standardized layer on top of function calling.
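To see the layering: a host can translate each tool discovered via `tools/list` into a native tool-use definition before sending it to the model. A sketch assuming Anthropic-style field names (`input_schema`) on the native side:

```python
def mcp_tool_to_native(tool):
    """Map one MCP tool definition to an Anthropic-style tool-use entry.

    The MCP-side field names (name, description, inputSchema) come from the
    protocol; the output shape mirrors Anthropic's tool-use format.
    """
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["inputSchema"],
    }
```

Because the mapping is mechanical, tools added to a server at runtime show up in the model's next request without any host code changes.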
## Server Lifecycle

1. **Initialization**: the client sends an `initialize` request and the server responds with its capabilities; the client then sends an `initialized` notification.
2. **Operation**: normal request/response traffic (tool calls, resource reads, notifications).
3. **Shutdown**: the connection is terminated cleanly, e.g. by closing the stdio streams or ending the HTTP session.
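The initialization handshake can be sketched as a pair of JSON-RPC messages. The `protocolVersion` matches the revision cited above; the capability and info fields are illustrative:

```python
# Sketch of the initialize exchange; names/versions are example values.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,  # responses carry the id of the request they answer
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "0.1.0"},
    },
}
```

The capabilities exchanged here determine which primitives each side may use for the rest of the session.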
## Client Compatibility
| Client | Tools | Resources | Prompts |
|---|---|---|---|
| Claude Desktop | Yes | Yes | Yes |
| Claude Code | Yes | Yes | Yes |
| VS Code Copilot | Yes | Yes | Yes |
| Cursor | Yes | No | Yes |
| ChatGPT | Yes | No | No |
| Windsurf | Yes | No | No |
## Transport Options
| Factor | stdio | HTTP |
|---|---|---|
| Users | Single | Multi-user |
| Setup | Configure command + args | Deploy web service |
| Auth | Environment variables | OAuth 2.1 |
| Latency | Minimal (IPC) | Network overhead |
| Scaling | N/A | Horizontal |
| Best for | Dev tools, local files | SaaS integrations, team tools |
Start with stdio. Graduate to HTTP when you need multi-user support.
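For example, a stdio server is typically registered in the host's configuration file (this is the `claude_desktop_config.json` shape used by Claude Desktop; the server name, package, and env var here are hypothetical):

```json
{
  "mcpServers": {
    "my-local-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": { "API_KEY": "..." }
    }
  }
}
```

The host launches the command itself and speaks JSON-RPC over the process's stdin/stdout, which is why stdio servers need no network setup.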