A comprehensive guide to Model Context Protocol (MCP) — the open standard that lets AI agents talk to tools, APIs, and services in a structured, safe, and extensible way.
- What is MCP?
- Why MCP?
- Core Architecture
- Key Concepts
- How It Works
- MCP Transport Layers
- MCP in AI Agents
- Example: GitHub MCP Server
- Security Model
- Ecosystem
- Quick Start
- Resources & Links
## What is MCP?

Model Context Protocol (MCP) is an open protocol introduced by Anthropic in November 2024. It defines a standard interface for AI language models (like Claude, GPT-4, etc.) to communicate with external tools, data sources, and services.
Think of MCP as USB-C for AI — a universal connector that lets any AI model plug into any tool without custom glue code for every integration.
```
┌────────────────┐    MCP Protocol    ┌────────────────┐
│  AI Model /    ├────────────────────┤  MCP Server    │
│  Agent (Host)  │   (JSON-RPC 2.0)   │ (Tool/Service) │
└────────────────┘                    └────────────────┘
```
## Why MCP?

Before MCP, every AI integration required bespoke code:
- Custom API wrappers for each service
- No standard way for models to discover capabilities
- Tool definitions duplicated across every client
- Security policies inconsistently applied
MCP solves this with one protocol to rule them all:
| Problem (Before MCP) | Solution (With MCP) |
|---|---|
| Custom glue code per integration | Standardised JSON-RPC 2.0 messages |
| Model can’t discover what tools exist | Server exposes a tools/list endpoint |
| No lifecycle management | Formal init/shutdown handshake |
| Ad-hoc security | Per-tool input schemas + human-in-the-loop hooks |
| Vendor lock-in | Open spec — any model, any server |
## Core Architecture

MCP uses a client–server model with three roles:

```
┌─────────────────────────────────────────────┐
│                    HOST                     │
│   (e.g. Claude Desktop, Cursor, your app)   │
│                                             │
│  ┌──────────────┐      ┌──────────────┐     │
│  │ MCP Client 1 │      │ MCP Client 2 │     │
│  └──────┬───────┘      └──────┬───────┘     │
└─────────┼─────────────────────┼─────────────┘
          │                     │
     MCP Protocol          MCP Protocol
     (stdio / SSE)         (stdio / SSE)
          │                     │
  ┌───────┴──────┐      ┌───────┴──────┐
  │ MCP Server A │      │ MCP Server B │
  │ (GitHub API) │      │ (Filesystem) │
  └──────────────┘      └──────────────┘
```
- Host — the application that runs the AI model and manages MCP client connections (e.g. Claude Desktop, Cursor IDE, a custom chatbot)
- MCP Client — a component inside the host that maintains a 1:1 connection with one MCP server
- MCP Server — a lightweight process that exposes capabilities (tools, resources, prompts) over the protocol
## Key Concepts

### Tools

Tools are callable functions the AI can invoke. Each tool has:
- A name and description (used by the model to decide when to call it)
- An input schema (JSON Schema) the model must conform to
- An output returned to the model after execution
```json
{
  "name": "create_issue",
  "description": "Create a new GitHub issue",
  "inputSchema": {
    "type": "object",
    "properties": {
      "owner": { "type": "string" },
      "repo": { "type": "string" },
      "title": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["owner", "repo", "title"]
  }
}
```

### Resources

Resources are read-only data the server exposes — files, database rows, API responses, etc. They are identified by a URI and can be fetched by the client on demand.
```
mcp://github/repos/jagkarnan/github-agent/readme
mcp://filesystem//home/user/project/src/index.ts
```
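Fetching a resource is a plain JSON-RPC exchange. The sketch below shows what a `resources/read` request and its response might look like, using one of the URIs above; the exact result field names (`contents`, `mimeType`, `text`) follow the lifecycle diagram later in this guide, but treat them as an assumption and check the spec you target.

```python
import json

# Hypothetical resources/read exchange (field names assumed from the spec)
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "mcp://github/repos/jagkarnan/github-agent/readme"},
}

# A server answers with the resource contents, keyed by the same id:
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "contents": [
            {
                "uri": request["params"]["uri"],
                "mimeType": "text/markdown",
                "text": "# github-agent\n...",
            }
        ]
    },
}

# Every message on the wire is just serialized JSON:
wire = json.dumps(request)
```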
### Prompts

Prompts are reusable prompt templates defined by the server. They let servers package up common interaction patterns (e.g. “summarise this PR”, “review this file”) that the host can surface directly to users.
### Sampling

Sampling allows an MCP server to request the host LLM to generate text. This enables agentic loops where the server itself can ask the model for help mid-task (while the host remains in control of what is actually sent to the model).
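A sampling request flows in the reverse direction: server to client. The sketch below assumes the spec's `sampling/createMessage` method name and message shape; both are assumptions here, so verify them against the spec version you implement.

```python
# Hypothetical sampling request a server might send to the host.
# The method name "sampling/createMessage" and the params shape are
# assumptions taken from the MCP spec; the host decides whether (and
# with what context) this actually reaches the model.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarise this PR diff"}}
        ],
        "maxTokens": 256,
    },
}
```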
## How It Works

A full MCP session follows this lifecycle:
```
Client                                Server
  │                                     │
  ├── initialize {clientInfo} ─────────►│
  │◄── {serverInfo, capabilities} ──────┤
  ├── initialized (notification) ──────►│
  │                                     │
  ├── tools/list ──────────────────────►│
  │◄── [{name, description, ...}] ──────┤
  │                                     │
  ├── tools/call {name, args} ─────────►│
  │◄── {content: [{type, text}]} ───────┤
  │                                     │
  ├── resources/list ──────────────────►│
  │◄── [{uri, name, mimeType}] ─────────┤
  │                                     │
  ├── resources/read {uri} ────────────►│
  │◄── {contents: [{uri, text}]} ───────┤
```
All messages are JSON-RPC 2.0 — simple request/response pairs with an `id`, `method`, and `params`.
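Because the wire format is plain JSON-RPC 2.0, the client side of the lifecycle above can be sketched with nothing but the standard library. The helper names here (`make_request`, `match_response`) are illustrative, not part of any SDK:

```python
import itertools
import json

# Each request gets a fresh numeric id; responses are matched back by id.
_ids = itertools.count(1)

def make_request(method, params=None):
    """Build a JSON-RPC 2.0 request object."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def match_response(request, raw):
    """Parse a raw response string and check it answers the given request."""
    resp = json.loads(raw)
    if resp.get("jsonrpc") != "2.0" or resp.get("id") != request["id"]:
        raise ValueError("response does not match request")
    if "error" in resp:
        raise RuntimeError(resp["error"].get("message", "unknown error"))
    return resp["result"]

# Simulate one round trip of the lifecycle:
req = make_request("tools/list")
raw = json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": {"tools": []}})
result = match_response(req, raw)
```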
## MCP Transport Layers

MCP is transport-agnostic. Two transports are defined in the spec:
| Transport | Description | Best For |
|---|---|---|
| stdio | Client spawns server as a subprocess; communicates over stdin/stdout | Local tools, CLI agents |
| HTTP + SSE | Server runs as an HTTP endpoint; uses Server-Sent Events for streaming | Remote servers, cloud deployments |
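For the stdio transport, each JSON-RPC message travels as one JSON object per line over the subprocess's stdin/stdout (newline-delimited framing; an assumption here, so check the transport section of the spec). A minimal framing sketch, with in-memory buffers standing in for a real pipe:

```python
import io
import json

def write_message(stream, msg):
    """Frame one JSON-RPC message as a single line of JSON."""
    stream.write(json.dumps(msg) + "\n")
    stream.flush()

def read_message(stream):
    """Read one framed message; returns None at end-of-stream."""
    line = stream.readline()
    return json.loads(line) if line else None

# In-memory stand-in for a subprocess pipe; with a real server you would
# pass proc.stdin / proc.stdout from subprocess.Popen(..., text=True).
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
buf.seek(0)
msg = read_message(buf)
```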
## MCP in AI Agents

MCP is the backbone of modern agentic AI workflows:
1. Discovery — At startup the agent calls `tools/list` and `resources/list` to learn what it can do
2. Planning — The LLM reasons over available tools to form a plan
3. Execution — The agent calls `tools/call` with the model’s chosen arguments
4. Observation — Results are fed back into the model’s context
5. Loop — Steps 2–4 repeat until the task is complete
```
┌─────────────────────────────────────────────────┐
│                   AGENT LOOP                    │
│                                                 │
│   User Input                                    │
│       ↓                                         │
│   ┌───────┐    think     ┌────────────┐         │
│   │  LLM  ├─────────────►│ Tool Call  │         │
│   │       │              └─────┬──────┘         │
│   │       │◄───────────────────┘                │
│   └───────┘   result  (MCP Server executes)     │
│       ↓                                         │
│   Final Answer                                  │
└─────────────────────────────────────────────────┘
```
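The loop above can be sketched in a few lines. The "LLM" here is a stub that always calls one hypothetical tool (`get_time`) and then answers; a real host would send the conversation to a model and parse its tool-call output:

```python
# Stub "LLM": decides to call a tool once, then produces a final answer.
def stub_llm(messages, tools):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "arguments": {}}      # plan: call a tool
    return {"answer": "It is " + messages[-1]["content"]}  # done

def agent_loop(llm, tool_registry, user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        decision = llm(messages, tool_registry)
        if "answer" in decision:                  # model finished the task
            return decision["answer"]
        # Execution: invoke the chosen tool with the model's arguments
        result = tool_registry[decision["tool"]](**decision["arguments"])
        # Observation: feed the result back into the context
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

answer = agent_loop(stub_llm, {"get_time": lambda: "12:00"}, "What time is it?")
```

In a real host, the `tool_registry` call would be a `tools/call` request to an MCP server rather than a local function.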
## Example: GitHub MCP Server

The GitHub MCP Server is a first-party MCP server from GitHub that exposes the GitHub API as MCP tools. It powers GitHub Copilot agent mode and can be used by any MCP-compatible client.
Example tools it exposes:
| Tool | Description |
|---|---|
| `create_repository` | Create a new repo in your account or an org |
| `create_or_update_file` | Push a file to a branch |
| `create_pull_request` | Open a pull request |
| `issue_write` | Create or update an issue |
| `search_code` | Search code across GitHub |
| `list_commits` | List commits on a branch |
| `get_me` | Get the authenticated user’s profile |
Example `tools/call` request:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_repository",
    "arguments": {
      "name": "github-agent",
      "description": "My MCP-powered repo",
      "private": false
    }
  }
}
```

## Security Model

MCP is designed with human-in-the-loop principles:
- User consent — Hosts must get explicit user approval before connecting to a server
- Capability scoping — Clients declare which capabilities they support; servers do the same
- Schema validation — All tool inputs are validated against JSON Schema before execution
- No ambient authority — Servers cannot initiate actions without being called by the client
- Sampling control — Even when servers request LLM sampling, the host controls what goes to the model
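The schema-validation step can be sketched by hand. The checker below covers only the `required` and `type` parts of a JSON Schema (a real host would use a full validator), applied to a schema shaped like the `create_issue` example earlier:

```python
# Map JSON Schema type names to Python types (simplified subset).
TYPES = {"string": str, "number": (int, float), "boolean": bool,
         "object": dict, "array": list}

def validate_input(schema, args):
    """Reject tool arguments that violate required/type constraints."""
    for key in schema.get("required", []):
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, value in args.items():
        prop = schema.get("properties", {}).get(key)
        if prop is None:
            raise ValueError(f"unexpected argument: {key}")
        expected = TYPES.get(prop.get("type"))
        if expected and not isinstance(value, expected):
            raise ValueError(f"{key} should be {prop['type']}")

schema = {
    "type": "object",
    "properties": {
        "owner": {"type": "string"},
        "repo": {"type": "string"},
        "title": {"type": "string"},
    },
    "required": ["owner", "repo", "title"],
}
validate_input(schema, {"owner": "octocat", "repo": "hello", "title": "Bug"})
```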
## Ecosystem

The MCP ecosystem is growing rapidly:
### Official MCP Servers
- `github` — GitHub API (repos, issues, PRs, code search)
- `filesystem` — Local file read/write
- `brave-search` — Web search via Brave API
- `sqlite` — Query local SQLite databases
- `puppeteer` — Browser automation
- `slack` — Post messages, list channels
- `postgres` — Run SQL queries
### Hosts that support MCP
- Claude Desktop (Anthropic)
- Cursor IDE
- Windsurf IDE
- GitHub Copilot (agent mode)
- Continue.dev
- Zed Editor
## Quick Start

Run the official GitHub MCP server locally:
```shell
# Using Docker
docker run -i --rm \
  -e GITHUB_PERSONAL_ACCESS_TOKEN=<your-token> \
  ghcr.io/github/github-mcp-server

# Or using npx
npx @modelcontextprotocol/server-github
```

Add it to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```

## Resources & Links

| Resource | URL |
|---|---|
| MCP Official Specification | https://spec.modelcontextprotocol.io |
| MCP Documentation | https://modelcontextprotocol.io |
| GitHub MCP Server | https://github.com/github/github-mcp-server |
| Anthropic MCP Blog Post | https://www.anthropic.com/news/model-context-protocol |
| MCP TypeScript SDK | https://github.com/modelcontextprotocol/typescript-sdk |
| MCP Python SDK | https://github.com/modelcontextprotocol/python-sdk |
| Awesome MCP Servers | https://github.com/punkpeye/awesome-mcp-servers |
This repository was created using the GitHub MCP Server — demonstrating MCP in action.