A fully structured MCP (Model Context Protocol) server for exploring GitHub's public API. Built as a learning project: every file is commented to explain the *why*, not just the *what*.
Model Context Protocol is an open standard that lets AI assistants (like Claude) talk to external tools and data sources in a structured way. Instead of writing custom integrations for every AI client, you build one MCP server and any MCP-compatible client can use it.
```
┌───────────────────┐       MCP Protocol        ┌────────────────────────┐
│    AI Client      │ ────────────────────────► │      MCP Server        │
│ (Claude Desktop   │  tools/resources/prompts  │     (this project)     │
│  or any client)   │                           │   wraps GitHub API     │
└───────────────────┘                           └────────────────────────┘
```
| Concept | What it is | Example in this project |
|---|---|---|
| Tool | A callable function the AI can invoke | get_user_profile, search_repos |
| Resource | A read-only URI-addressed data source | github://user/{username} |
| Transport | How client and server communicate (stdio or SSE) | See src/server.py |
| FastMCP | Python framework that makes building MCP servers easy | Used throughout |
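At heart, a Tool is just a named, documented callable. As a dependency-free illustration of the decorator-registration pattern (this `MiniMCP` is a toy stand-in for teaching purposes, not the real FastMCP API), registration looks like:

```python
import inspect

class MiniMCP:
    """Toy stand-in for FastMCP: registers callables as named tools."""

    def __init__(self, name):
        self.name = name
        self.tools = {}

    def tool(self):
        def decorator(fn):
            # Store the function plus its docstring, which MCP clients
            # show to the AI so it knows when to call the tool.
            self.tools[fn.__name__] = {"fn": fn, "doc": inspect.getdoc(fn)}
            return fn
        return decorator

mcp = MiniMCP("github")

@mcp.tool()
def get_user_profile(username: str) -> str:
    """Fetch a public GitHub profile."""
    return f"(would fetch {username})"

print(list(mcp.tools))  # ['get_user_profile']
```

The real FastMCP framework does the same wiring, plus JSON-schema generation from the type hints and docstring.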
| Tool | What it does |
|---|---|
| `get_user_profile` | Fetch any public GitHub user or organisation profile |
| `list_user_repos` | List public repos with sorting; paginated |
| `get_repo_details` | Full metadata for a single repo |
| `get_repo_commits` | Recent commit history; paginated |
| `search_repos` | Search with language and star filters |
| `check_rate_limit` | See your current API quota |
```
github-mcp-server/
│
├── src/                          # All application code lives here
│   ├── server.py                 # ← START HERE: entrypoint, wires everything together
│   │
│   ├── tools/                    # MCP Tools (callable functions)
│   │   ├── __init__.py           # register_all(): one call to wire all tools
│   │   ├── user_tools.py         # get_user_profile
│   │   ├── repo_tools.py         # list_user_repos, get_repo_details, get_repo_commits
│   │   ├── search_tools.py       # search_repos
│   │   └── utility_tools.py      # check_rate_limit
│   │
│   ├── resources/                # MCP Resources (read-only URI endpoints)
│   │   ├── __init__.py
│   │   └── github_resources.py   # github://user/{username}, github://repo/{owner}/{repo}
│   │
│   └── utils/                    # Shared helpers (no MCP awareness)
│       ├── __init__.py
│       ├── github_client.py      # HTTP layer: get(), get_paginated(), error types
│       └── formatters.py         # Pure functions: raw JSON → readable strings
│
├── tests/
│   ├── test_github_client.py     # Unit tests for HTTP client (mocked network)
│   ├── test_formatters.py        # Unit tests for formatters (pure functions)
│   └── test_tools.py             # Integration tests for tools (mocked client)
│
├── .env.example                  # Copy to .env and fill in
├── claude_desktop_config.json    # Paste into Claude Desktop config
├── Dockerfile                    # For SSE transport in containers
├── pyproject.toml                # Modern Python project config
└── requirements.txt              # Simple pip install list
```
Each layer has one job:

```
tools/                  → MCP interface (decorators, docstrings, argument parsing)
    │ calls
utils/github_client.py  → HTTP (headers, pagination, error handling)
    │ returns raw JSON, which is passed to
utils/formatters.py     → Presentation (JSON → human-readable strings)
```
This means you can test formatters without any network, test the HTTP client without any MCP, and test tools by mocking just the client. Each piece is independently changeable.
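For instance, a formatter is just a pure function from GitHub's raw JSON to a readable string. This sketch uses a hypothetical `format_user()` with illustrative field choices; the real functions live in `src/utils/formatters.py`:

```python
def format_user(user: dict) -> str:
    """Turn a raw GitHub user JSON dict into a readable summary."""
    lines = [
        f"{user['login']} ({user.get('name') or 'no name set'})",
        f"  Public repos: {user.get('public_repos', 0)}",
        f"  Followers:    {user.get('followers', 0)}",
    ]
    if user.get("bio"):
        lines.append(f"  Bio: {user['bio']}")
    return "\n".join(lines)

# Testable with a plain dict: no network, no MCP.
sample = {"login": "octocat", "name": "The Octocat", "public_repos": 8, "followers": 100}
print(format_user(sample))
```

Because the function touches nothing but its argument, `test_formatters.py` can exercise every branch with hand-written dicts.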
```bash
git clone <your-repo>
cd github-mcp-server

# Create a virtual environment
python -m venv .venv
source .venv/bin/activate    # Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env and optionally add your GITHUB_TOKEN for higher rate limits

# Run the server
python -m src.server
```

The server is now running and waiting for MCP messages on stdin. You won't see much output; that's correct for stdio transport.
```bash
pip install pytest pytest-asyncio
pytest tests/ -v
```

If you prefer to test the server directly without Claude Desktop, two options are included in this setup:
The official MCP Inspector provides a clean web interface to manually test tools.
```bash
npx @modelcontextprotocol/inspector python -m src.server
```

Navigate to the localhost link it prints to explore all endpoints.
A custom Python script (local_brain.py) is included to let you use an Ollama-hosted local model (like gemma4:26b) as the orchestration "brain".
```bash
python local_brain.py "Show me the most starred repos for user torvalds"
```

The script automatically discovers the tools exposed by the MCP server, passes them to your local model, executes any tool calls the model requests, and returns the final response.
1. Find your Claude Desktop config file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
2. Open `claude_desktop_config.json` from this project, copy its contents, and replace `/ABSOLUTE/PATH/TO/github-mcp-server` with the real path.
3. Merge it into your Claude Desktop config under the `"mcpServers"` key.
4. Restart Claude Desktop.
5. You should now see GitHub tools available. Try asking:
   - "What are Linus Torvalds' most starred repos?"
   - "Search for beginner Python projects with over 1000 stars"
   - "Show me the recent commits on torvalds/linux"
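A typical merged config entry might look like this (a sketch: the command path is the project's placeholder, which you must replace; your project's `claude_desktop_config.json` is the authoritative version):

```json
{
  "mcpServers": {
    "github": {
      "command": "/ABSOLUTE/PATH/TO/github-mcp-server/.venv/bin/python",
      "args": ["-m", "src.server"]
    }
  }
}
```

Pointing `command` at the virtualenv's Python ensures Claude Desktop uses the interpreter that has the project's dependencies installed.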
GitHub's API has two tiers:
| Mode | Core API | Search API |
|---|---|---|
| Unauthenticated | 60 req / hour | 10 req / minute |
| Authenticated | 5,000 req / hour | 30 req / minute |
To authenticate, go to https://github.com/settings/tokens, create a token
with no scopes (public data only), and paste it into `.env` as `GITHUB_TOKEN`.
Use the check_rate_limit tool at any time to see your current quota.
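Under the hood, GitHub reports quota in `X-RateLimit-*` response headers. A minimal sketch of reading them (the header names are GitHub's documented ones; `summarize_rate_limit` and the sample dict are illustrative, not project code):

```python
from datetime import datetime, timezone

def summarize_rate_limit(headers: dict) -> str:
    """Summarize GitHub's X-RateLimit-* response headers."""
    remaining = int(headers["X-RateLimit-Remaining"])
    limit = int(headers["X-RateLimit-Limit"])
    # Reset time arrives as a Unix timestamp (UTC).
    reset = datetime.fromtimestamp(int(headers["X-RateLimit-Reset"]), tz=timezone.utc)
    return f"{remaining}/{limit} requests left; resets at {reset:%H:%M} UTC"

# Fake headers, shaped like an unauthenticated response:
headers = {
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "57",
    "X-RateLimit-Reset": "1700000000",
}
print(summarize_rate_limit(headers))
```

The `check_rate_limit` tool surfaces the same information via the dedicated `/rate_limit` endpoint, which conveniently does not count against your quota.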
Claude Desktop spawns your server as a child process and talks to it over stdin/stdout.

- No network port needed
- Most MCP clients use this
- Set `MCP_TRANSPORT=stdio` (or leave it blank; stdio is the default)
An HTTP client connects to `http://localhost:8000/sse` and streams events.

- Useful for browser clients or custom scripts
- Set `MCP_TRANSPORT=sse` in `.env`
- Run with Docker: `docker build -t github-mcp . && docker run -p 8000:8000 github-mcp`
GitHub list endpoints return results in pages. The `get_paginated()` utility in
`src/utils/github_client.py` handles this automatically:

```python
async def get_paginated(path, params, max_pages=3, per_page=30):
    all_items = []
    for page in range(1, max_pages + 1):
        result = await get(path, params={**params, "page": page, "per_page": per_page})
        all_items.extend(result)
        if len(result) < per_page:
            break  # Last page: GitHub returned fewer items than requested
    return all_items
```

The `max_pages` cap prevents accidentally burning through your rate limit.
Tools expose a `limit` parameter so callers control how much they fetch.
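Here is a self-contained sketch of the same pattern with a stubbed `get()` in place of the real HTTP call (the stub and its fake data are illustrative only):

```python
import asyncio

FAKE_REPOS = [{"name": f"repo-{i}"} for i in range(1, 71)]  # 70 fake items

async def get(path, params):
    """Stubbed HTTP GET: slices fake data the way GitHub honours page/per_page."""
    page, per_page = params["page"], params["per_page"]
    start = (page - 1) * per_page
    return FAKE_REPOS[start:start + per_page]

async def get_paginated(path, params, max_pages=3, per_page=30):
    all_items = []
    for page in range(1, max_pages + 1):
        result = await get(path, params={**params, "page": page, "per_page": per_page})
        all_items.extend(result)
        if len(result) < per_page:
            break  # fewer than a full page means we've reached the end
    return all_items

items = asyncio.run(get_paginated("/users/torvalds/repos", {}))
print(len(items))  # 70: page 3 returns only 10 items, so the loop stops there
```

Swapping the stub for a mock is exactly how `test_github_client.py` can verify pagination without touching the network.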
```
GitHub API error
│
├── 404     → return None               (tool surfaces: "X not found")
├── 403     → raise RateLimitError      (tool surfaces: friendly quota message)
├── timeout → raise GitHubClientError   (tool surfaces: "try again")
└── other   → raise GitHubClientError   (tool surfaces: status + snippet)
```

Tools never crash: they always return a string, even on error. This is important for MCP: a tool that raises an unhandled exception will break the AI's flow.
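The contract can be sketched like this; the exception names mirror the ones described above, while `fetch_profile` and the message strings are hypothetical stand-ins for the real client and tool code:

```python
import asyncio

class GitHubClientError(Exception):
    """Base error for HTTP failures (timeouts, unexpected statuses)."""

class RateLimitError(GitHubClientError):
    """Raised on a 403 caused by an exhausted quota."""

async def fetch_profile(username):
    """Stub standing in for the real HTTP client; returns None for a 404."""
    if username == "missing":
        return None
    return {"login": username, "bio": "Demo bio"}

async def get_user_profile(username: str) -> str:
    """Tool wrapper: every code path returns a string, never raises."""
    try:
        profile = await fetch_profile(username)
        if profile is None:  # client signalled a 404
            return f"User '{username}' not found."
        return f"{profile['login']}: {profile['bio']}"
    except RateLimitError:
        return "GitHub rate limit reached. Try again later or add a GITHUB_TOKEN."
    except GitHubClientError as exc:
        return f"GitHub request failed: {exc}"

print(asyncio.run(get_user_profile("missing")))  # User 'missing' not found.
```

The broad `except GitHubClientError` at the bottom is the safety net: whatever the HTTP layer throws, the AI still receives a readable string.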
Work through these to solidify your understanding:

- Add a new tool: `get_user_starred_repos` (endpoint: `/users/{username}/starred`)
- Add a new tool: `get_repo_languages` (endpoint: `/repos/{owner}/{repo}/languages`)
- Add a resource: `github://search/{query}` that returns the top 5 results
- Add an MCP Prompt: a reusable prompt template like "Summarise this repo for a job application"
- Try SSE transport: set `MCP_TRANSPORT=sse` and connect with `client-sse.py`
- Add `GITHUB_TOKEN` to your `.env` and verify rate limits increase with `check_rate_limit`
```bash
# Build
docker build -t github-mcp-server .

# Run (pass token as env var; never bake it into the image)
docker run -p 8000:8000 -e GITHUB_TOKEN=your_token_here github-mcp-server

# Server is now at http://localhost:8000/sse
```