A Model Context Protocol (MCP) server that provides intelligent context management and web content fetching capabilities. This server enables AI assistants to efficiently store, retrieve, and manage contextual data while also fetching web content for real-time information access.
- **Smart Content Fetching**: Retrieve web content using the Jina Reader API with fallback mechanisms
- **Web Content Processing**: Convert HTML to markdown for better AI consumption
- **File Management**: Save fetched content to organized file structures
- **High Performance**: Optimized fetching algorithms with intelligent caching
- **Easy Integration**: Standard MCP protocol compatibility with various AI clients
Fetches content from a URL and returns it as text. This tool attempts to get content using the Jina Reader API first, and falls back to direct HTTP request if that fails.
Arguments:
- `url` (string, required): The URL to fetch content from
- `max_length` (integer, optional): Maximum number of characters to return (default: 5000)
- `start_index` (integer, optional): Start content from this character index (default: 0)
- `raw` (boolean, optional): Get raw content without markdown conversion (default: false)
Returns:
- The content of the URL as text
Example usage:
Please fetch the content from https://example.com
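The fallback and windowing behavior described above can be sketched as follows. This is an illustrative sketch, not the server's actual implementation: the function names and the exact Jina Reader usage (prefixing the target URL with `https://r.jina.ai/`) are assumptions.

```python
import urllib.error
import urllib.request

# Assumption: Jina Reader is called by prefixing the target URL.
JINA_READER_PREFIX = "https://r.jina.ai/"


def fetch_with_fallback(url: str, timeout: float = 10.0) -> str:
    """Try the Jina Reader API first; fall back to a direct HTTP request."""
    for target in (JINA_READER_PREFIX + url, url):
        try:
            with urllib.request.urlopen(target, timeout=timeout) as resp:
                return resp.read().decode("utf-8", errors="replace")
        except urllib.error.URLError:
            continue  # Jina Reader failed; try the direct request next
    raise RuntimeError(f"Could not fetch {url}")


def apply_window(text: str, start_index: int = 0, max_length: int = 5000) -> str:
    """Apply the start_index / max_length windowing from the arguments above."""
    return text[start_index:start_index + max_length]
```

The windowing lets a client page through long documents by issuing repeated calls with an increasing `start_index`.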
Fetches content from a URL and saves it to a file. This tool attempts to get content using the Jina Reader API first, and falls back to direct HTTP request if that fails.
Arguments:
- `url` (string, required): The URL to fetch content from
- `file_path` (string, optional): The path where the file should be saved. If not provided, a filename is generated automatically from the URL domain and a timestamp
- `raw` (boolean, optional): Get raw content without markdown conversion (default: false)
Returns:
- The path where the file was saved
Example usage:
Please fetch and save the content from https://example.com to article.txt
Or with automatic naming:
Please fetch and save the content from https://example.com
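The automatic naming described above (URL domain plus timestamp) could look like the sketch below. The function name and the exact filename format are hypothetical; the server's real naming scheme may differ.

```python
from datetime import datetime
from urllib.parse import urlparse


def auto_filename(url: str) -> str:
    """Build a filename from the URL's domain and the current timestamp.

    Illustrative only: the actual format used by fetch_and_save may differ.
    """
    domain = urlparse(url).netloc or "unknown"
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{domain}_{stamp}.md"
```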
- **fetch**
  - Fetch a URL and extract its contents as markdown
  - Arguments:
    - `url` (string, required): URL to fetch
1. Clone or download the source code:

   ```bash
   git clone https://github.com/LangGPT/context-mcp-server.git
   cd context-mcp-server
   ```

2. Install dependencies using uv:

   ```bash
   uv sync
   ```

3. Test the server:

   ```bash
   uv run python -m context_mcp_server --help
   ```
Add this configuration to your Claude Desktop config file:
```json
{
  "mcpServers": {
    "context-mcp-server": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/your/context-mcp-server",
        "python",
        "-m",
        "context_mcp_server"
      ],
      "env": {
        "CONTEXT_DIR": "/path/to/your/data/directory"
      }
    }
  }
}
```
Configuration file locations:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
Add to your VS Code settings or `.vscode/mcp.json`:
```json
{
  "mcpServers": {
    "context-mcp-server": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/your/context-mcp-server",
        "python",
        "-m",
        "context_mcp_server"
      ],
      "env": {
        "CONTEXT_DIR": "/path/to/your/data/directory"
      }
    }
  }
}
```
When using `uv`, no specific installation is needed. We will use `uvx` to run context-mcp-server directly:

```bash
uvx context-mcp-server
```

Alternatively, install it with pip:

```bash
pip install context-mcp-server
```

After installation, run it as:

```bash
python -m context_mcp_server
```
```json
{
  "mcpServers": {
    "context-mcp-server": {
      "command": "uvx",
      "args": ["context-mcp-server"],
      "env": {
        "CONTEXT_DIR": "/path/to/your/data/directory"
      }
    }
  }
}
```
```json
{
  "mcp": {
    "servers": {
      "context-mcp-server": {
        "command": "uvx",
        "args": ["context-mcp-server"],
        "env": {
          "CONTEXT_DIR": "/path/to/your/data/directory"
        }
      }
    }
  }
}
```
Sets the working directory where files will be saved when using the `fetch_and_save` tool.
- **Default**: `data`
- **Priority**: `CONTEXT_DIR` environment variable > default value (`data`)
Example:

```bash
export CONTEXT_DIR=/path/to/your/data
```
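The priority rule above amounts to a simple environment lookup with a default. This is a minimal sketch, assuming the server resolves the directory roughly like this (the function name is illustrative):

```python
import os

DEFAULT_CONTEXT_DIR = "data"


def resolve_context_dir(env=None) -> str:
    """Return CONTEXT_DIR if set, otherwise the default 'data' directory."""
    env = os.environ if env is None else env
    return env.get("CONTEXT_DIR", DEFAULT_CONTEXT_DIR)
```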
By default, depending on whether the request came from the model (via a tool) or was user-initiated (via a prompt), the server will use either the user-agent:

```
ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)
```

or:

```
ModelContextProtocol/1.0 (User-Specified; +https://github.com/modelcontextprotocol/servers)
```

This can be customized by adding the argument `--user-agent=YourUserAgent` to the `args` list in the configuration.
The server can be configured to use a proxy via the `--proxy-url` argument.
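For example, both arguments can be appended to the `args` list in an MCP client configuration. The agent string and proxy address below are placeholders:

```json
{
  "mcpServers": {
    "context-mcp-server": {
      "command": "uvx",
      "args": [
        "context-mcp-server",
        "--user-agent=MyCustomAgent/1.0",
        "--proxy-url=http://localhost:8080"
      ]
    }
  }
}
```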
1. Install development dependencies:

   ```bash
   uv sync --dev
   ```

2. Run linting and type checking:

   ```bash
   uv run ruff check
   uv run pyright
   ```

3. Build the package:

   ```bash
   uv build
   ```
Test the server locally:

```bash
uv run python -m context_mcp_server
```

With a custom work directory:

```bash
CONTEXT_DIR=/custom/path uv run python -m context_mcp_server
```

Use the MCP inspector for debugging:

```bash
npx @modelcontextprotocol/inspector uv run python -m context_mcp_server
```

With a custom work directory:

```bash
CONTEXT_DIR=/custom/path npx @modelcontextprotocol/inspector uv run python -m context_mcp_server
```
- Edit the source code in `src/context_mcp_server/`
- Test your changes with `uv run python -m context_mcp_server`
- Update the version in `pyproject.toml` if needed
- Run tests and linting
You can use the MCP inspector to debug the server.

For local development:

```bash
npx @modelcontextprotocol/inspector uv run python -m context_mcp_server
```

For uvx installations:

```bash
npx @modelcontextprotocol/inspector uvx context-mcp-server
```
We encourage contributions to help expand and improve context-mcp-server. Whether you want to add new tools, enhance existing functionality, or improve documentation, your input is valuable.
context-mcp-server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.