MCP server that fetches web content in LLM-friendly formats. Automatically discovers and uses llms.txt files when available, tries Markdown versions, and falls back to clean HTML-to-Markdown conversion.
Add to your MCP client configuration. For most clients (for example, Claude Desktop):
```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "npx",
      "args": ["-y", "llms-fetch-mcp"]
    }
  }
}
```
Some clients (such as VS Code's `settings.json`) use an `mcp.servers` key instead:

```json
{
  "mcp.servers": {
    "llms-fetch": {
      "command": "npx",
      "args": ["-y", "llms-fetch-mcp"]
    }
  }
}
```
When you fetch a URL, the server tries multiple sources in parallel:
- `https://example.com/llms-full.txt` - Comprehensive LLM documentation
- `https://example.com/llms.txt` - Concise LLM documentation
- `https://example.com.md` - Markdown version
- `https://example.com/index.md` - Directory Markdown
- `https://example.com` - Original URL (converts HTML to Markdown if needed)
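As an illustration of that fallback order, here is a minimal Rust sketch of how the candidate URLs could be derived from a requested URL. This is not the server's actual implementation; the function name and string handling are assumptions made for the example.

```rust
/// Hypothetical helper: derive the candidate URLs to try, in priority order.
/// Illustrative sketch only -- not the actual llms-fetch-mcp code.
fn candidate_urls(url: &str) -> Vec<String> {
    let base = url.trim_end_matches('/');
    vec![
        format!("{base}/llms-full.txt"), // comprehensive LLM documentation
        format!("{base}/llms.txt"),      // concise LLM documentation
        format!("{base}.md"),            // Markdown version of the page
        format!("{base}/index.md"),      // directory Markdown
        base.to_string(),                // original URL (HTML converted if needed)
    ]
}

fn main() {
    for candidate in candidate_urls("https://example.com") {
        println!("{candidate}");
    }
}
```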
Content is cached locally in `.llms-fetch-mcp/` for quick access.
llms.txt is an emerging standard for websites to provide LLM-optimized documentation. Sites like FastHTML, Anthropic Docs, and others are adopting it. This server automatically discovers and uses these files when available, giving you cleaner, more concise content than HTML scraping.
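An llms.txt file is plain Markdown: an H1 title, an optional blockquote summary, and sections of links to further documentation. A minimal illustrative example (not taken from any real site):

```markdown
# Example Project

> Concise documentation for Example Project, optimized for LLM consumption.

## Docs

- [Quick start](https://example.com/quickstart.md): installation and first steps
- [API reference](https://example.com/api.md): complete API documentation
```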
If you prefer installing instead of using `npx`:
macOS/Linux:

```sh
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/Crazytieguy/llms-fetch-mcp/releases/latest/download/llms-fetch-mcp-installer.sh | sh
```

Windows (PowerShell):

```powershell
irm https://github.com/Crazytieguy/llms-fetch-mcp/releases/latest/download/llms-fetch-mcp-installer.ps1 | iex
```

Homebrew:

```sh
brew install Crazytieguy/tap/llms-fetch-mcp
```

npm:

```sh
npm install -g llms-fetch-mcp
```

Cargo:

```sh
cargo install llms-fetch-mcp
```
Then use the binary directly instead of `npx`:
```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "llms-fetch-mcp"
    }
  }
}
```
To cache to a custom directory instead of the default, pass the path as an argument:

```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "llms-fetch-mcp",
      "args": ["/path/to/custom/cache"]
    }
  }
}
```
License: MIT