Part of the Efficient MCP series - Web search optimized for LLMs with intelligent caching and rich context.
Now published! Install with `bunx @om-surushe/efficient-search`.
```bash
# Install globally
npm install -g @om-surushe/efficient-search

# Or run directly
bunx @om-surushe/efficient-search
```

Get your Google PSE credentials (the snippet after these steps shows a quick way to verify them):
- Create a Programmable Search Engine: https://programmablesearchengine.google.com/
- Get API key: https://console.cloud.google.com/apis/credentials
- Copy Search Engine ID from PSE dashboard
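
If you want to confirm the key and engine ID work before wiring them into an MCP client, you can call Google's Custom Search JSON API directly. A minimal sketch (the endpoint and query parameters are Google's; the script itself is illustrative, not part of this package):

```typescript
// verify-credentials.ts - quick sanity check for GOOGLE_API_KEY / SEARCH_ENGINE_ID.
// Run with: bun run verify-credentials.ts (assumes both env vars are set).
const key = process.env.GOOGLE_API_KEY;
const cx = process.env.SEARCH_ENGINE_ID;
if (!key || !cx) throw new Error("Set GOOGLE_API_KEY and SEARCH_ENGINE_ID first");

const url = new URL("https://www.googleapis.com/customsearch/v1");
url.searchParams.set("key", key);
url.searchParams.set("cx", cx);
url.searchParams.set("q", "hello world");

const res = await fetch(url);
if (!res.ok) throw new Error(`Request failed: ${res.status} ${res.statusText}`);
const data = await res.json();
console.log(`OK - got ${data.items?.length ?? 0} results`);
```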
Add to your MCP client config:
```json
{
  "mcpServers": {
    "efficient-search": {
      "command": "bunx",
      "args": ["@om-surushe/efficient-search"],
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here",
        "SEARCH_ENGINE_ID": "your_search_engine_id"
      }
    }
  }
}
```

Traditional Google Search API:
```json
{
  "title": "Example Page",
  "link": "https://example.com",
  "snippet": "Some text..."
}
```

❌ The LLM still has to extract metadata, clean HTML entities, and rank results = more tokens, slower.
Efficient Search MCP:
```json
{
  "title": "Example Page",
  "url": "https://example.com",
  "snippet": "Clean, formatted text",
  "displayUrl": "example.com",
  "relevance": 0.95,
  "metadata": {
    "description": "Full page description",
    "author": "Author name",
    "publishedDate": "2024-01-01",
    "thumbnail": "https://...",
    "siteName": "Example"
  },
  "summary": "Found 1,234 results. Most relevant: ..."
}
```

✅ Everything is pre-processed; the LLM just reads and responds = faster, more efficient, cached.
- Clean, structured results - No HTML entities, formatted for LLM consumption
- Rich metadata extraction - Author, publish date, thumbnails, descriptions
- Smart caching - 60-minute default TTL, configurable (see the sketch after this list)
- Relevance scoring - Pre-calculated relevance for each result
- LLM-friendly summaries - "Found X results, most relevant: ..."
- Geolocation & language - Filter by country and language
- Safe search - Configurable safety levels
- Built with Bun - Fast, modern TypeScript runtime
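
The caching behavior can be pictured as an in-memory map keyed by the query parameters, with a per-entry expiry. This is an illustrative sketch only; the class and method names are hypothetical, not the package's actual implementation:

```typescript
// Hypothetical in-memory TTL cache keyed by the serialized search parameters.
interface CacheEntry<T> {
  value: T;
  expiresAt: number; // epoch milliseconds
}

class SearchCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMinutes = 60) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: evict lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, {
      value,
      expiresAt: Date.now() + this.ttlMinutes * 60_000,
    });
  }

  clear(): void {
    this.entries.clear();
  }

  get size(): number {
    return this.entries.size;
  }
}

// Key the cache on the full set of search options so different
// filters (num, gl, lr, safe) don't collide.
const cache = new SearchCache<unknown>(Number(process.env.CACHE_TTL_MINUTES ?? 60));
const cacheKey = JSON.stringify({ query: "typescript best practices", num: 10 });
```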
| Tool | Description |
|---|---|
| `web_search` | Search the web with LLM-optimized results |
| `clear_cache` | Clear cached search results |
| `get_cache_stats` | View cache size, TTL, and hit rate |
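
A standalone MCP client can spawn the server over stdio and call these tools directly. A minimal sketch assuming the official `@modelcontextprotocol/sdk` TypeScript client (the client name and query are placeholders):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server the same way an MCP host would.
const transport = new StdioClientTransport({
  command: "bunx",
  args: ["@om-surushe/efficient-search"],
  env: {
    GOOGLE_API_KEY: process.env.GOOGLE_API_KEY ?? "",
    SEARCH_ENGINE_ID: process.env.SEARCH_ENGINE_ID ?? "",
  },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Call the web_search tool and print the pre-processed result payload.
const result = await client.callTool({
  name: "web_search",
  arguments: { query: "typescript best practices" },
});
console.log(result.content);

await client.close();
```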
Environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| `GOOGLE_API_KEY` | Yes | - | Google Cloud API key |
| `SEARCH_ENGINE_ID` | Yes | - | Programmable Search Engine ID |
| `CACHE_TTL_MINUTES` | No | 60 | Cache time-to-live in minutes |
| `MAX_RESULTS` | No | 10 | Maximum results per query |
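
These variables only need to be read and validated once at startup, with defaults matching the table above. A hypothetical sketch of that loading step (names are illustrative, not the package's actual code):

```typescript
// Hypothetical config loader mirroring the environment variable table.
interface SearchConfig {
  apiKey: string;
  searchEngineId: string;
  cacheTtlMinutes: number; // default 60
  maxResults: number;      // default 10
}

export function loadConfig(env: NodeJS.ProcessEnv = process.env): SearchConfig {
  const apiKey = env.GOOGLE_API_KEY;
  const searchEngineId = env.SEARCH_ENGINE_ID;
  if (!apiKey || !searchEngineId) {
    throw new Error("GOOGLE_API_KEY and SEARCH_ENGINE_ID must be set");
  }
  return {
    apiKey,
    searchEngineId,
    cacheTtlMinutes: Number(env.CACHE_TTL_MINUTES ?? 60),
    maxResults: Number(env.MAX_RESULTS ?? 10),
  };
}
```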
Basic search:

```typescript
web_search({ query: "typescript best practices" })
```

Search with filters:

```typescript
web_search({
  query: "machine learning papers",
  num: 5,          // number of results
  gl: "us",        // geolocation (country)
  lr: "lang_en",   // language restriction
  safe: "high"     // safe search level
})
```

Example response:

```json
{
"query": "typescript best practices",
"totalResults": 12400000,
"searchTime": 0.45,
"cached": false,
"summary": "Found 12,400,000 results. Most relevant: ...",
"results": [
{
"title": "TypeScript Best Practices Guide",
"url": "https://example.com/guide",
"snippet": "Clean, formatted snippet...",
"displayUrl": "example.com",
"relevance": 1.0,
"metadata": {
"description": "Comprehensive guide...",
"author": "John Doe",
"publishedDate": "2024-01-15"
}
}
]
}- Runtime: Bun - Fast JavaScript runtime
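
For reference, the response above corresponds roughly to the following TypeScript shape (type names are illustrative, not exported by the package):

```typescript
// Illustrative types matching the example response; not the package's public API.
interface ResultMetadata {
  description?: string;
  author?: string;
  publishedDate?: string; // ISO date string
  thumbnail?: string;
  siteName?: string;
}

interface SearchResult {
  title: string;
  url: string;
  snippet: string;
  displayUrl: string;
  relevance: number; // 0..1, pre-calculated
  metadata: ResultMetadata;
}

interface WebSearchResponse {
  query: string;
  totalResults: number;
  searchTime: number; // seconds
  cached: boolean;
  summary: string;
  results: SearchResult[];
}
```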
Tech stack:

- Runtime: Bun - Fast JavaScript runtime
- Language: TypeScript 5.3+
- Protocol: Model Context Protocol (MCP)
- API: Google Programmable Search Engine
Development:

```bash
# Install dependencies
npm install

# Run in dev mode
bun run dev

# Build
bun run build

# Type check
bun run typecheck

# Lint
bun run lint
```

Related packages:

- @om-surushe/efficient-ticktick - LLM-optimized TickTick task management
- @om-surushe/efficient-search - LLM-optimized web search (this package)
- More coming soon...
All packages focus on:
- Token efficiency - Pre-processed, rich context
- LLM-first design - Built for AI consumption
- Professional quality - Production-ready, tested, documented
MIT License - see LICENSE for details.
Author: Om Surushe
- GitHub: @om-surushe
- LinkedIn: om-surushe
- npm: @om-surushe
Made with ❤️ and Bun for AI assistants