Deep-research MCP app. Exposes one task-augmented tool (`start_research`) that runs GPT-Researcher against Tavily (web search), Anthropic Claude (planner + writer LLM), and OpenAI (embeddings only), and streams progress back through both the MCP tasks protocol and the Upjack entity stream. Fully compliant with the MCP 2025-11-25 draft tasks utility via FastMCP 3.
View on mpak registry | Built by NimbleBrain
Install via mpak into your NimbleBrain workspace:
```
mpak install @nimblebraininc/synapse-research
```

Set the three required credentials in your host's shell (Bun auto-loads `.env`, or export directly):

```
export ANTHROPIC_API_KEY=sk-ant-...
export TAVILY_API_KEY=tvly-...
export OPENAI_API_KEY=sk-...
```

Run a research task from your agent chat:
"Research what's new with Model Context Protocol in 2026"
The agent fires `start_research`, the worker streams progress into the chat UI and into the Synapse sidebar dashboard, and you get back a markdown report in ~30s–3min.
```
chat: "research X"
        │
        ▼
NimbleBrain engine ──┐
        │ tools/call (task-augmented)
        ▼
FastMCP server (this app)
        │
        ├─► creates research_run entity (status=working)
        ├─► spawns worker (asyncio)
        │       │
        │       ├─► ctx.report_progress ──► notifications/tasks/status ──► engine
        │       └─► app.update_entity ──► filesystem ──► Synapse UI live stream
        │
        └─► returns CreateTaskResult immediately
            (engine polls tasks/get, retrieves via tasks/result when terminal)
```
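From the client's side, this flow reduces to a generic polling loop: call the tool, get a task handle back immediately, poll until the task is terminal, then fetch the result. The sketch below is illustrative only — the client object is a stub, and its method names merely mirror the MCP tasks draft (`tasks/get`, `tasks/result`), not any real SDK API.

```python
import time

# Terminal task states per the MCP tasks draft.
TERMINAL = {"completed", "failed", "cancelled"}

def run_task(client, tool: str, args: dict, poll_s: float = 0.01):
    """Call a task-augmented tool and poll until a terminal state."""
    task = client.call_tool(tool, args)          # returns a CreateTaskResult-like handle
    while True:
        status = client.tasks_get(task["taskId"])["status"]
        if status in TERMINAL:
            break
        time.sleep(poll_s)
    return client.tasks_result(task["taskId"])   # retrieve the final payload

# Minimal fake client, just to show the shape of the loop:
class FakeClient:
    def __init__(self):
        self.polls = 0
    def call_tool(self, tool, args):
        return {"taskId": "t1"}
    def tasks_get(self, task_id):
        self.polls += 1
        return {"status": "working" if self.polls < 3 else "completed"}
    def tasks_result(self, task_id):
        return {"report": "# done"}

print(run_task(FakeClient(), "start_research", {"query": "X"}))  # → {'report': '# done'}
```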
Two independent channels update in lockstep:
- Engine channel — MCP task status notifications. The engine uses these to render progress in the chat UI and to stabilise polling cadence.
- UI channel — entity writes via Upjack. The Synapse sidebar app reads the entity stream to render a live dashboard of runs.
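The lockstep behaviour amounts to a fan-out: every progress event is written to both channels. The sketch below is a placeholder illustration — `DualChannelReporter` and its list fields stand in for the real `ctx.report_progress` and `app.update_entity` calls, which are not shown here.

```python
from dataclasses import dataclass, field

@dataclass
class DualChannelReporter:
    """Fan one progress event out to both the engine and the UI channel."""
    engine_events: list = field(default_factory=list)   # stands in for ctx.report_progress
    entity_writes: list = field(default_factory=list)   # stands in for app.update_entity

    def report(self, run_id: str, progress: float, message: str) -> None:
        # Channel 1: MCP task status notification (engine renders chat progress)
        self.engine_events.append({"progress": progress, "message": message})
        # Channel 2: entity write (Synapse sidebar reads the entity stream)
        self.entity_writes.append({"id": run_id, "status": "working",
                                   "progress": progress, "message": message})

reporter = DualChannelReporter()
reporter.report("run-1", 0.25, "planning queries")
reporter.report("run-1", 0.5, "searching")
```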
The host runtime prompts for these at install time or resolves them from a workspace-scoped store, then injects them into the bundle subprocess via `mcp_config.env`:

| Config key | Env var exposed | Purpose |
|---|---|---|
| `anthropic_api_key` | `ANTHROPIC_API_KEY` | Claude LLM — planning + report writing |
| `tavily_api_key` | `TAVILY_API_KEY` | Web search |
| `openai_api_key` | `OPENAI_API_KEY` | Embeddings only (`text-embedding-3-small`) |
All three are required and marked `sensitive: true`.
Not tenant-tunable in v1 — set directly in the manifest:

```
RETRIEVER=tavily
FAST_LLM=anthropic:claude-haiku-4-5
SMART_LLM=anthropic:claude-sonnet-4-6
STRATEGIC_LLM=anthropic:claude-sonnet-4-6
EMBEDDING=openai:text-embedding-3-small
```
To change an LLM or retriever, edit `manifest.json` and reinstall the bundle. Promoting any of these to `user_config` is a one-line change if per-workspace tuning is needed.
- Typical run: 30s–3min.
- Typical cost: $0.15–$0.60/run on Sonnet 4.6 + Tavily advanced + OpenAI embeddings.
- Hard cap: 5 minutes via `asyncio.wait_for`. Longer runs are marked `failed` with a timeout error.
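The hard cap can be sketched with `asyncio.wait_for`. This is an illustrative reduction, not the app's real worker code: the entity is a plain dict and `run_with_cap` is a hypothetical helper.

```python
import asyncio

RESEARCH_TIMEOUT_S = 300  # 5-minute hard cap

async def run_with_cap(worker_coro, entity: dict, timeout: float = RESEARCH_TIMEOUT_S):
    """Run the worker under a hard timeout; mark the run failed on expiry."""
    try:
        return await asyncio.wait_for(worker_coro, timeout=timeout)
    except asyncio.TimeoutError:
        # Flip the run entity to its failed terminal state with a timeout error.
        entity["status"] = "failed"
        entity["error"] = f"timed out after {timeout}s"
        return None

async def slow_worker():
    await asyncio.sleep(10)  # simulates a run that exceeds the cap
    return "report"

entity = {"status": "working"}
result = asyncio.run(run_with_cap(slow_worker(), entity, timeout=0.01))
print(entity["status"])  # → failed
```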
One entity: `research_run`. Lives under:

```
$UPJACK_ROOT/apps/research/data/research_runs/{id}.json
```
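A run file on disk might look like the following. This is an illustrative shape only — apart from the `status` values used throughout this README, every field name here is an assumption, not the app's actual schema:

```json
{
  "id": "run-01",
  "query": "What's new with Model Context Protocol in 2026",
  "status": "working",
  "progress": 0.4,
  "report": null,
  "sources": ["https://example.com/article"]
}
```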
Data-root resolution priority:
1. `UPJACK_ROOT` env var
2. `MPAK_WORKSPACE` env var
3. `~/.synapse-research` (fallback)
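The resolution order can be sketched as a simple first-match helper (the function name is illustrative, not the app's actual code):

```python
import os
from pathlib import Path

def resolve_data_root(env: dict) -> Path:
    """Pick the data root by priority: UPJACK_ROOT, then MPAK_WORKSPACE, then fallback."""
    if env.get("UPJACK_ROOT"):
        return Path(env["UPJACK_ROOT"])
    if env.get("MPAK_WORKSPACE"):
        return Path(env["MPAK_WORKSPACE"])
    return Path.home() / ".synapse-research"

print(resolve_data_root({"UPJACK_ROOT": "/srv/ws1"}))  # /srv/ws1 (on POSIX)
print(resolve_data_root({}))                           # falls back to ~/.synapse-research
```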
Each workspace spawns its own server process with its own root. There is no cross-workspace state inside the server.
```
uv sync
cd ui && npm install && npm run build && cd ..
```

Run the MCP server:

```
uv run python -m mcp_research.server
```

or serve over HTTP:

```
uv run uvicorn mcp_research.server:app --port 8002
```

Run the tests:

```
uv run pytest tests/ -v
```

The spec-compliance suite (`tests/test_spec_compliance.py`) exercises every MUST from the MCP tasks draft: capability advertisement, `execution.taskSupport` gating, `tasks/get|result|cancel|list`, TTL behaviour, progress notifications, workspace isolation. The worker suite (`tests/test_worker.py`) covers the happy path, cancel, failure, monotonic progress, and source streaming. All tests use a `FakeGPTR` monkeypatch so real providers are never called in CI.
| Tool | Task support | Description |
|---|---|---|
| `start_research` | `optional` | The only custom tool. Runs the research worker end-to-end. |
| `get_research_run` | n/a | Auto-generated entity tool (read by id). |
| `list_research_runs` | n/a | Auto-generated entity tool. |
| `search_research_runs` | n/a | Auto-generated entity tool. |
| `delete_research_run` | n/a | Auto-generated entity tool (soft delete). |
Cancellation is handled at the MCP protocol level via `tasks/cancel`. The worker catches `asyncio.CancelledError`, flips the entity to `cancelled`, and re-raises so FastMCP transitions the task to its cancelled terminal state.
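A minimal sketch of that pattern, with the entity handling reduced to a plain dict (the worker body here is a placeholder, not the app's real code):

```python
import asyncio

async def research_worker(entity: dict):
    try:
        await asyncio.sleep(60)        # stands in for the real research run
        entity["status"] = "completed"
    except asyncio.CancelledError:
        entity["status"] = "cancelled"  # UI channel sees the terminal state
        raise                           # re-raise so the task framework sees the cancel

async def main():
    entity = {"status": "working"}
    task = asyncio.create_task(research_worker(entity))
    await asyncio.sleep(0)  # let the worker start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass  # expected: the worker re-raised
    return entity

entity = asyncio.run(main())
print(entity["status"])  # → cancelled
```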
See CLAUDE.md for the architecture walkthrough, commands, conventions, and build pipeline.
Quality gates (run before opening a PR):

```
uv run ruff check src/ tests/
uv run ruff format --check src/ tests/
uv run ty check src/
uv run pytest tests/ -v
cd ui && npm ci && npm run build
```

CI enforces the same gates — see `.github/workflows/ci.yml`.
- NimbleBrain — the agent platform this app runs on
- mpak — MCP bundle registry where releases are published
- Upjack — declarative AI-app framework (entity schemas, skills, hooks)
- Synapse SDK — React hooks powering the UI
- GPT-Researcher — Apache-2.0 research engine this app wraps
- Discord community
MIT — see LICENSE.