synapse-research


Deep-research MCP app. Exposes one task-augmented tool (start_research) that runs GPT-Researcher against Tavily (web search), Anthropic Claude (planner + writer LLM), and OpenAI (embeddings only), and streams progress back through both the MCP tasks protocol and the Upjack entity stream. Fully compliant with the MCP 2025-11-25 draft tasks utility via FastMCP 3.

View on mpak registry | Built by NimbleBrain

Quick Start

Install via mpak into your NimbleBrain workspace:

mpak install @nimblebraininc/synapse-research

Set the three required credentials in your host's shell (Bun auto-loads .env, or export directly):

export ANTHROPIC_API_KEY=sk-ant-...
export TAVILY_API_KEY=tvly-...
export OPENAI_API_KEY=sk-...

Run a research task from your agent chat:

"Research what's new with Model Context Protocol in 2026"

The agent fires start_research, the worker streams progress into the chat UI and into the Synapse sidebar dashboard, and you get back a markdown report in ~30s–3min.

Architecture

chat: "research X"
  │
  ▼
NimbleBrain engine ──┐
                     │  tools/call (task-augmented)
                     ▼
            FastMCP server (this app)
                     │
                     ├─► creates research_run entity (status=working)
                     ├─► spawns worker (asyncio)
                     │     │
                     │     ├─► ctx.report_progress  ──► notifications/tasks/status ──► engine
                     │     └─► app.update_entity    ──► filesystem ──► Synapse UI live stream
                     │
                     └─► returns CreateTaskResult immediately
                         (engine polls tasks/get, retrieves via tasks/result when terminal)
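The spawn-and-return-immediately flow above can be sketched in plain asyncio. In-memory dicts stand in for the entity store and task registry, and FastMCP's actual task plumbing is not shown; all names here are illustrative:

```python
import asyncio
import uuid

# Hypothetical in-memory stores standing in for the entity layer and task registry.
RUNS: dict[str, dict] = {}
TASKS: dict[str, asyncio.Task] = {}

async def research_worker(run_id: str, query: str) -> None:
    """Stand-in for the GPT-Researcher worker: emits progress, then a report."""
    for pct in (25, 50, 75):
        RUNS[run_id]["progress"] = pct   # entity write -> Synapse UI channel
        await asyncio.sleep(0)           # the real worker awaits network calls here
    RUNS[run_id].update(status="completed", report=f"# Report: {query}")

async def start_research(query: str) -> dict:
    """Create the entity, spawn the worker, and return a task handle immediately."""
    run_id = uuid.uuid4().hex
    RUNS[run_id] = {"status": "working", "progress": 0}
    TASKS[run_id] = asyncio.create_task(research_worker(run_id, query))
    return {"taskId": run_id, "status": "working"}  # analogous to CreateTaskResult

async def demo() -> dict:
    handle = await start_research("What's new with MCP?")
    await TASKS[handle["taskId"]]  # the engine would poll tasks/get instead
    return RUNS[handle["taskId"]]
```

The key property is that `start_research` returns before any research happens; the engine learns of completion by polling, not by blocking on the call.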

Two independent channels update in lockstep:

  • Engine channel — MCP task status notifications. The engine uses these to render progress in the chat UI and to stabilise polling cadence.
  • UI channel — entity writes via Upjack. The Synapse sidebar app reads the entity stream to render a live dashboard of runs.

Configuration

Credentials (declared as user_config in manifest.json)

The host runtime prompts for these at install time or resolves them from a workspace-scoped store, then injects them into the bundle subprocess via mcp_config.env:

| Config key | Env var exposed | Purpose |
| --- | --- | --- |
| `anthropic_api_key` | `ANTHROPIC_API_KEY` | Claude LLM (planning + report writing) |
| `tavily_api_key` | `TAVILY_API_KEY` | Web search |
| `openai_api_key` | `OPENAI_API_KEY` | Embeddings only (`text-embedding-3-small`) |

All three are required and marked sensitive: true.
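As a rough sketch only (the manifest schema beyond `user_config` and `sensitive: true` is not shown here, so the field names and the `${...}` interpolation syntax below are assumptions), the credential declarations might look like:

```json
{
  "user_config": {
    "anthropic_api_key": { "type": "string", "sensitive": true },
    "tavily_api_key": { "type": "string", "sensitive": true },
    "openai_api_key": { "type": "string", "sensitive": true }
  },
  "mcp_config": {
    "env": {
      "ANTHROPIC_API_KEY": "${user_config.anthropic_api_key}",
      "TAVILY_API_KEY": "${user_config.tavily_api_key}",
      "OPENAI_API_KEY": "${user_config.openai_api_key}"
    }
  }
}
```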

Routing (hard-coded in mcp_config.env)

Not tenant-tunable in v1 — set directly in the manifest:

RETRIEVER=tavily
FAST_LLM=anthropic:claude-haiku-4-5
SMART_LLM=anthropic:claude-sonnet-4-6
STRATEGIC_LLM=anthropic:claude-sonnet-4-6
EMBEDDING=openai:text-embedding-3-small

To change an LLM or retriever, edit manifest.json and reinstall the bundle. Promoting any of these to user_config is a one-line change if per-workspace tuning is needed.

Cost and latency

  • Typical run: 30s–3min.
  • Typical cost: $0.15–$0.60/run on Sonnet 4.6 + Tavily advanced + OpenAI embeddings.
  • Hard-cap: 5 minutes via asyncio.wait_for. Longer runs are marked failed with a timeout error.
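The hard cap described above is straightforward to sketch with `asyncio.wait_for` (the function and field names here are hypothetical, not the server's actual code):

```python
import asyncio

RESEARCH_TIMEOUT_S = 300  # the 5-minute hard cap

async def run_with_cap(worker_coro, run: dict, timeout: float = RESEARCH_TIMEOUT_S) -> None:
    """Wrap a worker coroutine in the hard timeout; mark the run failed on expiry."""
    try:
        await asyncio.wait_for(worker_coro, timeout=timeout)
    except asyncio.TimeoutError:
        run["status"] = "failed"
        run["error"] = "timeout: exceeded hard cap"
```

`wait_for` cancels the inner coroutine on expiry, so the worker's own cancellation handling also runs before the run is marked failed.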

Data layout

One entity: research_run. Lives under:

$UPJACK_ROOT/apps/research/data/research_runs/{id}.json

Data-root resolution priority:

  1. UPJACK_ROOT env var
  2. MPAK_WORKSPACE env var
  3. ~/.synapse-research (fallback)
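Under those rules, resolution could be sketched as follows (a hypothetical helper, not the server's actual code):

```python
from pathlib import Path

def resolve_data_root(env: dict[str, str]) -> Path:
    """Resolve the Upjack data root using the documented priority order."""
    if root := env.get("UPJACK_ROOT"):
        return Path(root)
    if workspace := env.get("MPAK_WORKSPACE"):
        return Path(workspace)
    return Path.home() / ".synapse-research"

def run_path(env: dict[str, str], run_id: str) -> Path:
    """Path of a single research_run entity under the resolved root."""
    return resolve_data_root(env) / "apps/research/data/research_runs" / f"{run_id}.json"
```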

Each workspace spawns its own server process with its own root. There is no cross-workspace state inside the server.

Running locally

Install deps

uv sync
cd ui && npm install && npm run build && cd ..

Stdio (Claude Desktop, any MCP client)

uv run python -m mcp_research.server

HTTP (NimbleBrain platform)

uv run uvicorn mcp_research.server:app --port 8002

Tests (keyless — no API keys required)

uv run pytest tests/ -v

The spec-compliance suite (tests/test_spec_compliance.py) exercises every MUST from the MCP tasks draft: capability advertisement, execution.taskSupport gating, tasks/get|result|cancel|list, TTL behaviour, progress notifications, workspace isolation. The worker suite (tests/test_worker.py) covers happy path, cancel, failure, monotonic progress, and source streaming. All tests use a FakeGPTR monkeypatch so real providers are never called in CI.
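The FakeGPTR pattern can be illustrated roughly like this. Method names are modeled on GPT-Researcher's `conduct_research`/`write_report` API, but this is a hypothetical stand-in, not the repo's actual fixture:

```python
class FakeGPTR:
    """Fake researcher: records the query and returns a canned report.
    No network calls, so tests run keyless."""
    def __init__(self, query: str):
        self.query = query

    async def conduct_research(self) -> None:
        pass  # the real class would hit Tavily + Claude here

    async def write_report(self) -> str:
        return f"# Fake report for: {self.query}"

async def run_research(query: str, researcher_cls=FakeGPTR) -> str:
    """Worker entry point; production code would inject the real class instead."""
    researcher = researcher_cls(query)
    await researcher.conduct_research()
    return await researcher.write_report()
```

In the actual suite the swap is done with pytest's `monkeypatch` rather than a constructor argument, but the effect is the same: the worker's control flow is exercised without ever touching a provider.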

Tool reference

| Tool | Task support | Description |
| --- | --- | --- |
| `start_research` | optional | The only custom tool. Runs the research worker end-to-end. |
| `get_research_run` | n/a | Auto-generated entity tool (read by id). |
| `list_research_runs` | n/a | Auto-generated entity tool. |
| `search_research_runs` | n/a | Auto-generated entity tool. |
| `delete_research_run` | n/a | Auto-generated entity tool (soft delete). |

Cancellation is handled at the MCP protocol level via tasks/cancel. The worker catches asyncio.CancelledError, flips the entity to cancelled, and re-raises so FastMCP transitions the task to its cancelled terminal state.
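A minimal sketch of that handoff, in plain asyncio (the `run` dict stands in for the entity; FastMCP's task-state machinery is assumed, not shown):

```python
import asyncio

async def worker(run: dict) -> None:
    """On cancellation, flip the entity state and re-raise so the task
    framework can move the task to its cancelled terminal state."""
    try:
        await asyncio.sleep(60)        # stands in for the research pipeline
    except asyncio.CancelledError:
        run["status"] = "cancelled"    # entity write visible to the UI
        raise                          # FastMCP observes the cancellation

async def demo_cancel() -> dict:
    run = {"status": "working"}
    task = asyncio.create_task(worker(run))
    await asyncio.sleep(0)             # let the worker start
    task.cancel()                      # what tasks/cancel ultimately triggers
    try:
        await task
    except asyncio.CancelledError:
        pass
    return run
```

Re-raising is the important detail: swallowing `CancelledError` would leave the task looking like it completed normally.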

Contributing

See CLAUDE.md for the architecture walkthrough, commands, conventions, and build pipeline.

Quality gates (run before opening a PR):

uv run ruff check src/ tests/
uv run ruff format --check src/ tests/
uv run ty check src/
uv run pytest tests/ -v
cd ui && npm ci && npm run build

CI enforces the same gates — see .github/workflows/ci.yml.


License

MIT — see LICENSE.
