Open Deep Research is an experimental, fully open-source research assistant that automates deep research and produces comprehensive reports on any topic. It features two implementations - a workflow and a multi-agent architecture - each with distinct advantages. You can customize the entire research and writing process with specific models, prompts, report structure, and search tools.
Clone the repository:
git clone https://github.com/langchain-ai/open_deep_research.git
cd open_deep_research
Copy the example environment file, then edit .env to customize the environment variables (for model selection, search tools, and other configuration settings):
cp .env.example .env
Launch the assistant with the LangGraph server locally, which will open in your browser. You can use either uv or pip:

# Option 1: with the uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies and start the LangGraph server
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev --allow-blocking

# Option 2: with pip, install dependencies
pip install -e .
pip install -U "langgraph-cli[inmem]"

# Start the LangGraph server
langgraph dev
Use this to open the Studio UI:
- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs
To use the multi-agent implementation:
(1) Chat with the agent about your topic of interest, and it will initiate report generation.
(2) The report is produced as markdown.
To use the workflow implementation:
(1) Provide a topic.
(2) This will generate a report plan and present it to you for review.
(3) Pass a string ("...") with feedback to regenerate the plan based on the feedback.
(4) Or, pass true to the JSON input box in Studio to accept the plan.
(5) Once accepted, the report sections will be generated.

The report is produced as markdown.
Available search tools:
- Tavily API - General web search
- Perplexity API - General web search
- Exa API - Powerful neural search for web content
- ArXiv - Academic papers in physics, mathematics, computer science, and more
- PubMed - Biomedical literature from MEDLINE, life science journals, and online books
- Linkup API - General web search
- DuckDuckGo API - General web search
- Google Search API/Scraper - Create a custom search engine and obtain an API key to use it
- Microsoft Azure AI Search - Cloud-based vector database solution
Open Deep Research is compatible with many different LLMs:
- You can select any model that is integrated with the `init_chat_model()` API - see the full list of supported integrations here
You can also install Open Deep Research as a package:
pip install open-deep-research
See src/open_deep_research/graph.ipynb and src/open_deep_research/multi_agent.ipynb for example usage in a Jupyter notebook.
Open Deep Research features two distinct implementation approaches, each with its own strengths:
The graph-based implementation follows a structured plan-and-execute workflow:
- Planning Phase: Uses a planner model to analyze the topic and generate a structured report plan
- Human-in-the-Loop: Allows for human feedback and approval of the report plan before proceeding
- Sequential Research Process: Creates sections one by one with reflection between search iterations
- Section-Specific Research: Each section has dedicated search queries and content retrieval
- Supports Multiple Search Tools: Works with all search providers (Tavily, Perplexity, Exa, ArXiv, PubMed, Linkup, etc.)
This implementation provides a more interactive experience with greater control over the report structure, making it ideal for situations where report quality and accuracy are critical.
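For example, the graph.ipynb notebook drives this workflow roughly as follows. This is a minimal sketch, assuming the workflow's graph builder is exported as `builder` from `open_deep_research.graph` (as in the example notebook) and that it runs inside a Jupyter notebook or another async context; the topic string is only illustrative:

```python
import uuid
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command
from open_deep_research.graph import builder  # assumed export, as used in graph.ipynb

# Compile with an in-memory checkpointer so the run can pause for plan feedback
graph = builder.compile(checkpointer=MemorySaver())
thread = {"configurable": {"thread_id": str(uuid.uuid4())}}

# Kick off planning; the workflow interrupts after presenting the report plan
async for event in graph.astream({"topic": "Overview of the Model Context Protocol (MCP)"}, thread, stream_mode="updates"):
    print(event)

# Resume with True to accept the plan, or with a feedback string to regenerate it
async for event in graph.astream(Command(resume=True), thread, stream_mode="updates"):
    print(event)
```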
You can customize the research assistant workflow through several parameters:
- `report_structure`: Define a custom structure for your report (defaults to a standard research report format)
- `number_of_queries`: Number of search queries to generate per section (default: 2)
- `max_search_depth`: Maximum number of reflection and search iterations (default: 2)
- `planner_provider`: Model provider for the planning phase (default: "anthropic", but can be any provider from the supported integrations with `init_chat_model` as listed here)
- `planner_model`: Specific model for planning (default: "claude-3-7-sonnet-latest")
- `planner_model_kwargs`: Additional parameters for planner_model
- `writer_provider`: Model provider for the writing phase (default: "anthropic", but can be any provider from the supported integrations with `init_chat_model` as listed here)
- `writer_model`: Model for writing the report (default: "claude-3-5-sonnet-latest")
- `writer_model_kwargs`: Additional parameters for writer_model
- `search_api`: API to use for web searches (default: "tavily", options include "perplexity", "exa", "arxiv", "pubmed", "linkup")
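For illustration, these parameters are passed through the run's configurable values, following the same pattern as the search API configuration example later in this document. The sketch below simply spells out the documented defaults plus a made-up report structure string; it is not a required setup:

```python
import uuid

thread = {"configurable": {
    "thread_id": str(uuid.uuid4()),
    "report_structure": "Introduction, 2-3 main body sections, conclusion",  # illustrative custom structure
    "number_of_queries": 2,
    "max_search_depth": 2,
    "planner_provider": "anthropic",
    "planner_model": "claude-3-7-sonnet-latest",
    "writer_provider": "anthropic",
    "writer_model": "claude-3-5-sonnet-latest",
    "search_api": "tavily",
}}
```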
The multi-agent implementation uses a supervisor-researcher architecture:
- Supervisor Agent: Manages the overall research process, plans sections, and assembles the final report
- Researcher Agents: Multiple independent agents work in parallel, each responsible for researching and writing a specific section
- Parallel Processing: All sections are researched simultaneously, significantly reducing report generation time
- Specialized Tool Design: Each agent has access to specific tools for its role (search for researchers, section planning for supervisors)
- Search and MCP Support: Works with Tavily/DuckDuckGo for web search, MCP servers for local/external data access, or can operate without search tools using only MCP tools
This implementation focuses on efficiency and parallelization, making it ideal for faster report generation with less direct user involvement.
You can customize the multi-agent implementation through several parameters:
- `supervisor_model`: Model for the supervisor agent (default: "anthropic:claude-3-5-sonnet-latest")
- `researcher_model`: Model for researcher agents (default: "anthropic:claude-3-5-sonnet-latest")
- `number_of_queries`: Number of search queries to generate per section (default: 2)
- `search_api`: API to use for web searches (default: "tavily", options include "duckduckgo", "none")
- `ask_for_clarification`: Whether the supervisor should ask clarifying questions before research (default: false) - Important: set to `true` to enable the Question tool for the supervisor agent
- `mcp_server_config`: Configuration for MCP servers (optional)
- `mcp_prompt`: Additional instructions for using MCP tools (optional)
- `mcp_tools_to_include`: Specific MCP tools to include (optional)
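As a sketch, a multi-agent run with several of these parameters might look like the following. It assumes the compiled multi-agent graph is exported as `graph` from `open_deep_research.multi_agent` (as used in multi_agent.ipynb) and that it runs in a notebook or other async context; the topic message and model choices are only examples:

```python
import uuid
from open_deep_research.multi_agent import graph  # assumed export, as used in multi_agent.ipynb

config = {"configurable": {
    "thread_id": str(uuid.uuid4()),
    "supervisor_model": "anthropic:claude-3-5-sonnet-latest",
    "researcher_model": "anthropic:claude-3-5-sonnet-latest",
    "number_of_queries": 2,
    "search_api": "tavily",
    "ask_for_clarification": True,  # enables the Question tool for the supervisor
}}

# The multi-agent implementation is chat-driven: send the topic as a user message.
# The returned state includes the assembled report.
result = await graph.ainvoke(
    {"messages": [{"role": "user", "content": "Give me an overview of the Model Context Protocol (MCP)"}]},
    config=config,
)
```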
The multi-agent implementation (src/open_deep_research/multi_agent.py) supports MCP servers to extend research capabilities beyond web search. MCP tools are available to research agents alongside or instead of traditional search tools, enabling access to local files, databases, APIs, and other data sources.
Note: MCP support is currently only available in the multi-agent implementation (src/open_deep_research/multi_agent.py), not in the workflow implementation (src/open_deep_research/graph.py).
- Tool Integration: MCP tools are seamlessly integrated with existing search and section-writing tools
- Research Agent Access: Only research agents (not supervisors) have access to MCP tools
- Flexible Configuration: Use MCP tools alone or combined with web search
- Disable Default Search: Set `search_api: "none"` to disable web search tools entirely
- Custom Prompts: Add specific instructions for using MCP tools
config = {
"configurable": {
"search_api": "none", # Use "tavily" or "duckduckgo" to combine with web search
"mcp_server_config": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/path/to/your/files"
],
"transport": "stdio"
}
},
"mcp_prompt": "Step 1: Use the `list_allowed_directories` tool to get the list of allowed directories. Step 2: Use the `read_file` tool to read files in the allowed directory.",
"mcp_tools_to_include": ["list_allowed_directories", "list_directory", "read_file"] # Optional: specify which tools to include
}
}
MCP server config:
{
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/rlm/Desktop/Code/open_deep_research/src/open_deep_research/files"
],
"transport": "stdio"
}
}
MCP prompt:
CRITICAL: You MUST follow this EXACT sequence when using filesystem tools:
1. FIRST: Call `list_allowed_directories` tool to discover allowed directories
2. SECOND: Call `list_directory` tool on a specific directory from step 1 to see available files
3. THIRD: Call `read_file` tool to read specific files found in step 2
DO NOT call `list_directory` or `read_file` until you have first called `list_allowed_directories`. You must discover the allowed directories before attempting to browse or read files.
MCP tools:
- list_allowed_directories
- list_directory
- read_file
Example test topic and follow-up feedback you can provide that will reference the included file:
Topic:
I want an overview of vibe coding
Follow-up to the question asked by the research agent:
I just want a single section report on vibe coding that highlights an interesting / fun example
Resulting trace:
https://smith.langchain.com/public/d871311a-f288-4885-8f70-440ab557c3cf/r
- `mcp_server_config`: Dictionary defining MCP server configurations (see langchain-mcp-adapters examples)
- `mcp_prompt`: Optional instructions added to research agent prompts for using MCP tools
- `mcp_tools_to_include`: Optional list of specific MCP tool names to include (if not set, all tools from all servers are included)
- `search_api`: Set to "none" to use only MCP tools, or keep existing search APIs to combine both
- Local Documentation: Access project documentation, code files, or knowledge bases
- Database Queries: Connect to databases for specific data retrieval
- API Integration: Access external APIs and services
- File Analysis: Read and analyze local files during research
The MCP integration allows research agents to incorporate local knowledge and external data sources into their research process, creating more comprehensive and context-aware reports.
Not all search APIs support additional configuration parameters. Here are the ones that do:
- Exa: `max_characters`, `num_results`, `include_domains`, `exclude_domains`, `subpages`
  - Note: `include_domains` and `exclude_domains` cannot be used together
  - Particularly useful when you need to narrow your research to specific trusted sources, ensure information accuracy, or when your research requires using specified domains (e.g., academic journals, government sites)
  - Provides AI-generated summaries tailored to your specific query, making it easier to extract relevant information from search results
- ArXiv: `load_max_docs`, `get_full_documents`, `load_all_available_meta`
- PubMed: `top_k_results`, `email`, `api_key`, `doc_content_chars_max`
- Linkup: `depth`
Example with Exa configuration:
thread = {"configurable": {"thread_id": str(uuid.uuid4()),
"search_api": "exa",
"search_api_config": {
"num_results": 5,
"include_domains": ["nature.com", "sciencedirect.com"]
},
# Other configuration...
}}
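A similar sketch for PubMed, built from the parameters listed above; the email and API key values are placeholders you would replace with your own:

```python
thread = {"configurable": {"thread_id": str(uuid.uuid4()),
                           "search_api": "pubmed",
                           "search_api_config": {
                               "top_k_results": 5,
                               "email": "you@example.com",       # placeholder contact email
                               "api_key": "YOUR_NCBI_API_KEY",   # placeholder; optional NCBI API key
                               "doc_content_chars_max": 4000,
                           },
                           # Other configuration...
                           }}
```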
(1) You can use models supported with the `init_chat_model()` API. See the full list of supported integrations here.
(2) The workflow planner and writer models need to support structured outputs: Check whether structured outputs are supported by the model you are using here.
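As a rough illustration of points (1) and (2), the snippet below uses `init_chat_model()` and the standard `with_structured_output()` call; the provider/model names are only examples, and the schema is an illustrative stand-in rather than the project's actual report schema:

```python
from pydantic import BaseModel, Field
from langchain.chat_models import init_chat_model

# (1) Select any supported provider/model pair, either as separate arguments
#     or as a single "provider:model" string.
planner = init_chat_model(model="claude-3-7-sonnet-latest", model_provider="anthropic")
writer = init_chat_model("anthropic:claude-3-5-sonnet-latest")

# (2) Planner and writer models must handle structured output calls like this one.
class SectionOutline(BaseModel):  # illustrative schema, not the project's actual one
    name: str = Field(description="Title of the report section")
    description: str = Field(description="What the section should cover")

structured_planner = planner.with_structured_output(SectionOutline)
```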
(3) The agent models need to support tool calling: ensure tool calling is well supported; tests have been done with Claude 3.7, o3, o3-mini, and GPT-4.1. See here.
(4) With Groq, there are token per minute (TPM) limits if you are on the on_demand service tier:
- The on_demand service tier has a limit of 6000 TPM
- You will want a paid plan for section writing with Groq models
(5) deepseek-R1 is not strong at function calling, which the assistant uses to generate structured outputs for report sections and report section grading. See example traces here.
- Consider providers that are strong at function calling such as OpenAI, Anthropic, and certain OSS models like Groq's llama-3.3-70b-versatile.
- If you see the following error, it is likely due to the model not being able to produce structured outputs (see trace):
groq.APIError: Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.
(6) Follow #75 (comment) to use with OpenRouter.
(7) For working with local models via Ollama, see here.
Open Deep Research includes two comprehensive evaluation systems to assess report quality and performance:
A developer-friendly testing framework that provides immediate feedback during development and testing cycles.
- Rich Console Output: Formatted tables, progress indicators, and color-coded results
- Binary Pass/Fail Testing: Clear success/failure criteria for CI/CD integration
- LangSmith Integration: Automatic experiment tracking and logging
- Flexible Configuration: Extensive CLI options for different testing scenarios
- Real-time Feedback: Live output during test execution
The system evaluates reports against 9 comprehensive quality dimensions:
- Topic relevance (overall and section-level)
- Structure and logical flow
- Introduction and conclusion quality
- Proper use of structural elements (headers, citations)
- Markdown formatting compliance
- Citation quality and source attribution
- Overall research depth and accuracy
# Run all agents with default settings
python tests/run_test.py --all
# Test specific agent with custom models
python tests/run_test.py --agent multi_agent \
--supervisor-model "anthropic:claude-3-7-sonnet-latest" \
--search-api tavily
# Test with OpenAI o3 models
python tests/run_test.py --all \
--supervisor-model "openai:o3" \
--researcher-model "openai:o3" \
--planner-provider "openai" \
--planner-model "o3" \
--writer-provider "openai" \
--writer-model "o3" \
--eval-model "openai:o3" \
--search-api "tavily"
- tests/run_test.py: Main test runner with rich CLI interface
- tests/test_report_quality.py: Core test implementation
- tests/conftest.py: Pytest configuration and CLI options
A comprehensive batch evaluation system designed for detailed analysis and comparative studies.
- Multi-dimensional Scoring: Four specialized evaluators with 1-5 scale ratings
- Weighted Criteria: Detailed scoring with customizable weights for different quality aspects
- Dataset-driven Evaluation: Batch processing across multiple test cases
- Performance Optimization: Caching with extended TTL for evaluator prompts
- Professional Reporting: Structured analysis with improvement recommendations
- Overall Quality (7 weighted criteria):
  - Research depth and source quality (20%)
  - Analytical rigor and critical thinking (15%)
  - Structure and organization (20%)
  - Practical value and actionability (10%)
  - Balance and objectivity (15%)
  - Writing quality and clarity (10%)
  - Professional presentation (10%)
- Relevance: Section-by-section topic relevance analysis with strict criteria
- Structure: Assessment of logical flow, formatting, and citation practices
- Groundedness: Evaluation of alignment with retrieved context and sources
# Run comprehensive evaluation on LangSmith datasets
python tests/evals/run_evaluate.py
- tests/evals/run_evaluate.py: Main evaluation script
- tests/evals/evaluators.py: Four specialized evaluator functions
- tests/evals/prompts.py: Detailed evaluation prompts for each dimension
- tests/evals/target.py: Report generation workflows
Use Pytest System for:
- Development and debugging cycles
- CI/CD pipeline integration
- Quick model comparison experiments
- Interactive testing with immediate feedback
- Gate-keeping before production deployments
Use LangSmith System for:
- Comprehensive model evaluation across datasets
- Research and analysis of system performance
- Detailed performance profiling and benchmarking
- Comparative studies between different configurations
- Production monitoring and quality assurance
Both evaluation systems complement each other and provide comprehensive coverage for different use cases and development stages.
Follow the quickstart to start the LangGraph server locally.
You can easily deploy to LangGraph Platform.