This repository contains the following projects, used for a small LangGraph + MCP example and other examples:

- `my-chat-ui` — Next.js + Turbo monorepo UI for chat (frontend).
- `my-chat-ui-deep-agent` — A simple Next.js front end built on top of `my-chat-ui` that adds Deep Agent specific UI components (reference: https://github.com/langchain-ai/langgraphjs/tree/main/examples/ui-react). Use `deep-agents-ui` (below) instead.
- `deep-agents-ui` — A separately cloned project (not committed to this repo) that serves the same purpose as `my-chat-ui-deep-agent` but is maintained independently. It is ignored by Git via `.gitignore`.
- `langgraph-mcp/mcp-server` — FastMCP HTTP server exposing tools via MCP.
- `langgraph-mcp/agents` — Contains the following agents:
  - A simple LangGraph agent that loads tools from the MCP server and uses an Ollama LLM.
  - A simple Deep Agent implementation with a research sub-agent, a hotel search sub-agent, and a main agent that can call either sub-agent as a tool. Both agents use the same MCP server for tools and Ollama for the LLM. Reference: https://github.com/langchain-ai/deepagents/tree/main/examples/deep_research
  - A Deep Agent with Skill implementation that builds on the Deep Agent by adding a skill the main agent can call.

Skills References:
## Running the Example Agent

- Start the Ollama model (local model server).
- Start the MCP server.
- Start the agent (it loads tools from the MCP server).
- Start the chat UI.
- Prereq: Install Ollama (https://ollama.com). Replace `ministral-3:14b` with your desired model name if needed.
- Pull the model locally:

  ```bash
  ollama pull ministral-3:14b
  ```

- Run the model as a local service (starts the model so `ChatOllama` can connect):

  ```bash
  ollama run ministral-3:14b
  ```
- Purpose: exposes tools (e.g., weather, hotel search) over an MCP HTTP endpoint at `http://localhost:8000/mcp`.
- Initialize and run:

  ```bash
  cd langgraph-mcp/mcp-server
  python -m venv .venv
  source .venv/bin/activate
  pip install -e .
  python -m langgraph_mcp_server.server
  ```

Notes:
- The server uses `uvicorn` internally and prints the HTTP and MCP endpoints on startup.
- Purpose: a LangGraph ReAct agent that loads tools from the MCP server and queries the Ollama model.
- Initialize and run:

  ```bash
  cd langgraph-mcp/agents
  python -m venv .venv
  source .venv/bin/activate
  pip install -e .
  # Run the agent module (it builds the graph and connects to the MCP server and model)
  langgraph dev
  ```

- If the project provides a `.env.example`, copy it and set any secrets before running:

  ```bash
  cp .env.example .env
  # edit .env as needed
  ```
- Purpose: user-facing web app (Next.js / Turbo monorepo) that connects to the backend services.
- Initialize and run (requires Node.js and `pnpm`):

  ```bash
  # from the repository root
  cd my-chat-ui
  pnpm install
  pnpm dev
  ```

- The `dev` script runs the monorepo dev servers (Turbo) used by the app.
## Running the Example Deep Agent

- Pull and connect to a cloud Ollama model (an Ollama Cloud model such as `nemotron-3-super:cloud` or `gpt-oss:120b-cloud` is recommended instead of a local model server).
- Start the MCP server.
- Start the deep agent (it loads tools from the MCP server).
- Start the chat UI.
- Prereqs:
  - Install Ollama (https://ollama.com).
  - Create an account on Ollama Cloud (https://signin.ollama.com).
  - Create an API key in the Ollama Cloud dashboard and set it in your `.env` file as `OLLAMA_API_KEY=your_ollama_api_key_here`.
- Sign in to Ollama Cloud from your terminal (see https://docs.ollama.com/cloud#running-cloud-models for more information):

  ```bash
  ollama signin
  ```

  Authenticate with your credentials. This allows you to pull and run cloud-hosted models.
- Pull the model locally:

  ```bash
  ollama pull nemotron-3-super:cloud
  ```
- Prereq: Create an account on Tavily (https://tavily.com) and get your API key. Set it in your `.env` file as `TAVILY_API_KEY=your_tavily_api_key_here`.
- Purpose: exposes tools (e.g., weather, hotel search) over an MCP HTTP endpoint at `http://localhost:8000/mcp`.
- Initialize and run:

  ```bash
  cd langgraph-mcp/mcp-server
  python -m venv .venv
  source .venv/bin/activate
  pip install -e .
  python -m langgraph_mcp_server.server
  ```

Notes:
- The server uses `uvicorn` internally and prints the HTTP and MCP endpoints on startup.
- Purpose: a simple Deep Agent implementation with a research sub-agent, a hotel search sub-agent, and a main agent that can call either sub-agent as a tool. Both agents use the same MCP server for tools and Ollama for the LLM.
- Initialize and run:

  ```bash
  cd langgraph-mcp/agents
  python -m venv .venv
  source .venv/bin/activate
  pip install -e .
  # Run the agent module (it builds the graph and connects to the MCP server and model)
  langgraph dev
  ```
- Uses the `deep-agents-ui` project instead of the `my-chat-ui-deep-agent` in this repo. It is a separately cloned project that serves the same purpose but is maintained independently by the LangChain community. It is ignored by Git via `.gitignore`.
- Initialize and run:

  ```bash
  git clone https://github.com/langchain-ai/deep-agents-ui.git
  cd deep-agents-ui
  yarn install
  yarn dev
  ```

For the Deep Agent with Skill implementation, follow the same steps as Running the Example Deep Agent. On the Deep Agent chat UI, set `deep_agent_with_skill` as the agent endpoint instead of `deep_agent` to connect to the Deep Agent with Skill implementation.
- Prereq: Ensure Docker Desktop is installed and running on your machine (if you want to run Langfuse locally).
- Steps to install and set up Langfuse locally: https://langfuse.com/self-hosting/deployment/docker-compose#get-started
- Once set up, start it by running `docker compose up` in the directory where you set up Langfuse. This makes the Langfuse dashboard available at `http://localhost:3000`.
- Sign up for a Langfuse account, set it up, and get your secret and public keys.
- Set the Langfuse secret and public keys in your `.env` file (refer to the `.env.example` file for the key names and add your key values there) before running the agent.
- Refer to the agent and deep agent code for how to integrate Langfuse tracing in your agent. The code examples in this repo use a `LangfuseCallbackHandler` that sends traces to the local Langfuse instance.
- More details on Langfuse integration with LangGraph and the LangGraph dev server: https://langfuse.com/guides/cookbook/integration_langgraph
- Once `docker compose up` is running for Langfuse, open http://localhost:3000 in your browser to access the Langfuse dashboard and view traces from your agent.
- Sign up for an account, or log in if you already have one.
- To connect Langfuse to Ollama Cloud, go to Settings -> New LLM Connection -> LLM Adapter -> choose `OpenAI` from the list -> Provider Name: `Ollama Cloud` (or any name) -> API Key: your Ollama API key -> API Base URL: `https://ollama.com/api/v1` -> Show Advanced Settings -> Custom Models: add model name `nemotron-3-super:cloud` (or your desired model), then save the connection.
- The UI sends user requests to your backend (typically the MCP server or an MCP-aware gateway). The agent connects to the MCP server at `http://localhost:8000/mcp` to load tools and orchestrates the LLM (Ollama) to answer queries. The MCP server is independent and should be started before the agent so the agent can fetch tool definitions.
- If the agent cannot load tools, verify the MCP server is running and reachable at `http://localhost:8000/mcp`.
- If the agent fails to initialize the LLM, confirm Ollama is running and the model name matches the one configured (the agent code in this repo uses `ministral-3:14b`).
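A quick stdlib-only check for the first case — whether anything is listening at the MCP endpoint (the URL is this repo's default; adjust it if you changed the port):

```python
import urllib.error
import urllib.request

MCP_URL = "http://localhost:8000/mcp"

def mcp_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if a server answers at the given URL (any HTTP status)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded with an error status, so it is up.
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("MCP server reachable:", mcp_reachable(MCP_URL))
```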
- Each subproject has its own `README.md` with more details.
That's it — start Ollama, then the MCP server, then the agent, then the UI.