kunupat/langgraph-code-examples

LangGraph Code Examples

This repository contains the following projects, used for a small LangGraph + MCP example and other related examples.

Overview

  1. my-chat-ui — Next.js + Turbo monorepo UI for chat (frontend).

  2. my-chat-ui-deep-agent — a simple Next.js front end built on top of my-chat-ui that adds Deep Agent-specific UI components (reference: https://github.com/langchain-ai/langgraphjs/tree/main/examples/ui-react).

    However, use deep-agents-ui instead — a separately cloned project (not committed to this repo) that serves the same purpose as my-chat-ui-deep-agent but is maintained independently. It is ignored by Git via .gitignore.

  3. langgraph-mcp/mcp-server — FastMCP HTTP server exposing tools via MCP.

  4. langgraph-mcp/agents — contains the following agents:

    1. A simple LangGraph agent that loads tools from the MCP server and uses Ollama LLM.

    2. A simple DeepAgent implementation with a research sub-agent, hotel search sub-agent and a main agent that can call the research agent or hotel search agent as a tool. Both agents use the same MCP server for tools and Ollama for LLM. Reference: https://github.com/langchain-ai/deepagents/tree/main/examples/deep_research

    3. A DeepAgent with Skill implementation that builds on top of the DeepAgent by adding a skill that can be called by the main agent.

      Skills References:

Running The Simple ReAct Agent

Quick start (recommended run order)

  1. Start the Ollama model (local model server).
  2. Start the MCP server.
  3. Start the agent (it loads tools from the MCP server).
  4. Start the chat UI.

1. Ollama ministral-3:14b model

  • Prereq: Install Ollama (https://ollama.com). Replace ministral-3:14b with your desired model name if needed.

  • Pull the model locally:

     ollama pull ministral-3:14b
  • Run the model as a local service (starts the model so ChatOllama can connect):

     ollama run ministral-3:14b

2. MCP server (langgraph-mcp/mcp-server)

  • Purpose: exposes tools (e.g., weather, hotel search) over an MCP HTTP endpoint at http://localhost:8000/mcp.

  • Initialize and run:

     cd langgraph-mcp/mcp-server
     python -m venv .venv
     source .venv/bin/activate
     pip install -e .
     python -m langgraph_mcp_server.server

Notes:

  • The server uses uvicorn internally and will print the HTTP and MCP endpoint on startup.
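For orientation, here is a minimal sketch of what such a tool server looks like, assuming the fastmcp package; the tool name and server name are illustrative, not the repo's actual code.

```python
from fastmcp import FastMCP

mcp = FastMCP("example-tools")

@mcp.tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."

if __name__ == "__main__":
    # Serves a streamable-HTTP MCP endpoint at http://localhost:8000/mcp
    mcp.run(transport="http", host="localhost", port=8000)
```

Any agent that speaks MCP can then discover and call get_weather over that endpoint.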

3. Agent (langgraph-mcp/agents)

  • Purpose: a LangGraph ReAct agent that loads tools from the MCP server and queries the Ollama model.

  • Initialize and run:

     cd langgraph-mcp/agents
     python -m venv .venv
     source .venv/bin/activate
     pip install -e .
     # Run the agent (it builds the graph and connects to the MCP server and Ollama)
     langgraph dev
  • If the project provides a .env.example, copy it and set any secrets before running:

     cp .env.example .env
     # edit .env as needed
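The agent's core wiring can be sketched roughly as follows — a hedged sketch assuming the langchain-mcp-adapters, langgraph, and langchain-ollama packages, not the repo's exact module:

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent

async def main():
    # Fetch tool definitions from the running MCP server.
    client = MultiServerMCPClient(
        {"tools": {"url": "http://localhost:8000/mcp", "transport": "streamable_http"}}
    )
    tools = await client.get_tools()

    # ReAct loop: the model picks tools, observes results, and answers.
    model = ChatOllama(model="ministral-3:14b")
    agent = create_react_agent(model, tools)

    result = await agent.ainvoke(
        {"messages": [("user", "What's the weather in Paris?")]}
    )
    print(result["messages"][-1].content)

asyncio.run(main())
```

This is why the MCP server must be running first: get_tools() fails if the endpoint is unreachable.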

4. Chat UI (my-chat-ui)

  • Purpose: user-facing web app (Next.js / Turbo monorepo) that connects to the backend services.

  • Initialize and run (requires Node.js and pnpm):

     # from repository root
     cd my-chat-ui
     pnpm install
     pnpm dev
  • The dev script runs the monorepo dev servers (Turbo) used by the app.

Running The Example Deep Agent

Quick start (recommended run order)

  1. Pull and connect to a cloud Ollama model (an Ollama Cloud model such as nemotron-3-super:cloud or gpt-oss:120b-cloud is recommended over a local model server).
  2. Start the MCP server.
  3. Start the deep agent (it loads tools from the MCP server).
  4. Start the chat UI.

1. Ollama cloud model

  • Prereq:

    • Install Ollama (https://ollama.com).
    • Create an account on Ollama Cloud (https://signin.ollama.com).
    • Create an API key in the Ollama Cloud dashboard and set it in your .env file as OLLAMA_API_KEY=your_ollama_api_key_here.
  • Sign in to Ollama Cloud from your terminal (see https://docs.ollama.com/cloud#running-cloud-models for more information):

     ollama signin

    Authenticate by logging in with your credentials. This allows you to pull and run cloud-hosted models.

  • Pull the model locally:

     ollama pull nemotron-3-super:cloud
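Once signed in, cloud-hosted models are proxied through the local Ollama daemon, so agent code can address them like any local model. A rough sketch, assuming the langchain-ollama package:

```python
from langchain_ollama import ChatOllama

# Cloud-hosted models are addressed by their ":cloud" tag; after
# `ollama signin`, requests are proxied through the local Ollama daemon.
llm = ChatOllama(model="nemotron-3-super:cloud")
print(llm.invoke("Reply with one word: hello").content)
```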

2. Tavily Setup

  • Prereq: Create an account on Tavily (https://tavily.com) and get your API key. Set it in your .env file as TAVILY_API_KEY=your_tavily_api_key_here.

3. MCP server (langgraph-mcp/mcp-server)

  • Purpose: exposes tools (e.g., weather, hotel search) over an MCP HTTP endpoint at http://localhost:8000/mcp.

  • Initialize and run:

     cd langgraph-mcp/mcp-server
     python -m venv .venv
     source .venv/bin/activate
     pip install -e .
     python -m langgraph_mcp_server.server

Notes:

  • The server uses uvicorn internally and will print the HTTP and MCP endpoint on startup.

4. Deep Agent (langgraph-mcp/agents)

  • Purpose: a simple Deep Agent implementation with a research sub-agent, hotel search sub-agent, and a main agent that can call the research agent or hotel search agent as a tool. Both agents use the same MCP server for tools and Ollama for LLM.

  • Initialize and run:

     cd langgraph-mcp/agents
     python -m venv .venv
     source .venv/bin/activate
     pip install -e .
     # Run the agent (it builds the graph and connects to the MCP server and Ollama)
     langgraph dev
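The sub-agent layout described above can be sketched roughly like this, assuming the deepagents package; the prompts and sub-agent names are illustrative, not the repo's exact configuration:

```python
from deepagents import create_deep_agent
from langchain_ollama import ChatOllama

# Sub-agents are declared as dicts; the main agent can delegate to them
# as if they were tools.
research_subagent = {
    "name": "research-agent",
    "description": "Researches a topic in depth using web search tools.",
    "prompt": "You are a meticulous researcher. Answer with cited findings.",
}
hotel_subagent = {
    "name": "hotel-search-agent",
    "description": "Finds hotels matching the user's constraints.",
    "prompt": "You search for hotels and summarize the best options.",
}

agent = create_deep_agent(
    tools=[],  # in this repo the tools come from the MCP server instead
    instructions="You are the main agent. Delegate to sub-agents when useful.",
    model=ChatOllama(model="nemotron-3-super:cloud"),
    subagents=[research_subagent, hotel_subagent],
)
```

Both sub-agents share the main agent's model and tool set unless a sub-agent dict overrides them.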

5. Deep Agents UI

  • Uses the deep-agents-ui project instead of the my-chat-ui-deep-agent project in this repo. It is a separately cloned project that serves the same purpose but is maintained independently by the LangChain community. It is ignored by Git via .gitignore.

  • Reference: https://github.com/langchain-ai/deep-agents-ui

  • Initialize and run:

     git clone https://github.com/langchain-ai/deep-agents-ui.git
     cd deep-agents-ui
     yarn install
     yarn dev

Running Deep Agent with Skill

Follow the same steps as in Running The Example Deep Agent.

On the Deep Agent Chat UI, set deep_agent_with_skill as the agent endpoint instead of deep_agent to connect to the Deep Agent with Skill implementation.

Langfuse Configuration (Optional)

Prereq: If you want to run Langfuse locally, ensure Docker Desktop is installed and running on your machine.

  1. Steps to install & set up Langfuse locally can be found here: https://langfuse.com/self-hosting/deployment/docker-compose#get-started

  2. Once set up, start it by running docker compose up in the directory where you set up Langfuse. This will make the Langfuse dashboard available at http://localhost:3000.

  3. Sign up for an account on Langfuse and obtain your secret and public keys.

  4. Set Langfuse secret and public keys in your .env file (refer to .env.example file for key names and add your key values there) before running the agent.

  5. Refer to the agent and deep agent code for how to integrate Langfuse tracing into your agent. The code examples in this repo use a LangfuseCallbackHandler that sends traces to the local Langfuse instance.
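As a hedged sketch of the callback wiring (assuming the Langfuse Python SDK's v2-style import path; in SDK v3 the handler lives under langfuse.langchain instead), it looks roughly like this:

```python
import os

from langfuse.callback import CallbackHandler  # v2 SDK import path

# Keys come from the .env file described above.
handler = CallbackHandler(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host="http://localhost:3000",  # local docker-compose instance
)

# Pass the handler on invocation so every agent run is traced, e.g.:
# agent.invoke({"messages": [...]}, config={"callbacks": [handler]})
```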

Accessing Langfuse Dashboard

  • Once your docker compose up command is running for Langfuse, it will be available at http://localhost:3000.
  • Open http://localhost:3000 in your browser to access the Langfuse dashboard and view traces from your agent.
  • Sign up for an account, or log in if you already have one.

Configuring Ollama Cloud Models in Langfuse Playground

  1. Login to Langfuse
  2. Go to Settings -> New LLM Connection. Choose OpenAI as the LLM Adapter, set Provider Name to Ollama Cloud (or any name), API Key to your Ollama API key, and API Base URL to https://ollama.com/api/v1. Then open Show Advanced Settings, add nemotron-3-super:cloud (or your desired model) under Custom Models, and save the connection.

Integration summary

  • The UI sends user requests to the agent server (started with langgraph dev). The agent connects to the MCP server at http://localhost:8000/mcp to load tools and orchestrates the LLM (Ollama) to answer queries. The MCP server is independent and should be started before the agent so the agent can fetch tool definitions.

Troubleshooting

  • If the agent cannot load tools, verify the MCP server is running and reachable at http://localhost:8000/mcp.
  • If the agent fails to initialize the LLM, confirm Ollama is running and the model name matches the one configured (the agent code in this repo uses ministral-3:14b).
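A quick reachability check can distinguish the two failure modes. The sketch below uses only the Python standard library; note that the MCP endpoint may reject a plain GET with an HTTP error code (e.g. 405/406), but any HTTP response still proves the server process is up.

```python
import urllib.error
import urllib.request

def check_endpoint(url: str, timeout: float = 3.0) -> str:
    """Return 'reachable' if the URL answers at all, else 'unreachable'."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return "reachable"
    except urllib.error.HTTPError:
        # An HTTP error response still means the server is up and listening.
        return "reachable"
    except OSError:
        # Connection refused / timed out: no server at that address.
        return "unreachable"

print("MCP:", check_endpoint("http://localhost:8000/mcp"))
print("Ollama:", check_endpoint("http://localhost:11434"))  # Ollama's default port
```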

Contributing

  • Each subproject has its own README.md with more details.

That's it — start Ollama, then the MCP server, then the agent, then the UI.
