Azure-Samples/mcp-agent-langchainjs

Serverless AI agent using LangChain.js and Model Context Protocol (MCP) integration to order burgers from a burger restaurant

AI Agent with MCP tools using LangChain.js


⭐ If you like this sample, star it on GitHub. It helps a lot!

Overview • Architecture • Getting started • Deploy to Azure • Run locally • MCP tools • Resources

Animation showing the agent in action

Overview

This project demonstrates how to build AI agents that can interact with real-world APIs using the Model Context Protocol (MCP). It features a complete burger ordering system with a serverless API, web interfaces, and an MCP server that enables AI agents to browse menus, place orders, and track order status. The agent uses LangChain.js to handle LLM reasoning and tool calling. The system consists of multiple interconnected services, as detailed in the Architecture section below.

The system is hosted on Azure Static Web Apps (web apps) and Azure Functions (API and MCP servers), with Azure Cosmos DB for NoSQL for data storage. You can use it as a starting point for building your own AI agents.

Key features

  • LangChain.js agent with tool calling via MCP (Streamable HTTP transport)
  • Multi-service, end-to-end architecture (web UIs, APIs, MCP server)
  • User authentication with session history
  • 100% serverless architecture for cost-effective scaling
  • Single-command deployment using Infrastructure as Code (IaC)

Architecture

Building AI applications can be complex and time-consuming, but LangChain.js and Azure serverless technologies greatly simplify the process. This application is an AI agent that can be accessed through different interfaces (web app, CLI) and that can call tools through MCP to interact with a burger ordering API.

Architecture diagram

The application is made from these main components:

| Component | Folder | Purpose |
|-----------|--------|---------|
| Agent Web App | packages/agent-webapp | Chat interface + conversation rendering |
| Agent API | packages/agent-api | LangChain.js agent + chat state + MCP client |
| Burger API | packages/burger-api | Core burger & order management web API |
| Burger MCP Server | packages/burger-mcp | Exposes burger API as MCP tools |
| Infrastructure | infra | Bicep templates (IaC) |
| Burger Web App | packages/burger-webapp | Live orders visualization |
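The Burger MCP Server's role is essentially to map MCP tool names onto burger API endpoints. The sketch below illustrates that mapping with an in-memory stub in place of the real API calls; the handler signatures and sample data are assumptions for illustration, not the project's actual code:

```typescript
// Illustrative tool registry: maps MCP tool names to handlers.
// In the real server, each handler would forward to a burger-api route.
type ToolHandler = (args: Record<string, unknown>) => unknown;

const burgers = [
  { id: 1, name: "Classic Cheeseburger" },
  { id: 2, name: "Spicy Jalapeno Burger" },
]; // stand-in for GET /burgers responses (data made up for this sketch)

const tools: Record<string, ToolHandler> = {
  get_burgers: () => burgers,
  get_burger_by_id: (args) =>
    burgers.find((b) => b.id === (args.id as number)) ?? null,
};

function callTool(name: string, args: Record<string, unknown>): unknown {
  const handler = tools[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```

In the actual server, handlers like these are registered with the MCP SDK so they become discoverable via `tools/list` and invocable via `tools/call`.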

Additionally, these support components are included:

| Component | Folder | Purpose |
|-----------|--------|---------|
| Agent CLI | packages/agent-cli | Command-line interface for the LangChain.js agent and MCP client |
| Data generation | packages/burger-data | Scripts to (re)generate burger data & images |

Getting started

There are multiple ways to get started with this project. The quickest way is to use GitHub Codespaces, which provides a preconfigured environment. Alternatively, you can set up your local environment by following the instructions below.

Use GitHub Codespaces

You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code:

Open in GitHub Codespaces

Use a VS Code dev container

A similar option to Codespaces is VS Code Dev Containers, which opens the project in your local VS Code instance using the Dev Containers extension.

You will also need to have Docker installed on your machine to run the container.

Open in Dev Containers

Use your local environment

You need to install the following tools to work on your local machine:

  • Node.js LTS
  • Azure Developer CLI 1.19+
  • Git
  • PowerShell 7+ (for Windows users only)
    • Important: Ensure you can run pwsh.exe from a PowerShell command. If this fails, you likely need to upgrade PowerShell.
    • Instead of PowerShell, you can also use Git Bash or WSL to run the Azure Developer CLI commands.

Then you can get the project code:

  1. Fork the project to create your own copy of this repository.

  2. On your forked repository, select the Code button, then the Local tab, and copy the URL of your forked repository.

    Screenshot showing how to copy the repository URL

  3. Open a terminal and run this command to clone the repo: git clone <your-repo-url>

Deploy to Azure

Prerequisites

Deploy with Azure Developer CLI

  1. Open a terminal and navigate to the root of the project
  2. Authenticate with Azure by running azd auth login
  3. Run azd up to deploy the application to Azure. This will provision Azure resources and deploy all services
    • You will be prompted to select a base location for the resources
    • The deployment process will take a few minutes

Once deployment is complete, you'll see the URLs of all deployed services in the terminal.

Cost estimation

Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can use the Azure pricing calculator with pre-configured estimations to get an idea of the costs: Azure Pricing Calculator.

Clean up resources

To clean up all the Azure resources created by this sample:

azd down --purge

Run locally

After setting up your environment and provisioning the Azure resources, you can run the entire application locally:

# Install dependencies for all services
npm install

# Start all services locally
npm start

Starting the different services may take some time. Wait until you see the following message in the terminal: 🚀 All services ready 🚀

This will start all the services (web apps, APIs, and the MCP server).

Note

When running locally without having deployed the application, the servers will use in-memory storage, so any data will be lost when you stop the servers. After a successful deployment, the servers will use Azure Cosmos DB for persistent storage.
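The fallback this note describes can be pictured as a storage interface with two implementations, selected based on whether Cosmos DB is configured. The following is only an illustrative sketch; the interface, class, and environment variable names are assumptions, not the project's actual storage code:

```typescript
// Illustrative storage fallback: in-memory when no Cosmos DB config exists.
interface OrderStore {
  save(id: string, order: unknown): void;
  get(id: string): unknown | undefined;
}

class InMemoryOrderStore implements OrderStore {
  private orders = new Map<string, unknown>();
  save(id: string, order: unknown) { this.orders.set(id, order); }
  get(id: string) { return this.orders.get(id); }
}

// Hypothetical factory: a real implementation would return a Cosmos-backed
// store when a connection setting (name assumed here) is present.
function createOrderStore(env: Record<string, string | undefined>): OrderStore {
  if (env.AZURE_COSMOSDB_ENDPOINT) {
    throw new Error("Cosmos-backed store omitted from this sketch");
  }
  return new InMemoryOrderStore();
}
```

Data in the in-memory store disappears on restart, which matches the behavior described in the note above.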

You can then open the Agent web app and ask things like:

  • What spicy burgers do you have?
  • Order two Classic Cheeseburgers with extra bacon.
  • Show my recent orders

The agent will decide which MCP tool(s) to call, then come up with a response.
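Under the hood, this follows the standard tool-calling loop: the model either answers directly or requests a tool call, the tool result is fed back, and the loop repeats. The hand-rolled sketch below, with a scripted fake "model", only illustrates the pattern; the project delegates all of this to LangChain.js:

```typescript
// Minimal tool-calling loop; a scripted fake model stands in for the LLM.
type ModelStep =
  | { type: "tool_call"; name: string; args: Record<string, unknown> }
  | { type: "answer"; text: string };

// Fake model: asks for the menu first, then answers once it has a tool result.
const fakeModel = (history: string[]): ModelStep =>
  history.some((m) => m.startsWith("tool:"))
    ? { type: "answer", text: "We have the Spicy Jalapeno Burger." }
    : { type: "tool_call", name: "get_burgers", args: {} };

// Fake tool execution (a real agent would call the MCP server here).
const runTool = (name: string, _args: Record<string, unknown>): string =>
  name === "get_burgers" ? JSON.stringify(["Spicy Jalapeno Burger"]) : "[]";

function runAgent(question: string): string {
  const history = [`user:${question}`];
  for (let i = 0; i < 5; i++) { // cap iterations to avoid infinite loops
    const step = fakeModel(history);
    if (step.type === "answer") return step.text;
    history.push(`tool:${runTool(step.name, step.args)}`);
  }
  return "No answer";
}
```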

Available scripts

This project uses npm workspaces to manage multiple packages in a single repository. You can run scripts from the root folder that will apply to all packages, or you can run scripts for individual packages as indicated in their respective README files.
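For reference, a root package.json for such a workspace setup might look like the following; the exact fields and scripts shown here are assumptions based on the folder layout, not the project's actual manifest:

```json
{
  "name": "mcp-agent-langchainjs",
  "private": true,
  "workspaces": ["packages/*"],
  "scripts": {
    "build": "npm run build --workspaces --if-present",
    "lint": "npm run lint --workspaces --if-present"
  }
}
```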

Common scripts (run from repo root):

| Action | Command |
|--------|---------|
| Start everything | npm start |
| Build all | npm run build |
| Lint | npm run lint |
| Fix lint | npm run lint:fix |
| Format | npm run format |

MCP tools

The Burger MCP server provides these tools for AI agents:

| Tool Name | Description |
|-----------|-------------|
| get_burgers | Get a list of all burgers in the menu |
| get_burger_by_id | Get a specific burger by its ID |
| get_toppings | Get a list of all toppings in the menu |
| get_topping_by_id | Get a specific topping by its ID |
| get_topping_categories | Get a list of all topping categories |
| get_orders | Get a list of all orders in the system |
| get_order_by_id | Get a specific order by its ID |
| place_order | Place a new order with burgers (requires userId, optional nickname) |
| delete_order_by_id | Cancel an order if it has not yet been started (status must be pending, requires userId) |
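Over the wire, each of these tools is invoked with a JSON-RPC 2.0 `tools/call` request, as defined by the MCP specification. Below is a sketch of such a payload for `place_order`; the argument shape is a guess based on the table above, not the server's actual schema:

```typescript
// Build a JSON-RPC 2.0 "tools/call" request per the MCP spec.
function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Hypothetical arguments for place_order; only userId being required
// is stated in the table above, the rest is assumed.
const request = buildToolCall(1, "place_order", {
  userId: "user-123",
  nickname: "Chris",
  items: [{ burgerId: 1, quantity: 2 }],
});
```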

Testing the MCP Server

Using the MCP Inspector

You can test the MCP server using the MCP Inspector:

  1. Install and start MCP Inspector:

    npx -y @modelcontextprotocol/inspector
  2. In your browser, open the MCP Inspector (the URL will be shown in the terminal)

  3. Configure the connection:

    • Transport: Streamable HTTP or SSE
    • URL: http://localhost:3000/mcp (for Streamable HTTP) or http://localhost:3000/sse (for legacy SSE)
  4. Click Connect and explore the available tools
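When the Inspector connects over Streamable HTTP, the first thing it POSTs to the `/mcp` endpoint is an `initialize` JSON-RPC request. A sketch of that first message follows; the protocol version string and client info are assumptions, since the actual values depend on your Inspector and SDK versions:

```typescript
// First message of an MCP session: the "initialize" request (JSON-RPC 2.0).
function buildInitialize(id: number) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26", // assumed; client and server negotiate this
      capabilities: {},
      clientInfo: { name: "mcp-inspector-sketch", version: "0.0.1" },
    },
  };
}
```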

Using GitHub Copilot

To use the MCP server in local mode with GitHub Copilot, create a local .vscode/mcp.json configuration file in your project root:

{
  "servers": {
    "burger-mcp": {
      "type": "stdio",
      "command": "npm",
      "args": ["run", "start:local", "--workspace=burger-mcp"]
    }
  }
}

If you open that file in VS Code, it will detect the MCP server configuration and offer to start the server directly from the editor.

Then, you can use GitHub Copilot in agent mode to interact with the MCP server. For example, you can ask questions like "What burgers are available?" or "Place an order for a vegan burger" and Copilot will use the MCP server to provide answers or perform actions.

Tip

Copilot models can behave differently regarding tool usage, so if you don't see it calling the burger-mcp tools, you can explicitly mention the Burger MCP server by adding #burger-mcp in your prompt.

Resources

Here are some resources to learn more about the technologies used in this project:

You can also find more Azure AI samples here.

Troubleshooting

If you encounter issues while running or deploying this sample:

  1. Dependencies: Ensure all required tools are installed and up to date
  2. Ports: Make sure required ports (3000, 4280, 5173, 5174, 7071, 7072) are not in use
  3. Azure Developer CLI: Verify you're authenticated with azd auth login
  4. Node.js version: Ensure you're using Node.js 22 or higher

For more detailed troubleshooting, check the individual README files in each service directory.

Built for AI Agents

This project has been optimized for use with AI agents like GitHub Copilot. This includes:

  • Built-in context engineering provided with AGENTS.md files to help AI agents understand and extend the codebase.
  • Reusable prompts for common tasks.
  • Custom instructions tailored for each service of the project.
  • Custom Codebase Explorer chat mode for Copilot, to help you explore and understand the codebase.

To learn how to set up and use GitHub Copilot with this repository, check out the docs/copilot.md guide.

Getting Help

If you get stuck or have any questions about building AI apps, join:

Azure AI Foundry Discord

If you have product feedback or run into errors while building, visit:

Azure AI Foundry Developer Forum
