---
title: Deploy an MCP server on a Raspberry Pi 5 and interact with it using an AI agent

draft: true
cascade:
    draft: true

minutes_to_complete: 30

who_is_this_for: This Learning Path targets LLM and IoT developers who are familiar with Large Language Model (LLM) concepts and networking. You will learn how to deploy a lightweight Model Context Protocol (MCP) server on a Raspberry Pi 5 and interact with it via the OpenAI-Agent SDK.

learning_objectives:
- Deploy a lightweight Model Context Protocol (MCP) server on Raspberry Pi 5
- Design and register custom tools for the AI Agent
- Create custom endpoints
- Learn about uv — a fast, efficient Python package manager

prerequisites:
- A [Raspberry Pi 5](https://www.raspberrypi.com/products/raspberry-pi-5/)
- Basic understanding of Python and prompt engineering
- Understanding of LLM and AI Agent fundamentals

tools_software_languages:

operatingsystems:
- Linux
### Cross-platform metadata only
shared_path: true
shared_between:
- iot
- embedded-and-microcontrollers

further_reading:
- resource:
---
Think of it as USB-C for AI: once a tool or data source speaks MCP, any compliant agent can connect to it.
### Why use MCP?
- **Plug-and-play integrations:** A growing catalog of pre-built MCP servers (filesystem, shell, vector stores, web-scraping, etc.) gives your agent instant super-powers with zero custom glue code.

- **Model/vendor agnostic:** Because the protocol lives outside the model, you can swap models like GPT-4, Claude, or your own fine-tuned model without touching the integration layer.

- **Security by design:** MCP encourages running servers inside your own infrastructure, so sensitive data never leaves the perimeter unless you choose.

- **Cross-ecosystem momentum:** Recent roll-outs—from an official C# SDK to Wix’s production MCP server and Microsoft’s Azure support—show the MCP spec is gathering real-world traction.

### High-level architecture
![mcp server](./mcp.png)
Learn more about AI Agents in the [AI Agent on CPU learning path](https://learn.arm.com/learning-paths/servers-and-cloud-computing/ai-agent-on-cpu/).
{{% /notice %}}

---
title: Build & Run an AI Agent on your development machine
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

In this section you will learn how to set up an AI Agent on your development machine and connect it to the MCP server running on your Raspberry Pi 5.

These commands were tested on a Linux Arm development machine.

### Create an AI Agent and point it at your Pi's MCP Server
1. Install `uv` on your development machine:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
2. Create a directory for the Agent:
```bash
# create & enter folder
mkdir mcp-agent && cd mcp-agent
```
3. Set up the directory to use `uv`:
```bash
uv init
```

This command adds:
- .venv/ (auto-created virtual environment)
- pyproject.toml (project metadata & dependencies)
- .python-version (pinned interpreter)
- README.md, .gitignore, and a sample main.py

4. Install **OpenAI Agents SDK** + **dotenv**
```bash
uv add openai-agents python-dotenv
```
5. Create a `.env` file with your OpenAI key:
```bash
echo -n "OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>" > .env
```

### Write the Python script for the Agent Client

Use a file editor of your choice and replace the content of the sample `main.py` with the content shown below:

```python
import asyncio, os
from dotenv import load_dotenv
# ...
from agents import Agent, Runner, set_default_openai_key
from agents.mcp import MCPServerSse
from agents.model_settings import ModelSettings

async def run(mcp_server: list[MCPServerSse]):
    set_default_openai_key(os.getenv("OPENAI_API_KEY"))

    agent = Agent(
        # ...
    )
    # ...

if __name__ == "__main__":
    # ...
```
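The excerpt above elides the middle of the script. The key connection point is where the client attaches to your Pi's MCP server. Here is a minimal sketch, assuming FastMCP's default `/sse` path and a placeholder ngrok hostname:

```python
# Minimal sketch: the URL is a placeholder. Use the HTTPS address that ngrok
# prints in the next section, with FastMCP's default /sse path appended.
async def main():
    async with MCPServerSse(
        name="Raspberry Pi MCP",
        params={"url": "https://abcd1234.ngrok-free.app/sse"},
    ) as server:
        await run([server])
```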

### Execute the Agent

You are now ready to run the agent and test it against your running MCP server.

Run the `main.py` Python script:
```bash
uv run main.py
```
The output should look like:
```output
Running: What is the CPU temperature?
Response: The current CPU temperature is 48.8°C.
```

---
weight: 3
layout: learningpathall
---

## Set up an MCP Server on Raspberry Pi 5

In this section you will learn how to:

1. Install uv (the Rust-powered Python package manager)
2. Bootstrap a simple MCP server on your Raspberry Pi 5 that reads the CPU temperature and fetches weather data
3. Expose the MCP server to the internet with **ngrok**

You will run all the commands shown below on your Raspberry Pi 5 running Raspberry Pi OS (64-bit).

#### 1. Install uv
In a terminal on your Raspberry Pi, install `uv`:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

**uv** is a next-generation, Rust-based package manager that unifies pip, virtualenv, Poetry, and more—offering 10×–100× faster installs, built-in virtual environment handling, robust lockfiles, and full compatibility with the Python ecosystem.
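Day-to-day dependency management with uv looks like this:

```bash
# Add packages (updates both pyproject.toml and the uv.lock lockfile)
uv add requests numpy

# Remove a package and its unused sub-dependencies
uv remove numpy

# Install from an existing requirements.txt (for example, when migrating)
uv pip install -r requirements.txt
```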

{{% notice Note %}}
After the script finishes, restart your terminal so that the uv command is on your PATH.
{{% /notice %}}
#### 2. Initialize the Project
1. Create and enter your project folder:
```bash
mkdir mcp
cd mcp
```
2. Initialize with `uv`:
```bash
uv init
```
This command adds:
- .venv/ (auto-created virtual environment)
- pyproject.toml (project metadata & dependencies)
- .python-version (pinned interpreter)
- README.md, .gitignore, and a sample main.py

3. Install the dependencies:
```bash
uv pip install fastmcp==2.2.10
uv add requests
```

#### 3. Build your MCP Server
1. Create a Python file for your MCP server named `server.py`:
```bash
touch server.py
```
2. Edit `server.py` with the following contents:
2. Use a file editor of your choice and copy the following content into `server.py`:
```python
import subprocess, re
from mcp.server.fastmcp import FastMCP
# ...

if __name__ == "__main__":
    # ...
```
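The elided middle of `server.py` creates the FastMCP application and registers the tools. As a reference, here is a minimal sketch of a CPU-temperature tool; the server name, tool name, and regex are illustrative assumptions, while `FastMCP`, the `@mcp.tool()` decorator, and the Pi's `vcgencmd` utility are standard:

```python
import subprocess, re
from mcp.server.fastmcp import FastMCP

# Illustrative sketch, not the Learning Path's full script
mcp = FastMCP("raspberry-pi-tools")

@mcp.tool()
def cpu_temperature() -> str:
    """Read the Raspberry Pi's CPU temperature."""
    out = subprocess.run(["vcgencmd", "measure_temp"],
                         capture_output=True, text=True).stdout
    match = re.search(r"temp=([\d.]+)", out)  # vcgencmd prints: temp=48.8'C
    return f"{match.group(1)}°C" if match else out.strip()

if __name__ == "__main__":
    mcp.run(transport="sse")  # serves over SSE on port 8000 by default
```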

#### 4. Run the MCP Server

Run the Python script to start the MCP server:

```bash
uv run server.py
```
By default, FastMCP will listen on port 8000 and serve your tools via Server-Sent Events (SSE).

The output should look like:

```output
INFO: Started server process [2666]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

#### 5. Install & Configure ngrok

You will now use ngrok to expose your locally running MCP server to the public internet over HTTPS.

1. Add ngrok’s repo to the apt package manager and install:
```bash
curl -sSL https://ngrok-agent.s3.amazonaws.com/ngrok.asc \
| sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null \
  && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" \
  | sudo tee /etc/apt/sources.list.d/ngrok.list \
&& sudo apt update \
&& sudo apt install ngrok
```
The ngrok agent authenticates with an authtoken, which is available on the [ngrok dashboard](https://dashboard.ngrok.com/get-started/your-authtoken).

2. Authenticate your account:
```bash
ngrok config add-authtoken <YOUR_NGROK_AUTHTOKEN>
```
Replace `YOUR_NGROK_AUTHTOKEN` with your token from the ngrok dashboard.

3. Expose port 8000:
```bash
ngrok http 8000
```
4. Copy the generated HTTPS URL (e.g. `https://abcd1234.ngrok-free.app`)—you’ll use this as your MCP endpoint.
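To sanity-check the tunnel end to end, probe the SSE endpoint from any machine. The hostname below is a placeholder for your own ngrok URL, and `/sse` is FastMCP's default path:

```bash
# Expect an HTTP 200 response with a text/event-stream content type
# (the stream stays open; press Ctrl+C to stop)
curl -N -i https://abcd1234.ngrok-free.app/sse
```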