45 changes: 45 additions & 0 deletions docs/getting_started.md

### Configuring MCP Servers

LCS supports external tool calling through MCP servers. MCP (Model Context Protocol) is a standard for exposing external tools in a structured way so that AI agents can call them reliably. An MCP server hosts one or more tools and exposes them over a network endpoint. LCS routes the AI agent's tool calls to the appropriate MCP server and uses the tool output to generate more accurate responses.

Each MCP server advertises its tools along with structured metadata, including `name`, `description`, and `inputSchema`. LCS fetches this metadata automatically using the standard `tools/list` method, so the AI agent can evaluate user prompts and dynamically select the appropriate tool for a given request. For more details, see the [MCP documentation](https://modelcontextprotocol.io/docs/learn/architecture#how-this-works-in-ai-applications).
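For illustration, a `tools/list` response carrying this metadata might look like the following (the tool name and schema shown here are hypothetical, not tools shipped with LCS):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "list_files",
        "description": "List the entries of a directory.",
        "inputSchema": {
          "type": "object",
          "properties": { "path": { "type": "string" } },
          "required": ["path"]
        }
      }
    ]
  }
}
```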

The following step-by-step guide shows how to set up and integrate MCP servers into LCS:

#### Step 1: Run your MCP servers
MCP servers host one or more tools and expose them over a network endpoint. They can be run locally for development or hosted externally for production.
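For local experimentation, the sketch below stands up a minimal HTTP endpoint that answers the JSON-RPC `tools/list` method using only the Python standard library. It is a stand-in to illustrate the handshake LCS performs, not a production MCP server (a real deployment would use an MCP SDK); the tool metadata and port are illustrative.

```python
# Minimal stand-in for an MCP-style server, standard library only.
# It answers "tools/list" with one tool's metadata so you can see the
# shape LCS fetches; the tool name and schema are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TOOLS = [{
    "name": "list_files",
    "description": "List the entries of a directory.",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        if body.get("method") == "tools/list":
            reply = {"jsonrpc": "2.0", "id": body.get("id"),
                     "result": {"tools": TOOLS}}
        else:
            reply = {"jsonrpc": "2.0", "id": body.get("id"),
                     "error": {"code": -32601, "message": "method not found"}}
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port: int = 3000) -> None:
    """Serve the stub on localhost:<port> (blocks until interrupted)."""
    HTTPServer(("localhost", port), Handler).serve_forever()
```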

#### Step 2: Configure LCS to know about your MCP servers
MCP servers must be defined in the `mcp_servers` section of your `lightspeed-stack.yaml`.
Example (all MCP servers running locally):

```yaml
mcp_servers:
- name: "filesystem-tools"
provider_id: "model-context-protocol"
url: "http://localhost:3000"
- name: "git-tools"
provider_id: "model-context-protocol"
url: "http://localhost:3001"
- name: "database-tools"
provider_id: "model-context-protocol"
url: "http://localhost:3002"
```

**Important**: Only MCP servers defined in the `lightspeed-stack.yaml` configuration are available to the AI agents. Tools configured in the llama-stack `run.yaml` are not accessible to LCS agents.

#### Step 3: Pass authentication or metadata via MCP headers (optional)

Some MCP servers require authentication tokens, API keys, or other metadata. These can be passed **per request** using the `MCP-HEADERS` HTTP header. LCS will forward these headers when invoking the tool, allowing the MCP server to authenticate requests or receive additional context.
Example:

```bash
curl -X POST "http://localhost:8080/v1/query" \
-H "Content-Type: application/json" \
-H "MCP-HEADERS: {\"filesystem-tools\": {\"Authorization\": \"Bearer token123\"}}" \
-d '{"query": "List files in /tmp"}'
```

> **Note**: If your deployment logs request headers (for example, when access logging is enabled in LCS or by upstream proxy defaults), `MCP-HEADERS` may expose credentials in logs. In production, disable access logging, ensure header redaction, or have the MCP server read credentials from a secure store.

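The `MCP-HEADERS` value is a JSON object mapping each MCP server name to the extra HTTP headers to forward to it. A hedged sketch of how such a value can be parsed (the function name and validation here are illustrative, not the actual LCS implementation):

```python
# Illustrative parser for an MCP-HEADERS value: a JSON object mapping
# each MCP server name to the headers to forward to that server.
# Not the actual LCS implementation.
import json

def parse_mcp_headers(raw: str) -> dict[str, dict[str, str]]:
    """Parse an MCP-HEADERS value into {server_name: {header: value}}."""
    parsed = json.loads(raw)
    if not isinstance(parsed, dict):
        raise ValueError("MCP-HEADERS must be a JSON object")
    return {
        server: dict(headers)
        for server, headers in parsed.items()
        if isinstance(headers, dict)
    }
```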
#### Step 4: Verify connectivity
After starting the MCP servers and updating `lightspeed-stack.yaml`, test the integration by sending a prompt to the AI agent. LCS evaluates the prompt against the available tools' metadata, selects the appropriate tool, calls the corresponding MCP server, and uses the result to generate a more accurate response.