LCORE-226: How to register MCP servers with LCORE #390
@@ -87,6 +87,7 @@ Llama Stack Client. It is a library available for Python, Swift, Node.js or
Kotlin, which "wraps" the REST API stack in a suitable way, which is easier for
many applications.



@@ -114,8 +115,45 @@ user_data_collection:
    transcripts_storage: "/tmp/data/transcripts"
```
### MCP Server and Tool Configuration

**Note**: The `run.yaml` configuration is currently an implementation detail. In the future, all configuration will be available directly from the lightspeed-core config.

#### Configuring MCP Servers

MCP (Model Context Protocol) servers provide tools and capabilities to the AI agents. These are configured in the `mcp_servers` section of your `lightspeed-stack.yaml`:

```yaml
mcp_servers:
  - name: "filesystem-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3000"
  - name: "git-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3001"
  - name: "database-tools"
    provider_id: "model-context-protocol"
    url: "http://localhost:3002"
```
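As discussed in the review thread below, each server's `name` is used as the llama-stack `toolgroup_id` and must therefore be unique across all providers. A minimal sketch of that constraint (the `McpServer` model and `check_unique_names` helper are hypothetical, not part of the codebase):

```python
from dataclasses import dataclass


@dataclass
class McpServer:
    """One entry from the mcp_servers section (hypothetical model)."""

    name: str         # used as the llama-stack toolgroup_id; must be unique
    provider_id: str  # must match a provider id defined under tool_runtime
    url: str


def check_unique_names(servers: list[McpServer]) -> None:
    """Reject configs where two servers share a name, since the name
    becomes the toolgroup_id regardless of provider_id."""
    seen: set[str] = set()
    for server in servers:
        if server.name in seen:
            raise ValueError(f"duplicate MCP server name: {server.name!r}")
        seen.add(server.name)


servers = [
    McpServer("filesystem-tools", "model-context-protocol", "http://localhost:3000"),
    McpServer("git-tools", "model-context-protocol", "http://localhost:3001"),
    McpServer("database-tools", "model-context-protocol", "http://localhost:3002"),
]
check_unique_names(servers)  # all three names are distinct, so this passes
```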
**Important**: Only MCP servers defined in the `lightspeed-stack.yaml` configuration are available to the agents. Tools configured in the llama-stack `run.yaml` are not accessible to lightspeed-core agents.

#### Configuring MCP Headers

MCP headers allow you to pass authentication tokens, API keys, or other metadata to MCP servers. These are configured **per request** via the `MCP-HEADERS` HTTP header:

```bash
curl -X POST "http://localhost:8080/v1/query" \
  -H "Content-Type: application/json" \
  -H "MCP-HEADERS: {\"filesystem-tools\": {\"Authorization\": \"Bearer token123\"}}" \
> **Contributor**: Should

> **Contributor**: @ldjebran I believe we use upper case from Ansible Lightspeed (as, IIRC, the header was contributed by us). Changing case could break us? Unless other code in

> **Contributor**: @manstis we are already using uppercase when sending the header, just as in the curl command above. I do not think this should affect anyone, as long as the headers can be converted to lowercase or matched case-insensitively. Forcing clients to send headers in lowercase would definitely break us.

> **Contributor**: curl actually sends
  -d '{"query": "List files in /tmp"}'
```
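The same request can be issued from Python. The sketch below only builds the request so the header shape is visible; it assumes the endpoint and `MCP-HEADERS` format shown in the curl example above (note that HTTP header names are case-insensitive on the wire, which is the point debated in the review comments):

```python
import json
import urllib.request


def build_query_request(query: str, mcp_headers: dict) -> urllib.request.Request:
    """Build a POST /v1/query request carrying per-server MCP headers.

    `mcp_headers` maps an MCP server name (as configured in
    lightspeed-stack.yaml) to the HTTP headers to forward to that server.
    """
    return urllib.request.Request(
        "http://localhost:8080/v1/query",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The whole mapping is JSON-encoded into a single header value.
            "MCP-HEADERS": json.dumps(mcp_headers),
        },
        method="POST",
    )


req = build_query_request(
    "List files in /tmp",
    {"filesystem-tools": {"Authorization": "Bearer token123"}},
)
# Sending would be urllib.request.urlopen(req); here we only inspect
# the header that would go on the wire (urllib normalizes its case).
print(req.get_header("Mcp-headers"))
```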
### Llama Stack project and configuration

**Note**: The `run.yaml` configuration is currently an implementation detail. In the future, all configuration will be available directly from the lightspeed-core config.

To run Llama Stack in a separate process, you need to have all dependencies installed. The easiest way to do this is to create a separate repository with a Llama Stack project file `pyproject.toml` and a Llama Stack configuration file `run.yaml`. The project file might look like:

```toml
> **Contributor**: Is the `provider_id: "model-context-protocol"` necessary? As it is a llama-stack "thing", can it be added automatically?

> **Contributor**: But I wonder if you can have two MCPs with the same name and different providers? If this is supported by llama-stack, I fear we will need to expose it as well?

> **Contributor**: This is a bit complicated because the `provider_id` needs to match an id defined in `tool_runtime`. By default we just have the following defined in the `run.yml`. The providers supported today are here. But `name` is the `toolgroup_id` today, which must be unique no matter the provider id (see: https://github.com/lightspeed-core/lightspeed-stack/blob/main/src/utils/common.py#L56). If we want to allow the same name with a different provider, we could generate a toolgroup_id or something, but that sounds messy.

> **Contributor**: There is another MCP provider in `lightspeed-providers`. This is why we asked for the provider to be a parameter too.