diff --git a/CLAUDE.md b/CLAUDE.md index 9ac8a23..ecf8449 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,6 +1,6 @@ ## Project -The general purpose of mcp-optimizer is to develop a MCP server that acts as an intelligent intermediary between AI clients and multiple MCP servers. The MCP Optimizer server addresses the challenge of managing large numbers of MCP tools by providing semantic tool discovery, caching, and unified access through a single endpoint. +The general purpose of the ToolHive MCP Optimizer is to develop an MCP server that acts as an intelligent intermediary between AI clients and multiple MCP servers. The MCP Optimizer server addresses the challenge of managing large numbers of MCP tools by providing semantic tool discovery, caching, and unified access through a single endpoint. ## Technical considerations - Use uv as package manager. `uv add ` for adding a package. `uv add --dev` for development packages for linting and testing diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 75079db..bcd7218 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,9 +1,10 @@ # Contributing to `mcp-optimizer` -First off, thank you for taking the time to contribute to mcp-optimizer! :+1: :tada: mcp-optimizer -is released under the Apache 2.0 license. If you would like to contribute -something or want to hack on the code, this document should help you get -started. You can find some hints for starting development in mcp-optimizer's +First off, thank you for taking the time to contribute to the ToolHive MCP +Optimizer! :+1: :tada: MCP Optimizer is released under the Apache 2.0 license. +If you would like to contribute something or want to hack on the code, this +document should help you get started. You can find some hints for starting +development in mcp-optimizer's [README](https://github.com/StacklokLabs/mcp-optimizer/blob/main/README.md). ## Table of contents @@ -26,8 +27,8 @@ report unacceptable behavior to ## Reporting security vulnerabilities -If you think you have found a security vulnerability in mcp-optimizer please DO NOT -disclose it publicly until we've had a chance to fix it. Please don't report +If you think you have found a security vulnerability in mcp-optimizer, please DO +NOT disclose it publicly until we've had a chance to fix it. Please don't report security vulnerabilities using GitHub issues; instead, please follow this [process](https://github.com/StacklokLabs/mcp-optimizer/blob/main/SECURITY.md) @@ -52,18 +53,20 @@ are a great place to start! ### Pull request process - All commits must include a Signed-off-by trailer at the end of each commit -message to indicate that the contributor agrees to the Developer Certificate of -Origin. For additional details, check out the [DCO instructions](dco.md). + message to indicate that the contributor agrees to the Developer Certificate + of Origin. For additional details, check out the [DCO instructions](dco.md). - Create an issue outlining the fix or feature. -- Fork the mcp-optimizer repository to your own GitHub account and clone it locally. +- Fork the mcp-optimizer repository to your own GitHub account and clone it + locally. - Hack on your changes. - Correctly format your commit messages, see [Commit message guidelines](#commit-message-guidelines) below. - Open a PR by ensuring the title and its description reflect the content of the PR. - Ensure that CI passes, if it fails, fix the failures. -- Every pull request requires a review from the core mcp-optimizer team before merging. 
+- Every pull request requires a review from the core mcp-optimizer team before + merging. - Once approved, all of your commits will be squashed into a single commit with your PR title. diff --git a/README.md b/README.md index 237239f..ddd68b9 100644 --- a/README.md +++ b/README.md @@ -1,13 +1,16 @@ -# MCP-Optimizer +# ToolHive MCP Optimizer -An intelligent intermediary MCP server that provides semantic tool discovery, caching, and unified access to multiple MCP servers through a single endpoint. +An intelligent intermediary MCP server that provides semantic tool discovery, +caching, and unified access to multiple MCP servers through a single endpoint. ## Features -- **Semantic Tool Discovery**: Intelligently discover and route requests to appropriate MCP tools +- **Semantic Tool Discovery**: Intelligently discover and route requests to + appropriate MCP tools - **Unified Access**: Single endpoint to access multiple MCP servers - **Tool Management**: Manage large numbers of MCP tools seamlessly -- **Group Filtering**: Filter tool discovery by ToolHive groups for multi-environment support +- **Group Filtering**: Filter tool discovery by ToolHive groups for + multi-environment support ## Requirements for Development @@ -16,28 +19,35 @@ An intelligent intermediary MCP server that provides semantic tool discovery, ca ## Usage -MCP-Optimizer server is meant to be run with ToolHive. It will automatically discover the MCP workloads running in ToolHive and their tools. +MCP Optimizer is meant to be run with ToolHive. It will automatically discover +the MCP workloads running in ToolHive and their tools. ### Prerequisites -- [ToolHive UI](https://docs.stacklok.com/toolhive/tutorials/quickstart-ui#step-1-install-the-toolhive-ui) (version >= 0.6.0) -- [ToolHive CLI](https://docs.stacklok.com/toolhive/tutorials/quickstart-cli#step-1-install-toolhive) (version >= 0.3.1) +- [ToolHive UI](https://docs.stacklok.com/toolhive/tutorials/quickstart-ui#step-1-install-the-toolhive-ui) + (version >= 0.6.0) +- [ToolHive CLI](https://docs.stacklok.com/toolhive/tutorials/quickstart-cli#step-1-install-toolhive) + (version >= 0.3.1) ### Setup ToolHive UI must be running for the setup. -#### 1. Run MCP-Optimizer server in a dedicated group in ToolHive. +#### 1. Run MCP Optimizer in a dedicated group in ToolHive + +We need to run MCP Optimizer in a dedicated group to configure AI clients +(Cursor, Claude Desktop, etc.) for this group only. This ensures that clients +are configured with only the MCP Optimizer server, while other MCP servers +remain installed but not configured in the AI clients. -We need to run MCP-Optimizer in a dedicated group to configure AI clients (Cursor, Claude Desktop, etc.) for this group only. This ensures that clients are configured with only the MCP-Optimizer server, while other MCP servers remain installed but not configured in the AI clients. ```bash # 1. Create the group thv group create optim -# 2. Run MCP-Optimizer in the dedicated group +# 2. Run MCP Optimizer in the dedicated group thv run --group optim mcp-optimizer ``` -#### 2. Configure MCP-Optimizer server with your favorite AI client. +#### 2. Configure MCP Optimizer with your favorite AI client ```bash # thv client register --group optim. Example: @@ -57,8 +67,11 @@ thv ls ### Run -Now you should be able to use MCP-Optimizer in the chat of the configured client. Examples: +Now you should be able to use MCP Optimizer in the chat of the configured +client. 
Examples: + - With the GitHub MCP server installed, get the details of a GitHub issue + ```markdown Get the details of GitHub issue 1911 from stacklok/toolhive repo ``` @@ -67,7 +80,7 @@ Get the details of GitHub issue 1911 from stacklok/toolhive repo ### Runtime Mode -MCP-Optimizer supports two runtime modes for deploying and managing MCP servers: +MCP Optimizer supports two runtime modes for deploying and managing MCP servers: - **docker** (default): Run MCP servers as Docker containers - **k8s**: Run MCP servers as Kubernetes workloads @@ -75,6 +88,7 @@ MCP-Optimizer supports two runtime modes for deploying and managing MCP servers: Configuration is case-insensitive (e.g., `K8S`, `Docker`, `k8s` are all valid). **Quick Start:** + ```bash # Run in Kubernetes mode via environment variable export RUNTIME_MODE=k8s @@ -87,31 +101,41 @@ mcpo --runtime-mode k8s mcpo --runtime-mode docker ``` -For detailed documentation on runtime modes, including code examples, see [docs/runtime-modes.md](docs/runtime-modes.md). +For detailed documentation on runtime modes, including code examples, see +[docs/runtime-modes.md](docs/runtime-modes.md). ### Group Filtering -MCP-Optimizer supports filtering tool discovery by ToolHive groups. This is useful when running multiple MCP servers across different environments (production, staging, development, etc.) and you want to limit tool discovery to specific groups. +MCP Optimizer supports filtering tool discovery by ToolHive groups. This is +useful when running multiple MCP servers across different environments +(production, staging, development, etc.) and you want to limit tool discovery to +specific groups. **Quick Start:** + ```bash # Only discover tools from production and staging groups export ALLOWED_GROUPS="production,staging" mcp-optimizer ``` -For detailed documentation on group filtering, including usage examples, see [docs/group-filtering.md](docs/group-filtering.md). +For detailed documentation on group filtering, including usage examples, see +[docs/group-filtering.md](docs/group-filtering.md). ### Connection Resilience -MCP-Optimizer automatically handles ToolHive connection failures with intelligent retry logic. If ToolHive (`thv serve`) restarts on a different port, MCP-Optimizer will: +MCP Optimizer automatically handles ToolHive connection failures with +intelligent retry logic. If ToolHive (`thv serve`) restarts on a different port, +MCP Optimizer will: 1. **Detect the connection failure** across all ToolHive API operations 2. **Automatically rescan** for ToolHive on the new port 3. **Retry with exponential backoff** (1s → 2s → 4s → ... up to 60s) -4. **Gracefully exit** after exhausting all retries (default: 100 attempts over ~100 minutes) +4. **Gracefully exit** after exhausting all retries (default: 100 attempts over + ~100 minutes) **Configuration:** + ```bash # Customize retry behavior via environment variables export TOOLHIVE_MAX_RETRIES=150 # Max retry attempts (default: 100) @@ -122,13 +146,21 @@ export TOOLHIVE_MAX_BACKOFF=120.0 # Maximum delay in seconds (default: mcp-optimizer --toolhive-max-retries 15 --toolhive-initial-backoff 2.0 ``` -This ensures MCP-Optimizer remains operational even when ToolHive restarts, minimizing service interruptions. 
For detailed information on configuration +options, testing scenarios, and troubleshooting, see +[docs/connection-resilience.md](docs/connection-resilience.md). ### Environment Variables -- `RUNTIME_MODE`: Runtime mode for MCP servers (`docker` or `k8s`, default: `docker`) -- `ALLOWED_GROUPS`: Comma-separated list of ToolHive group names to filter tool lookups (default: no filtering) -- `TOOLHIVE_MAX_RETRIES`: Maximum retry attempts on connection failure (default: `100`, range: 1-500) -- `TOOLHIVE_INITIAL_BACKOFF`: Initial retry backoff delay in seconds (default: `1.0`, range: 0.1-10.0) -- `TOOLHIVE_MAX_BACKOFF`: Maximum retry backoff delay in seconds (default: `60.0`, range: 1.0-300.0) +- `RUNTIME_MODE`: Runtime mode for MCP servers (`docker` or `k8s`, default: + `docker`) +- `ALLOWED_GROUPS`: Comma-separated list of ToolHive group names to filter tool + lookups (default: no filtering) +- `TOOLHIVE_MAX_RETRIES`: Maximum retry attempts on connection failure (default: + `100`, range: 1-500) +- `TOOLHIVE_INITIAL_BACKOFF`: Initial retry backoff delay in seconds (default: + `1.0`, range: 0.1-10.0) +- `TOOLHIVE_MAX_BACKOFF`: Maximum retry backoff delay in seconds (default: + `60.0`, range: 1.0-300.0) - Additional configuration options can be found in `src/mcp_optimizer/config.py` diff --git a/docs/connection-resilience.md b/docs/connection-resilience.md index a57b98c..5af98f4 100644 --- a/docs/connection-resilience.md +++ b/docs/connection-resilience.md @@ -1,16 +1,16 @@ # Connection Resilience -MCP-Optimizer includes robust connection retry logic to handle scenarios where ToolHive (`thv serve`) becomes unavailable or restarts on a different port. +MCP Optimizer includes robust connection retry logic to handle scenarios where ToolHive (`thv serve`) becomes unavailable or restarts on a different port. ## Overview -When ToolHive restarts (e.g., after a crash or manual restart), it may bind to a different port within the configured range. MCP-Optimizer automatically detects connection failures and attempts to rediscover and reconnect to ToolHive, minimizing service interruptions. +When ToolHive restarts (e.g., after a crash or manual restart), it may bind to a different port within the configured range. MCP Optimizer automatically detects connection failures and attempts to rediscover and reconnect to ToolHive, minimizing service interruptions. ## Features ### 1. Automatic Port Rediscovery -When a connection failure is detected, MCP-Optimizer: +When a connection failure is detected, MCP Optimizer: - Rescans the configured port range (default: 50000-50100) - Tries the initially configured port first (if specified) - Updates its internal connection details when a new port is found @@ -28,7 +28,7 @@ To avoid overwhelming the system during extended outages: - Default: 100 retry attempts (approximately 100 minutes) - Configurable range: 1-500 attempts -- After exhausting retries, MCP-Optimizer logs a critical error and exits gracefully +- After exhausting retries, MCP Optimizer logs a critical error and exits gracefully ### 4. Comprehensive Coverage @@ -65,7 +65,7 @@ mcp-optimizer \ ### Configuration File -These settings follow the standard MCP-Optimizer configuration hierarchy: +These settings follow the standard MCP Optimizer configuration hierarchy: 1. CLI options (highest priority) 2. Environment variables 3. 
Default values (lowest priority) @@ -78,13 +78,13 @@ These settings follow the standard MCP-Optimizer configuration hierarchy: # Terminal 1: Start ToolHive thv serve -# Terminal 2: Start MCP-Optimizer with defaults +# Terminal 2: Start MCP Optimizer with defaults mcp-optimizer ``` **Behavior:** -- MCP-Optimizer connects to ToolHive -- If ToolHive restarts, MCP-Optimizer retries for ~3-4 minutes +- MCP Optimizer connects to ToolHive +- If ToolHive restarts, MCP Optimizer retries for up to ~100 minutes (the default 100 attempts) - Exits if ToolHive doesn't come back online ### Example 2: Extended Retry Window @@ -128,7 +128,7 @@ This tests the automatic port rediscovery feature: thv serve # Note the port (e.g., 50001) -# Terminal 2: Start MCP-Optimizer +# Terminal 2: Start MCP Optimizer mcp-optimizer # Terminal 1: Kill ToolHive @@ -140,7 +140,7 @@ thv serve ``` **Expected Result:** -- MCP-Optimizer detects connection failure +- MCP Optimizer detects connection failure - Logs: "ToolHive connection failed" - Logs: "Attempting to rediscover ToolHive port" - Logs: "Successfully rediscovered ToolHive on new port" @@ -154,7 +154,7 @@ This tests the exponential backoff and ultimate failure handling: # Terminal 1: Start ToolHive thv serve -# Terminal 2: Start MCP-Optimizer +# Terminal 2: Start MCP Optimizer mcp-optimizer # Terminal 1: Kill ToolHive (don't restart) @@ -162,7 +162,7 @@ pkill -f 'thv serve' ``` **Expected Result:** -- MCP-Optimizer detects connection failure +- MCP Optimizer detects connection failure - Retries with increasing delays - Logs each attempt with backoff time - After 100 attempts (~100 minutes), logs critical error @@ -176,7 +176,7 @@ This tests initial connection retry: # Make sure ToolHive is not running pkill -f 'thv serve' -# Start MCP-Optimizer +# Start MCP Optimizer mcp-optimizer # In another terminal, start ToolHive within retry window @@ -184,14 +184,14 @@ thv serve ``` **Expected Result:** -- MCP-Optimizer attempts initial connection +- MCP Optimizer attempts initial connection - Retries with exponential backoff - If ToolHive starts during retry window: connects and continues - If ToolHive doesn't start: exits after max retries ## Logging -MCP-Optimizer provides detailed logging at each stage: +MCP Optimizer provides detailed logging at each stage: ### Connection Failure ``` @@ -255,7 +255,7 @@ The polling manager also implements connection resilience: ## Troubleshooting -### Problem: MCP-Optimizer exits too quickly +### Problem: MCP Optimizer exits too quickly **Solution:** Increase retry attempts and/or backoff delays: ```bash export TOOLHIVE_MAX_RETRIES=20 export TOOLHIVE_MAX_BACKOFF=120.0 ``` -### Problem: MCP-Optimizer takes too long to fail +### Problem: MCP Optimizer takes too long to fail **Solution:** Decrease retry attempts: ```bash diff --git a/docs/group-filtering.md b/docs/group-filtering.md index ed6854c..20e8e63 100644 --- a/docs/group-filtering.md +++ b/docs/group-filtering.md @@ -1,6 +1,6 @@ # Group Filtering -MCP-Optimizer supports filtering tool lookups by ToolHive groups, allowing you to restrict which tools are discoverable based on their group membership. +MCP Optimizer supports filtering tool lookups by ToolHive groups, allowing you to restrict which tools are discoverable based on their group membership. ## Overview @@ -51,9 +51,9 @@ mcp-optimizer ## How It Works 1. 
**Server Group Assignment**: When MCP Optimizer ingests MCP servers from ToolHive, it automatically captures and stores each server's group information. -2. **Tool Discovery**: When searching for tools using `find_tool`, `list_tools`, or `search_registry`, MCP-Optimizer: +2. **Tool Discovery**: When searching for tools using `find_tool`, `list_tools`, or `search_registry`, MCP Optimizer: - Checks if group filtering is configured via the `ALLOWED_GROUPS` environment variable - Filters the search to only include tools from servers in the specified groups - Returns only matching tools @@ -84,7 +84,7 @@ Group filtering is applied at the database query level, affecting: ## Integration with ToolHive -MCP-Optimizer automatically discovers and respects ToolHive group assignments: +MCP Optimizer automatically discovers and respects ToolHive group assignments: ```bash # Create groups in ToolHive @@ -95,7 +95,7 @@ thv group create staging thv run --group production github thv run --group staging postgres -# Configure MCP-Optimizer to only see production tools +# Configure MCP Optimizer to only see production tools export ALLOWED_GROUPS="production" mcp-optimizer ``` @@ -103,5 +103,5 @@ mcp-optimizer ## See Also - [ToolHive Groups Documentation](https://docs.stacklok.com/toolhive/) -- [MCP-Optimizer README](../README.md) +- [MCP Optimizer README](../README.md) diff --git a/docs/helm-deployment.md b/docs/helm-deployment.md index 9227cb8..12b2b08 100644 --- a/docs/helm-deployment.md +++ b/docs/helm-deployment.md @@ -1,10 +1,10 @@ -# Deploying MCP-Optimizer with Helm +# Deploying MCP Optimizer with Helm -This guide walks you through deploying MCP-Optimizer in Kubernetes using the Helm chart. +This guide walks you through deploying MCP Optimizer in Kubernetes using the Helm chart. ## Overview -The MCP-Optimizer Helm chart deploys MCP-Optimizer as an MCPServer Custom Resource Definition (CRD) in your Kubernetes cluster. The ToolHive operator then manages the actual deployment, creating a pod that runs the MCP-Optimizer container. +The MCP Optimizer Helm chart deploys MCP Optimizer as an MCPServer Custom Resource Definition (CRD) in your Kubernetes cluster. The ToolHive operator then manages the actual deployment, creating a pod that runs the MCP Optimizer container. 
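For reference, the connection-resilience behavior documented earlier (detect the failure, rescan the port range, back off exponentially, give up after the retry budget) reduces to a small loop. A minimal sketch, assuming illustrative callables `operation` and `rediscover_port` rather than the actual mcp-optimizer internals:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

# Defaults documented above: TOOLHIVE_MAX_RETRIES=100,
# TOOLHIVE_INITIAL_BACKOFF=1.0, TOOLHIVE_MAX_BACKOFF=60.0
MAX_RETRIES = 100
INITIAL_BACKOFF = 1.0
MAX_BACKOFF = 60.0


async def call_with_retry(operation, rediscover_port):
    """Retry a ToolHive API call with capped exponential backoff.

    `operation` and `rediscover_port` are hypothetical async callables
    standing in for the real ToolHive client internals.
    """
    backoff = INITIAL_BACKOFF
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await operation()
        except ConnectionError:
            logger.warning(
                "ToolHive connection failed (attempt %d/%d), retrying in %.1fs",
                attempt, MAX_RETRIES, backoff,
            )
            await rediscover_port()  # rescan the configured port range
            await asyncio.sleep(backoff)
            backoff = min(backoff * 2, MAX_BACKOFF)  # 1s -> 2s -> 4s ... 60s
    raise SystemExit("ToolHive unreachable after exhausting all retries")
```

Capping the delay at `TOOLHIVE_MAX_BACKOFF` keeps extended outages from producing ever-longer gaps between reconnection probes.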
## Architecture @@ -16,7 +16,7 @@ The MCP-Optimizer Helm chart deploys MCP-Optimizer as an MCPServer Custom Resour │ │ toolhive-system namespace │ │ │ │ │ │ │ │ ┌──────────────┐ ┌──────────────┐ │ │ -│ │ │ MCP-Optimizer │ │ ToolHive │ │ │ +│ │ │ MCP Optimizer │ │ ToolHive │ │ │ │ │ │ MCPServer │────────>│ Operator │ │ │ │ │ │ (CRD) │ │ │ │ │ │ │ └──────────────┘ └──────┬───────┘ │ │ @@ -24,7 +24,7 @@ The MCP-Optimizer Helm chart deploys MCP-Optimizer as an MCPServer Custom Resour │ │ │ creates │ │ │ │ v │ │ │ │ ┌──────────────┐ │ │ -│ │ │ MCP-Optimizer │ │ │ +│ │ │ MCP Optimizer │ │ │ │ │ │ Pod │ │ │ │ │ │ │ │ │ │ │ │ queries K8s │ │ │ @@ -96,7 +96,7 @@ kubectl get crd mcpservers.toolhive.stacklok.dev ### Basic Installation -Install MCP-Optimizer with default values: +Install MCP Optimizer with default values: ```bash cd /path/to/mcp-optimizer @@ -138,7 +138,7 @@ helm install mcp-optimizer ./helm/mcp-optimizer \ ```yaml mcpserver: image: - repository: ghcr.io/your-org/mcp-optimizer + repository: ghcr.io/stackloklabs/mcp-optimizer tag: "0.1.0" pullPolicy: IfNotPresent ``` @@ -227,7 +227,7 @@ kubectl logs -l toolhive.stacklok.dev/mcpserver=mcp-optimizer -n toolhive-system kubectl logs -l toolhive.stacklok.dev/mcpserver=mcp-optimizer -n toolhive-system --tail=100 ``` -### Test MCP-Optimizer Functionality +### Test MCP Optimizer Functionality 1. **Check that mcp-optimizer discovers MCPServers:** @@ -293,7 +293,7 @@ helm rollback mcp-optimizer 2 -n toolhive-system ## Uninstallation -To remove MCP-Optimizer: +To remove MCP Optimizer: ```bash helm uninstall mcp-optimizer -n toolhive-system @@ -445,7 +445,7 @@ helm install postgres bitnami/postgresql \ -n default ``` -2. Configure MCP-Optimizer to use PostgreSQL: +2. Configure MCP Optimizer to use PostgreSQL: ```yaml database: type: postgresql @@ -484,7 +484,7 @@ metadata: spec: project: default source: - repoURL: https://github.com/your-org/mcp-optimizer + repoURL: https://github.com/StacklokLabs/mcp-optimizer targetRevision: main path: helm/mcp-optimizer helm: @@ -510,7 +510,7 @@ spec: ## Support For help and support: -- GitHub Issues: https://github.com/your-org/mcp-optimizer/issues -- Documentation: https://github.com/your-org/mcp-optimizer/docs +- GitHub Issues: https://github.com/StacklokLabs/mcp-optimizer/issues +- Documentation: https://github.com/StacklokLabs/mcp-optimizer/docs - ToolHive Support: https://github.com/stacklok/toolhive/issues diff --git a/docs/in-cluster-authentication.md b/docs/in-cluster-authentication.md index aa2ddd1..a2c043b 100644 --- a/docs/in-cluster-authentication.md +++ b/docs/in-cluster-authentication.md @@ -1,10 +1,10 @@ # In-Cluster Authentication -MCP-Optimizer automatically handles authentication when running inside a Kubernetes cluster by using the Kubernetes service account system. +MCP Optimizer automatically handles authentication when running inside a Kubernetes cluster by using the Kubernetes service account system. ## How It Works -When MCP-Optimizer runs as a Pod in Kubernetes: +When MCP Optimizer runs as a Pod in Kubernetes: ### 1. 
Service Account Token @@ -13,7 +13,7 @@ Kubernetes automatically mounts a service account token into every Pod at: /var/run/secrets/kubernetes.io/serviceaccount/token ``` -MCP-Optimizer detects this file and uses it for authentication: +MCP Optimizer detects this file and uses it for authentication: ```python # Automatically loaded token = Path("/var/run/secrets/kubernetes.io/serviceaccount/token").read_text() @@ -27,7 +27,7 @@ The cluster's CA certificate is mounted at: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt ``` -MCP-Optimizer uses this for SSL verification when communicating with the Kubernetes API server: +MCP Optimizer uses this for SSL verification when communicating with the Kubernetes API server: ```python # Automatically configured verify_ssl = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" @@ -39,7 +39,7 @@ Kubernetes injects environment variables into every Pod: - `KUBERNETES_SERVICE_HOST` - The API server hostname - `KUBERNETES_SERVICE_PORT_HTTPS` - The HTTPS port (typically 443) -MCP-Optimizer automatically constructs the API server URL from these: +MCP Optimizer automatically constructs the API server URL from these: ```python # Automatically done when RUNTIME_MODE=k8s and in-cluster api_url = f"https://{KUBERNETES_SERVICE_HOST}:{KUBERNETES_SERVICE_PORT_HTTPS}" @@ -120,7 +120,7 @@ Standard Kubernetes environment variables (automatically injected): | `KUBERNETES_SERVICE_PORT_HTTPS` | HTTPS port | `443` | | `KUBERNETES_SERVICE_PORT` | Fallback port | `443` | -MCP-Optimizer configuration (optional overrides): +MCP Optimizer configuration (optional overrides): | Variable | Description | Default | Auto-detected? | |----------|-------------|---------|----------------| @@ -169,7 +169,7 @@ roleRef: EOF ``` -### Step 2: Deploy MCP-Optimizer +### Step 2: Deploy MCP Optimizer ```yaml apiVersion: apps/v1 diff --git a/docs/kubernetes-integration.md b/docs/kubernetes-integration.md index d2c639d..093c5d5 100644 --- a/docs/kubernetes-integration.md +++ b/docs/kubernetes-integration.md @@ -1,17 +1,17 @@ # Kubernetes Integration and Installation Guide -This guide covers deploying MCP-Optimizer and MCP servers in Kubernetes using the ToolHive operator, including configuration, installation, and client integration. +This guide covers deploying MCP Optimizer and MCP servers in Kubernetes using the ToolHive operator, including configuration, installation, and client integration. ## Table of Contents - [Overview](#overview) - [Quick Start](#quick-start) - [Prerequisites](#prerequisites) -- [MCP-Optimizer Kubernetes Mode](#mcp-optimizer-kubernetes-mode) +- [MCP Optimizer Kubernetes Mode](#mcp-optimizer-kubernetes-mode) - [Installing ToolHive Operator](#installing-toolhive-operator) -- [Building MCP-Optimizer Image](#building-mcp-optimizer-image) +- [Building MCP Optimizer Image](#building-mcp-optimizer-image) - [Installing MCP Servers](#installing-mcp-servers) -- [Deploying MCP-Optimizer](#deploying-mcp-optimizer) +- [Deploying MCP Optimizer](#deploying-mcp-optimizer) - [Connecting Clients](#connecting-clients) - [RBAC Configuration](#rbac-configuration) - [MCPServer CRD Mapping](#mcpserver-crd-mapping) @@ -20,7 +20,7 @@ This guide covers deploying MCP-Optimizer and MCP servers in Kubernetes using th ## Overview -MCP-Optimizer supports running in Kubernetes mode, where it queries MCPServer Custom Resource Definitions (CRDs) instead of the Docker-based workloads API. 
This enables: +MCP Optimizer supports running in Kubernetes mode, where it queries MCPServer Custom Resource Definitions (CRDs) instead of the Docker-based workloads API. This enables: - Native Kubernetes integration with MCPServer CRDs - Automatic service discovery and tool aggregation @@ -38,7 +38,7 @@ helm upgrade -i toolhive-operator-crds oci://ghcr.io/stacklok/toolhive/toolhive- helm upgrade -i toolhive-operator oci://ghcr.io/stacklok/toolhive/toolhive-operator \ -n toolhive-system --create-namespace -# 2. Deploy MCP-Optimizer +# 2. Deploy MCP Optimizer # Option A: From OCI registry (recommended) helm install mcp-optimizer oci://ghcr.io/stacklok/mcp-optimizer/mcp-optimizer -n toolhive-system @@ -71,15 +71,15 @@ Read the sections below for detailed explanations, configuration options, and tr - Kubernetes cluster (v1.19+) - `kubectl` configured to access your cluster -- Helm 3.x (for operator and MCP-Optimizer installation) +- Helm 3.x (for operator and MCP Optimizer installation) - ToolHive operator (installation steps below) -- Docker (for building local MCP-Optimizer image) +- Docker (for building local MCP Optimizer image) -## MCP-Optimizer Kubernetes Mode +## MCP Optimizer Kubernetes Mode ### Configuration -MCP-Optimizer uses environment variables for Kubernetes mode configuration: +MCP Optimizer uses environment variables for Kubernetes mode configuration: ```bash # Required: Set runtime mode to k8s @@ -124,7 +124,7 @@ Easiest way to test Kubernetes integration locally: #### Mode 2: Running In-Cluster (Production) -When MCP-Optimizer is deployed as a pod in Kubernetes, everything is automatically configured: +When MCP Optimizer is deployed as a pod in Kubernetes, everything is automatically configured: - **Runtime mode**: Set to `k8s` via environment variable - **API server URL**: Automatically constructed from `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT_HTTPS` @@ -136,7 +136,7 @@ When MCP-Optimizer is deployed as a pod in Kubernetes, everything is automatical #### Mode 3: Remote Access with kubeconfig -For running MCP-Optimizer outside the cluster with cluster access: +For running MCP Optimizer outside the cluster with cluster access: ```bash export RUNTIME_MODE=k8s @@ -167,13 +167,13 @@ kubectl get crd mcpservers.toolhive.stacklok.dev kubectl get pods -n toolhive-system ``` -## Building MCP-Optimizer Image (Optional for Local Development) +## Building MCP Optimizer Image (Optional for Local Development) By default, the Helm chart uses published container images from `ghcr.io/stacklok/mcp-optimizer`. For most users, you can skip this section and proceed directly to deployment. ### For Local Development -If you need to test local changes to MCP-Optimizer, you can build and use a local image: +If you need to test local changes to MCP Optimizer, you can build and use a local image: ```bash # From the mcp-optimizer repository root @@ -309,16 +309,16 @@ After successful installation: - Transport: stdio - Requires: GITHUB_PERSONAL_ACCESS_TOKEN secret -## Deploying MCP-Optimizer +## Deploying MCP Optimizer -MCP-Optimizer aggregates all MCP servers in the cluster and provides unified tool discovery. +MCP Optimizer aggregates all MCP servers in the cluster and provides unified tool discovery. 
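Before the installation steps below, note that the in-cluster auto-configuration described in Mode 2 above fits in a few lines of Python. A sketch assuming only the standard mount paths and environment variables Kubernetes injects into every Pod (the helper name `in_cluster_config` is illustrative):

```python
import os
from pathlib import Path

TOKEN_PATH = Path("/var/run/secrets/kubernetes.io/serviceaccount/token")
CA_PATH = Path("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")


def in_cluster_config() -> dict:
    """Build Kubernetes API connection settings from the Pod environment."""
    host = os.environ["KUBERNETES_SERVICE_HOST"]
    port = os.environ.get(
        "KUBERNETES_SERVICE_PORT_HTTPS",
        os.environ.get("KUBERNETES_SERVICE_PORT", "443"),
    )
    return {
        "api_url": f"https://{host}:{port}",
        "token": TOKEN_PATH.read_text().strip(),  # service account token
        "verify_ssl": str(CA_PATH),               # cluster CA bundle
    }
```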
### Using OCI Registry (Recommended) -Install MCP-Optimizer from the published Helm chart in the OCI registry: +Install MCP Optimizer from the published Helm chart in the OCI registry: ```bash -# Deploy MCP-Optimizer from OCI registry +# Deploy MCP Optimizer from OCI registry helm install mcp-optimizer oci://ghcr.io/stacklok/mcp-optimizer/mcp-optimizer -n toolhive-system ``` @@ -340,7 +340,7 @@ helm install mcp-optimizer oci://ghcr.io/stacklok/mcp-optimizer/mcp-optimizer \ You can also install from the source repository: ```bash -# Deploy MCP-Optimizer from local chart directory +# Deploy MCP Optimizer from local chart directory helm install mcp-optimizer ./helm/mcp-optimizer -n toolhive-system ``` @@ -368,16 +368,16 @@ helm install mcp-optimizer oci://ghcr.io/stacklok/mcp-optimizer/mcp-optimizer \ --set mcpserver.image.tag=v1.0.0 ``` -### Verify MCP-Optimizer Installation +### Verify MCP Optimizer Installation ```bash -# Check MCP-Optimizer MCPServer resource +# Check MCP Optimizer MCPServer resource kubectl get mcpserver mcp-optimizer -n toolhive-system -# Check MCP-Optimizer pod +# Check MCP Optimizer pod kubectl get pods -n toolhive-system -l app.kubernetes.io/name=mcp-optimizer -# Check MCP-Optimizer logs to see discovered tools +# Check MCP Optimizer logs to see discovered tools kubectl logs -n toolhive-system mcp-optimizer-0 --tail=50 | grep -E "fetch|github|successful|total_tools" ``` @@ -391,7 +391,7 @@ Workload ingestion with cleanup completed failed=0 successful=2 total_tools=51 ### Verify Proxy Mode -MCP-Optimizer must use `streamable-http` proxy mode for proper Cursor integration: +MCP Optimizer must use `streamable-http` proxy mode for proper Cursor integration: ```bash kubectl get mcpserver mcp-optimizer -n toolhive-system -o jsonpath='{.spec.proxyMode}' && echo @@ -407,9 +407,9 @@ kubectl patch mcpserver mcp-optimizer -n toolhive-system --type=merge -p '{"spec ### Connecting Cursor -To use MCP-Optimizer from Cursor, expose the service locally and configure Cursor's MCP settings. +To use MCP Optimizer from Cursor, expose the service locally and configure Cursor's MCP settings. -#### Step 1: Port Forward MCP-Optimizer Service +#### Step 1: Port Forward MCP Optimizer Service In a terminal, run and keep this command running: @@ -452,11 +452,11 @@ Try asking Cursor: - "Fetch the content from https://example.com" - "Get GitHub issue #123 from stacklok/toolhive" -MCP-Optimizer will automatically discover and route requests to the appropriate MCP servers. +MCP Optimizer will automatically discover and route requests to the appropriate MCP servers. ### Alternative: Direct Connection to GitHub -If you want to connect directly to the GitHub MCP server without MCP-Optimizer: +If you want to connect directly to the GitHub MCP server without MCP Optimizer: #### Port Forward GitHub Service @@ -514,7 +514,7 @@ Configuration file: `~/Library/Application Support/Claude/claude_desktop_config. ## RBAC Configuration -When running in-cluster, MCP-Optimizer requires appropriate RBAC permissions to read MCPServer resources: +When running in-cluster, MCP Optimizer requires appropriate RBAC permissions to read MCPServer resources: ```yaml apiVersion: v1 @@ -546,11 +546,11 @@ roleRef: apiGroup: rbac.authorization.k8s.io ``` -The MCP-Optimizer Helm chart automatically creates these resources when `rbac.create: true` (default). +The MCP Optimizer Helm chart automatically creates these resources when `rbac.create: true` (default). 
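With that ClusterRole bound, reading MCPServer resources takes a single call through the official `kubernetes` Python client. A sketch, assuming the `kubernetes` package is installed; the `v1alpha1` version string is a guess, so check the installed CRD for the served version:

```python
from kubernetes import client, config

config.load_incluster_config()  # uses the mounted token and CA described above
api = client.CustomObjectsApi()

# group/plural come from the CRD name mcpservers.toolhive.stacklok.dev;
# the version is an assumption -- verify against `kubectl get crd` output.
servers = api.list_cluster_custom_object(
    group="toolhive.stacklok.dev",
    version="v1alpha1",
    plural="mcpservers",
)
for item in servers.get("items", []):
    print(item["metadata"]["namespace"], item["metadata"]["name"])
```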
## MCPServer CRD Mapping -MCP-Optimizer converts MCPServer CRDs to internal Workload models with the following field mappings: +MCP Optimizer converts MCPServer CRDs to internal Workload models with the following field mappings: | MCPServer Field | Workload Field | Notes | |----------------|---------------|-------| @@ -580,7 +580,7 @@ kubectl logs -n toolhive-system fetch-0 # GitHub server kubectl logs -n toolhive-system github-0 -# MCP-Optimizer +# MCP Optimizer kubectl logs -n toolhive-system mcp-optimizer-0 --tail=100 ``` @@ -623,7 +623,7 @@ If tools are not being discovered: - Check the MCPServer status.tools field: `kubectl get mcpserver -o jsonpath='{.status.tools}'` - Ensure the MCP server pod is running: `kubectl get pods -l toolhive=true` - Check MCP server logs for errors -- Verify MCP-Optimizer is polling correctly: Check MCP-Optimizer logs for ingestion messages +- Verify MCP Optimizer is polling correctly: Check MCP Optimizer logs for ingestion messages ### Verification Commands @@ -634,7 +634,7 @@ kubectl get mcpserver -n toolhive-system # Check all MCP server pods kubectl get pods -n toolhive-system | grep -E "fetch|github|mcp-optimizer" -# Check MCP-Optimizer ingestion status +# Check MCP Optimizer ingestion status kubectl logs -n toolhive-system mcp-optimizer-0 --tail=100 | grep "Workload ingestion with cleanup completed" ``` @@ -686,10 +686,10 @@ Mismatches between these fields will cause connection failures. ### Group Filtering -MCP-Optimizer supports filtering MCPServers by group labels. This is useful for isolating servers by environment or team: +MCP Optimizer supports filtering MCPServers by group labels. This is useful for isolating servers by environment or team: ```yaml -# In MCP-Optimizer deployment, set environment variable: +# In MCP Optimizer deployment, set environment variable: - name: ALLOWED_GROUPS value: "development,production" ``` @@ -716,7 +716,7 @@ For detailed information about in-cluster authentication mechanisms, service acc - Review the [ToolHive documentation](https://github.com/stacklok/toolhive) - Learn about [MCPServer CRD configuration options](https://github.com/stacklok/toolhive/tree/main/deploy/operator) - Explore additional MCP servers in the [examples directory](../examples/mcp-servers/) -- Deploy MCP-Optimizer from [OCI registry (releases)](https://github.com/StacklokLabs/mcp-optimizer/releases) or [local Helm chart](../helm/mcp-optimizer/) +- Deploy MCP Optimizer from [OCI registry (releases)](https://github.com/StacklokLabs/mcp-optimizer/releases) or [local Helm chart](../helm/mcp-optimizer/) ## See Also diff --git a/docs/runtime-modes.md b/docs/runtime-modes.md index 5d4d4b3..6fe36eb 100644 --- a/docs/runtime-modes.md +++ b/docs/runtime-modes.md @@ -1,6 +1,6 @@ # Runtime Modes -MCP-Optimizer supports two runtime modes for deploying and managing MCP servers: +MCP Optimizer supports two runtime modes for deploying and managing MCP servers: ## Modes diff --git a/examples/mcp-servers/README.md b/examples/mcp-servers/README.md index 216d2b8..50278fb 100644 --- a/examples/mcp-servers/README.md +++ b/examples/mcp-servers/README.md @@ -6,7 +6,7 @@ This directory contains example MCPServer manifests for deploying MCP servers wi ## Overview -These examples demonstrate how to deploy MCP servers that work with MCP-Optimizer. MCP-Optimizer automatically discovers these servers and aggregates their tools into a unified interface. +These examples demonstrate how to deploy MCP servers that work with MCP Optimizer. 
MCP Optimizer automatically discovers these servers and aggregates their tools into a unified interface. ## Prerequisites @@ -14,9 +14,9 @@ Before deploying these example servers, you must: 1. ✅ Have a Kubernetes cluster with the ToolHive operator installed 2. ✅ Have `kubectl` configured to access your cluster -3. ✅ **Have MCP-Optimizer deployed** (see [Deploying MCP-Optimizer](../../docs/kubernetes-integration.md#deploying-mcp-optimizer)) +3. ✅ **Have MCP Optimizer deployed** (see [Deploying MCP Optimizer](../../docs/kubernetes-integration.md#deploying-mcp-optimizer)) -**Important:** MCP-Optimizer must be running to aggregate and expose the tools from these servers. If you haven't deployed MCP-Optimizer yet, follow the [complete installation guide](../../docs/kubernetes-integration.md) first. +**Important:** MCP Optimizer must be running to aggregate and expose the tools from these servers. If you haven't deployed MCP Optimizer yet, follow the [complete installation guide](../../docs/kubernetes-integration.md) first. ## Quick Start @@ -41,13 +41,13 @@ kubectl get mcpserver github -n toolhive-system ### 3. Verify Deployment -Check that MCP-Optimizer discovers the deployed servers: +Check that MCP Optimizer discovers the deployed servers: ```bash # Check that all MCPServers are running kubectl get mcpserver -n toolhive-system -# Check MCP-Optimizer logs for server discovery +# Check MCP Optimizer logs for server discovery kubectl logs -n toolhive-system mcp-optimizer-0 --tail=50 | grep -E "fetch|github|total_tools" ``` @@ -59,14 +59,14 @@ Using sse client for workload workload=github Workload ingestion with cleanup completed failed=0 successful=2 total_tools=50 ``` -This confirms that MCP-Optimizer has successfully discovered and ingested the tools from both servers. +This confirms that MCP Optimizer has successfully discovered and ingested the tools from both servers. ### 4. Test the Connection -To test MCP-Optimizer and the example servers from your local machine: +To test MCP Optimizer and the example servers from your local machine: ```bash -# Port forward MCP-Optimizer service +# Port forward MCP Optimizer service kubectl port-forward -n toolhive-system svc/mcp-optimizer-proxy 9900:9900 ``` @@ -81,7 +81,7 @@ curl -s http://localhost:9900/mcp \ -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}}}' ``` -You should see a successful initialization response from MCP-Optimizer. +You should see a successful initialization response from MCP Optimizer. For client configuration (Cursor, VSCode, Claude Desktop), see [Connecting Clients](../../docs/kubernetes-integration.md#connecting-clients). 
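The same smoke test is easy to script. A Python equivalent of the curl call above, assuming the port-forward from the previous step is still running and `requests` is installed:

```python
import requests

# Same initialize request as the curl example above, sent to the
# port-forwarded MCP Optimizer endpoint.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "test", "version": "1.0"},
    },
}
resp = requests.post("http://localhost:9900/mcp", json=payload, timeout=10)
resp.raise_for_status()
print(resp.text)  # expect a successful initialize response
```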
@@ -95,8 +95,8 @@ For client configuration (Cursor, VSCode, Claude Desktop), see [Connecting Clien For the complete installation and configuration guide, see the [Kubernetes Integration and Installation Guide](../../docs/kubernetes-integration.md), which covers: - **Getting Started**: Complete installation flow from scratch -- **Building Images**: How to build or use MCP-Optimizer images -- **Deploying MCP-Optimizer**: Helm installation with local or remote images +- **Building Images**: How to build or use MCP Optimizer images +- **Deploying MCP Optimizer**: Helm installation with local or remote images - **Transport Configuration**: streamable-http, SSE, and stdio transports - **Client Connections**: Cursor, VSCode, and Claude Desktop setup - **RBAC Configuration**: Service accounts and permissions diff --git a/helm/README.md b/helm/README.md index face2b5..d54321b 100644 --- a/helm/README.md +++ b/helm/README.md @@ -1,12 +1,12 @@ -# MCP-Optimizer Helm Charts +# MCP Optimizer Helm Charts -This directory contains Helm charts for deploying MCP-Optimizer in Kubernetes. +This directory contains Helm charts for deploying MCP Optimizer in Kubernetes. ## Available Charts ### mcp-optimizer -The main chart for deploying MCP-Optimizer as an MCPServer resource in a Kubernetes cluster with ToolHive operator. +The main chart for deploying MCP Optimizer as an MCPServer resource in a Kubernetes cluster with ToolHive operator. - **Location**: `./mcp-optimizer/` - **Documentation**: See [mcp-optimizer/README.md](./mcp-optimizer/README.md) @@ -40,7 +40,7 @@ helm upgrade -i toolhive-operator \ --version latest ``` -### Installing MCP-Optimizer +### Installing MCP Optimizer ```bash # From the mcp-optimizer repository root @@ -92,6 +92,6 @@ See the [mcp-optimizer Chart README](./mcp-optimizer/README.md). ## Support For issues and questions: -- GitHub Issues: https://github.com/your-org/mcp-optimizer/issues -- Documentation: https://github.com/your-org/mcp-optimizer/docs +- GitHub Issues: https://github.com/StacklokLabs/mcp-optimizer/issues +- Documentation: https://github.com/StacklokLabs/mcp-optimizer/docs diff --git a/helm/mcp-optimizer/Chart.yaml b/helm/mcp-optimizer/Chart.yaml index 05adfba..1c0011d 100644 --- a/helm/mcp-optimizer/Chart.yaml +++ b/helm/mcp-optimizer/Chart.yaml @@ -1,6 +1,6 @@ apiVersion: v2 name: mcp-optimizer -description: A Helm chart for deploying MCP-Optimizer MCP Server in Kubernetes +description: A Helm chart for deploying MCP Optimizer MCP Server in Kubernetes type: application version: 0.1.0 appVersion: "0.1.0" @@ -13,5 +13,5 @@ home: https://github.com/StacklokLabs/mcp-optimizer sources: - https://github.com/StacklokLabs/mcp-optimizer maintainers: - - name: MCP-Optimizer Team + - name: MCP Optimizer Team diff --git a/helm/mcp-optimizer/QUICKSTART.md b/helm/mcp-optimizer/QUICKSTART.md index e304fe2..785d722 100644 --- a/helm/mcp-optimizer/QUICKSTART.md +++ b/helm/mcp-optimizer/QUICKSTART.md @@ -1,6 +1,6 @@ -# MCP-Optimizer Helm Chart - Quick Start +# MCP Optimizer Helm Chart - Quick Start -A quick reference guide for deploying MCP-Optimizer with Helm. +A quick reference guide for deploying MCP Optimizer with Helm. 
## Prerequisites Check diff --git a/helm/mcp-optimizer/README.md b/helm/mcp-optimizer/README.md index 7f7beaf..31daae2 100644 --- a/helm/mcp-optimizer/README.md +++ b/helm/mcp-optimizer/README.md @@ -1,6 +1,6 @@ -# MCP-Optimizer Helm Chart +# MCP Optimizer Helm Chart -This Helm chart deploys MCP-Optimizer as an MCPServer resource in a Kubernetes cluster with ToolHive operator installed. +This Helm chart deploys MCP Optimizer as an MCPServer resource in a Kubernetes cluster with ToolHive operator installed. ## Prerequisites @@ -47,7 +47,7 @@ helm uninstall mcp-optimizer -n toolhive-system ## Configuration -The following table lists the configurable parameters of the MCP-Optimizer chart and their default values. +The following table lists the configurable parameters of the MCP Optimizer chart and their default values. ### MCPServer Configuration @@ -288,10 +288,10 @@ For production deployments, consider: ## Additional Resources -- [MCP-Optimizer Documentation](https://github.com/your-org/mcp-optimizer) +- [MCP Optimizer Documentation](https://github.com/StacklokLabs/mcp-optimizer) - [ToolHive Documentation](https://github.com/stacklok/toolhive) - [Kubernetes Integration Guide](/docs/kubernetes-integration.md) ## License -This Helm chart is licensed under the same license as the MCP-Optimizer project. +This Helm chart is licensed under the same license as the MCP Optimizer project. diff --git a/helm/mcp-optimizer/examples/values-development.yaml b/helm/mcp-optimizer/examples/values-development.yaml index 7c5dd00..c87df9f 100644 --- a/helm/mcp-optimizer/examples/values-development.yaml +++ b/helm/mcp-optimizer/examples/values-development.yaml @@ -1,4 +1,4 @@ -# Example values file for MCP-Optimizer in development environment +# Example values file for MCP Optimizer in development environment # This configuration is optimized for local development and testing mcpserver: diff --git a/helm/mcp-optimizer/examples/values-single-namespace.yaml b/helm/mcp-optimizer/examples/values-single-namespace.yaml index 392a62b..f8a3d64 100644 --- a/helm/mcp-optimizer/examples/values-single-namespace.yaml +++ b/helm/mcp-optimizer/examples/values-single-namespace.yaml @@ -1,4 +1,4 @@ -# Example values file for MCP-Optimizer configured to query a single namespace +# Example values file for MCP Optimizer configured to query a single namespace # This is useful when you want to limit mcp-optimizer's scope mcpserver: @@ -6,7 +6,7 @@ mcpserver: namespace: toolhive-system image: - repository: ghcr.io/your-org/mcp-optimizer + repository: ghcr.io/stackloklabs/mcp-optimizer tag: "0.1.0" env: diff --git a/helm/mcp-optimizer/examples/values-with-group-filtering.yaml b/helm/mcp-optimizer/examples/values-with-group-filtering.yaml index f85512e..34a40a5 100644 --- a/helm/mcp-optimizer/examples/values-with-group-filtering.yaml +++ b/helm/mcp-optimizer/examples/values-with-group-filtering.yaml @@ -1,4 +1,4 @@ -# Example values file for MCP-Optimizer with group filtering +# Example values file for MCP Optimizer with group filtering # This limits mcp-optimizer to only discover MCP servers in specific groups mcpserver: @@ -6,7 +6,7 @@ mcpserver: namespace: toolhive-system image: - repository: ghcr.io/your-org/mcp-optimizer + repository: ghcr.io/stackloklabs/mcp-optimizer tag: "0.1.0" resources: diff --git a/helm/mcp-optimizer/examples/values-with-persistence.yaml b/helm/mcp-optimizer/examples/values-with-persistence.yaml index e74fd37..96c7151 100644 --- a/helm/mcp-optimizer/examples/values-with-persistence.yaml +++ 
b/helm/mcp-optimizer/examples/values-with-persistence.yaml @@ -1,4 +1,4 @@ -# Example values file for MCP-Optimizer with persistence enabled +# Example values file for MCP Optimizer with persistence enabled # This configuration is suitable for production use mcpserver: diff --git a/helm/mcp-optimizer/templates/NOTES.txt b/helm/mcp-optimizer/templates/NOTES.txt index 0ea7c95..da40418 100644 --- a/helm/mcp-optimizer/templates/NOTES.txt +++ b/helm/mcp-optimizer/templates/NOTES.txt @@ -7,7 +7,7 @@ To learn more about the release, try: $ helm status {{ .Release.Name }} -n {{ .Values.mcpserver.namespace }} $ helm get all {{ .Release.Name }} -n {{ .Values.mcpserver.namespace }} -MCP-Optimizer has been deployed as an MCPServer resource in the {{ .Values.mcpserver.namespace }} namespace. +MCP Optimizer has been deployed as an MCPServer resource in the {{ .Values.mcpserver.namespace }} namespace. The MCPServer resource will be managed by the ToolHive operator, which will create: - A Deployment for the mcp-optimizer container @@ -61,6 +61,6 @@ Configuration: - Image: {{ include "mcp-optimizer.image" . }} For more information, visit: -- MCP-Optimizer Documentation: https://github.com/your-org/mcp-optimizer +- MCP Optimizer Documentation: https://github.com/StacklokLabs/mcp-optimizer - ToolHive Documentation: https://github.com/stacklok/toolhive diff --git a/helm/mcp-optimizer/values.yaml b/helm/mcp-optimizer/values.yaml index cb2c8a7..6291e7e 100644 --- a/helm/mcp-optimizer/values.yaml +++ b/helm/mcp-optimizer/values.yaml @@ -2,7 +2,7 @@ # This is a YAML-formatted file. # Declare variables to be passed into your templates. -# MCP-Optimizer MCPServer configuration +# MCP Optimizer MCPServer configuration mcpserver: # Name of the MCPServer resource name: mcp-optimizer diff --git a/specs/001-fix-remote-workload-matching/quickstart.md b/specs/001-fix-remote-workload-matching/quickstart.md index f4c5d15..8be1aef 100644 --- a/specs/001-fix-remote-workload-matching/quickstart.md +++ b/specs/001-fix-remote-workload-matching/quickstart.md @@ -10,7 +10,7 @@ This quickstart guide helps you test and validate the remote workload matching f ## Prerequisites - ToolHive running (Docker or Kubernetes mode) -- MCP-Optimizer development environment set up +- MCP Optimizer development environment set up - Access to ToolHive API (default: localhost:8080) ## Test Scenario 1: Remote Workload with Custom Name diff --git a/specs/001-fix-remote-workload-matching/spec.md b/specs/001-fix-remote-workload-matching/spec.md index e9ed16a..e1c64c6 100644 --- a/specs/001-fix-remote-workload-matching/spec.md +++ b/specs/001-fix-remote-workload-matching/spec.md @@ -9,7 +9,7 @@ ### User Story 1 - Stable Remote Workload Ingestion (Priority: P1) -When a remote MCP workload is running in ToolHive, the MCP-Optimizer system should correctly match it to its registry entry based on URL rather than name, preventing unintended deletion of correctly deployed workloads. +When a remote MCP workload is running in ToolHive, the MCP Optimizer system should correctly match it to its registry entry based on URL rather than name, preventing unintended deletion of correctly deployed workloads. **Why this priority**: This is the core bug fix that prevents data loss and ensures system stability. Without this fix, remote workloads cannot be reliably used in production. 
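The core of the fix this spec describes is the matching rule itself: key remote workloads by URL, not by name. A minimal sketch with illustrative field names (not the actual mcp-optimizer models):

```python
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    url: str | None  # set for remote workloads


@dataclass
class RegistryEntry:
    name: str
    url: str | None


def matches(workload: Workload, entry: RegistryEntry) -> bool:
    """Match remote workloads by URL; fall back to name for local ones."""
    if workload.url and entry.url:
        # URL comparison keeps a renamed remote workload matched to its
        # registry entry instead of flagging it for deletion.
        return workload.url.rstrip("/") == entry.url.rstrip("/")
    return workload.name == entry.name
```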
diff --git a/src/mcp_optimizer/db/exceptions.py b/src/mcp_optimizer/db/exceptions.py index d446e9f..5ddb1d3 100644 --- a/src/mcp_optimizer/db/exceptions.py +++ b/src/mcp_optimizer/db/exceptions.py @@ -1,4 +1,4 @@ -"""Database exceptions for MCP-Optimizer.""" +"""Database exceptions for MCP Optimizer.""" from mcp_optimizer.db.models import RegistryServer diff --git a/src/mcp_optimizer/polling_manager.py b/src/mcp_optimizer/polling_manager.py index 0bb9ec4..3bad3b9 100644 --- a/src/mcp_optimizer/polling_manager.py +++ b/src/mcp_optimizer/polling_manager.py @@ -2,7 +2,7 @@ Polling manager for periodic MCP server discovery and synchronization. This module provides functionality to periodically poll ToolHive for workload changes, -detect server status changes, and synchronize the MCP-Optimizer database accordingly. +detect server status changes, and synchronize the MCP Optimizer database accordingly. """ import asyncio diff --git a/src/mcp_optimizer/server.py b/src/mcp_optimizer/server.py index 05d6ac0..348cac9 100644 --- a/src/mcp_optimizer/server.py +++ b/src/mcp_optimizer/server.py @@ -32,7 +32,7 @@ class McpOptimizerError(Exception): - """Base exception class for MCP-Optimizer errors.""" + """Base exception class for MCP Optimizer errors.""" pass
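As context for the `polling_manager` docstring above, the behavior it describes follows a standard asyncio polling pattern. An illustrative sketch only, with hypothetical callables standing in for the real ToolHive client and database layer:

```python
import asyncio


async def poll_workloads(fetch_workloads, sync_database, interval: float = 30.0):
    """Periodically pull workloads from ToolHive and sync the database.

    `fetch_workloads` and `sync_database` are hypothetical callables; the
    real module layers connection retries and cleanup on top of this loop.
    """
    while True:
        workloads = await fetch_workloads()   # current ToolHive state
        await sync_database(workloads)        # add/update/remove servers
        await asyncio.sleep(interval)
```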