1 change: 1 addition & 0 deletions README.md
@@ -21,6 +21,7 @@ An [MCP](https://modelcontextprotocol.io/) server implementation of Couchbase th
- Upsert a document by ID to a specified scope and collection
- Delete a document by ID from a specified scope and collection
- Run a [SQL++ query](https://www.couchbase.com/sqlplusplus/) on a specified scope
- Queries are automatically scoped to the specified bucket and scope, so use collection names directly (e.g., use `SELECT * FROM users` instead of `SELECT * FROM bucket.scope.users`)
- There is an option in the MCP server, `CB_MCP_READ_ONLY_QUERY_MODE`, set to `true` by default, that disables SQL++ queries which change the data or the underlying collection structure. Note that documents can still be updated by ID.
- Get the status of the MCP server
- Check the cluster credentials by connecting to the cluster
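The read-only guard described above can be illustrated with a short sketch. The helper below is hypothetical, not the server's actual implementation: it shows one way a `CB_MCP_READ_ONLY_QUERY_MODE`-style check might reject SQL++ statements that mutate data or collection structure.

```python
import re

# Hypothetical sketch: SQL++ statement keywords that mutate data or
# collection structure. The MCP server's real check may differ.
_MUTATING_KEYWORDS = re.compile(
    r"^\s*(INSERT|UPSERT|UPDATE|DELETE|MERGE|CREATE|DROP|ALTER)\b",
    re.IGNORECASE,
)


def is_query_allowed(query: str, read_only: bool = True) -> bool:
    """Return False for statements that modify data when read-only mode is on."""
    if not read_only:
        return True
    return _MUTATING_KEYWORDS.match(query) is None
```

Keyword matching like this is a coarse filter; a production implementation would more likely inspect the parsed statement type returned by the SDK or server.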
10 changes: 9 additions & 1 deletion src/tools/query.py
@@ -39,7 +39,15 @@ def get_schema_for_collection(
def run_sql_plus_plus_query(
ctx: Context, bucket_name: str, scope_name: str, query: str
) -> list[dict[str, Any]]:
"""Run a SQL++ query on a scope and return the results as a list of JSON objects."""
"""Run a SQL++ query on a scope and return the results as a list of JSON objects.

The query will be run on the specified scope in the specified bucket.
The query should use collection names directly without bucket/scope prefixes, as the scope context is automatically set.

Example:
query = "SELECT * FROM users WHERE age > 18"
# Incorrect: "SELECT * FROM bucket.scope.users WHERE age > 18"
"""
cluster = get_cluster_connection(ctx)

bucket = connect_to_bucket(cluster, bucket_name)
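Because the query executes in a scope context, collection references need no bucket/scope prefix, as the new docstring notes. As a hedged illustration (not part of this PR, and the function name and regex are assumptions for demonstration), a small helper could normalize fully-qualified references down to bare collection names before execution:

```python
import re


def strip_scope_prefix(query: str, bucket_name: str, scope_name: str) -> str:
    """Rewrite `bucket`.`scope`.collection references to bare collection
    names, since the scope context is already set when the query runs.

    Hypothetical helper for illustration; not part of the MCP server.
    """
    pattern = re.compile(
        rf"`?{re.escape(bucket_name)}`?\.`?{re.escape(scope_name)}`?\.",
        re.IGNORECASE,
    )
    return pattern.sub("", query)
```

For example, `strip_scope_prefix("SELECT * FROM travel-sample.inventory.users", "travel-sample", "inventory")` yields the scoped form `"SELECT * FROM users"` that the docstring recommends.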