Merged
`docs/openapi.md` — 96 changes: 56 additions & 40 deletions
@@ -14,46 +14,62 @@ Lightspeed Core Service (LCS) service API specification.

## List of REST API endpoints

| Method | Path | Description |
|--------|-------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|
| GET | `/` | Returns the static HTML index page |
| GET | `/v1/info` | Returns the service name, version and Llama-stack version |
| GET | `/v1/models` | List of available models |
| GET | `/v1/tools` | Consolidated list of available tools from all configured MCP servers |
| GET | `/v1/mcp-auth/client-options` | List of MCP servers configured to accept client-provided authorization tokens, along with the header names where clients should provide these tokens |
| GET | `/v1/mcp-servers` | List all registered MCP servers |
| POST | `/v1/mcp-servers` | Register an MCP server dynamically at runtime |
| DELETE | `/v1/mcp-servers/{name}` | Unregister a dynamically registered MCP server |
| GET | `/v1/shields` | List of available shields from the Llama Stack service |
| GET | `/v1/providers` | List all available providers grouped by API type |
| GET | `/v1/providers/{provider_id}` | Retrieve a single provider identified by its unique ID |
| GET | `/v1/prompts/` | List prompts |
| POST | `/v1/prompts/` | Create prompt |
Comment on lines +30 to +31 (Contributor):
⚠️ Potential issue | 🟠 Major

Use canonical prompt paths without a trailing slash.

The table lists `/v1/prompts/`, but the implemented endpoint is `/v1/prompts` (`src/app/endpoints/prompts.py:139-165`). Please document the canonical path to avoid redirect-dependent client behavior, especially for `POST`.

Proposed doc fix:

    -| GET    | `/v1/prompts/` | List prompts  |
    -| POST   | `/v1/prompts/` | Create prompt |
    +| GET    | `/v1/prompts`  | List prompts  |
    +| POST   | `/v1/prompts`  | Create prompt |

| GET | `/v1/prompts/{prompt_id}` | Get prompt |
| PUT | `/v1/prompts/{prompt_id}` | Update prompt |
| DELETE | `/v1/prompts/{prompt_id}` | Delete prompt |
| GET | `/v1/rags` | List all available RAGs |
| GET | `/v1/rags/{rag_id}` | Retrieve a single RAG identified by its unique ID |
| GET | `/v1/vector-stores` | List Vector Stores |
| POST | `/v1/vector-stores` | Create Vector Store |
| GET | `/v1/vector-stores/{vector_store_id}` | Get Vector Store |
| PUT | `/v1/vector-stores/{vector_store_id}` | Update Vector Store |
| DELETE | `/v1/vector-stores/{vector_store_id}` | Delete Vector Store |
| POST | `/v1/files` | Create File |
| POST | `/v1/vector-stores/{vector_store_id}/files` | Add File To Vector Store |
| GET | `/v1/vector-stores/{vector_store_id}/files` | List Vector Store Files |
| GET | `/v1/vector-stores/{vector_store_id}/files/{file_id}` | Get Vector Store File |
| DELETE | `/v1/vector-stores/{vector_store_id}/files/{file_id}` | Delete Vector Store File |
| POST | `/v1/query` | Processes a POST request to a query endpoint, forwarding the user's query to a selected Llama Stack LLM and returning the generated response |
| POST | `/v1/streaming_query` | Streaming response using Server-Sent Events (SSE) format with content type text/event-stream |
| POST   | `/v1/streaming_query/interrupt`                       | Interrupt an in-progress streaming query                                                                                                             |
| GET | `/v1/config` | Returns the current service configuration |
| POST | `/v1/feedback` | Processes a user feedback submission, storing the feedback and returning a confirmation response |
| GET | `/v1/feedback/status` | Return the current enabled status of the feedback functionality |
| PUT | `/v1/feedback/status` | Change the feedback status: enables or disables it |
| GET | `/v1/conversations` | Retrieve all conversations for the authenticated user |
| GET | `/v1/conversations/{conversation_id}` | Retrieve a conversation by ID using Conversations API |
| DELETE | `/v1/conversations/{conversation_id}` | Delete a conversation by ID using Conversations API |
| PUT | `/v1/conversations/{conversation_id}` | Update a conversation metadata using Conversations API |
| GET | `/v2/conversations` | Retrieve all conversations for the authenticated user |
| GET | `/v2/conversations/{conversation_id}` | Retrieve a conversation identified by its ID |
| DELETE | `/v2/conversations/{conversation_id}` | Delete a conversation identified by its ID |
| PUT | `/v2/conversations/{conversation_id}` | Update a conversation topic summary by ID |
| POST | `/v1/infer` | Serves requests from the RHEL Lightspeed Command Line Assistant (CLA) |
| POST | `/v1/responses` | Handle request to the /responses endpoint using Responses API (LCORE specification) |
| GET | `/readiness` | Returns service readiness state |
| GET | `/liveness` | Returns liveness status of the service |
| POST | `/authorized` | Returns the authenticated user's ID and username |
| GET    | `/metrics`                                            | Returns the latest Prometheus metrics as plain text                                                                                                  |
| GET | `/.well-known/agent-card.json` | Serve the A2A Agent Card at the well-known location |
| GET | `/.well-known/agent.json` | Handle A2A JSON-RPC requests following the A2A protocol specification |
| GET | `/a2a` | Handle A2A JSON-RPC requests following the A2A protocol specification |
| POST | `/a2a` | Handle A2A JSON-RPC requests following the A2A protocol specification |
| GET | `/a2a/health` | Handle A2A JSON-RPC requests following the A2A protocol specification |

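`POST /v1/streaming_query` answers with `text/event-stream`. A minimal, dependency-free sketch of the client-side SSE framing — the JSON shape inside each `data:` field is assumed for illustration, not taken from this spec:

```python
from typing import Iterator


def parse_sse(stream: Iterator[bytes]) -> Iterator[str]:
    """Yield the data payload of each Server-Sent Event.

    SSE frames are newline-delimited lines; a blank line ends an event.
    """
    data_lines: list[str] = []
    for raw in stream:
        line = raw.decode("utf-8").rstrip("\n").rstrip("\r")
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            yield "\n".join(data_lines)
            data_lines = []


# Example over a canned stream (the token payloads are purely illustrative):
events = list(parse_sse(iter([
    b'data: {"token": "Hel"}\n', b"\n",
    b'data: {"token": "lo"}\n', b"\n",
])))
```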

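Following up on the review note above about `/v1/prompts/` versus `/v1/prompts`: until the docs and routes agree, clients can avoid depending on a server-side redirect (which some HTTP stacks handle poorly for `POST` bodies) by normalizing paths themselves. A tiny, purely illustrative helper:

```python
def canonical_path(path: str) -> str:
    """Drop any trailing slash (except for the root path) so requests
    hit the canonical route directly rather than via a redirect."""
    if path == "/":
        return path
    return path.rstrip("/")
```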
## GET `/`