Merged
8 changes: 4 additions & 4 deletions 1_developer/_2_rest/index.md
@@ -57,14 +57,14 @@ The following endpoints are available in LM Studio's v1 REST API.
</table>

## Inference endpoint comparison
- The table below compares the features of LM Studio's `api/v1/chat` endpoint with the OpenAI-compatible `v1/responses` and `v1/chat/completions` endpoints.
+ The table below compares the features of LM Studio's `/api/v1/chat` endpoint with the OpenAI-compatible `/v1/responses` and `/v1/chat/completions` endpoints.
<table class="flexible-cols">
<thead>
<tr>
<th>Feature</th>
- <th><code>api/v1/chat</code></th>
- <th><code>v1/responses</code></th>
- <th><code>v1/chat/completions</code></th>
+ <th><code>/api/v1/chat</code></th>
+ <th><code>/v1/responses</code></th>
+ <th><code>/v1/chat/completions</code></th>
</tr>
</thead>
<tbody>
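The hunk above normalizes the documented endpoint paths to carry a leading slash. A minimal sketch of that convention (the helper name is hypothetical, not part of LM Studio's API):

```python
def normalize_endpoint(path: str) -> str:
    """Return an endpoint path written with exactly one leading slash."""
    return "/" + path.lstrip("/")

# Paths mentioned in the diff, before and after normalization
for raw in ("api/v1/chat", "v1/responses", "/v1/chat/completions"):
    print(normalize_endpoint(raw))
```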
2 changes: 1 addition & 1 deletion 1_developer/_2_rest/quickstart.md
@@ -91,7 +91,7 @@ See the full [chat](/docs/developer/rest/chat) docs for more details.
## Use MCP servers via API


- Enable the model interact with ephemeral Model Context Protocol (MCP) servers in `api/v1/chat` by specifying servers in the `integrations` field.
+ Enable the model to interact with ephemeral Model Context Protocol (MCP) servers in `/api/v1/chat` by specifying servers in the `integrations` field.

```lms_code_snippet
variants:
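The paragraph above says MCP servers are declared in the `integrations` field of a `/api/v1/chat` request. A rough sketch of such a request body follows; the model name, server URL, and the keys inside each `integrations` entry (`type`, `url`) are illustrative assumptions, not the documented schema:

```python
import json

# Hypothetical request body for POST /api/v1/chat declaring an
# ephemeral MCP server via the `integrations` field.
payload = {
    "model": "my-local-model",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "What tools do you have available?"}
    ],
    "integrations": [
        # Entry shape is an assumption for illustration only
        {"type": "mcp", "url": "http://localhost:8000/mcp"}
    ],
}
print(json.dumps(payload, indent=2))
```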
2 changes: 1 addition & 1 deletion 1_developer/_2_rest/streaming-events.md
@@ -6,7 +6,7 @@ index: 4

Streaming events let you render chat responses incrementally over Server‑Sent Events (SSE). When you call `POST /api/v1/chat` with `stream: true`, the server emits a series of named events that you can consume. These events arrive in order and may include multiple deltas (for reasoning and message content), tool call boundaries and payloads, and any errors encountered. The stream always begins with `chat.start` and concludes with `chat.end`, which contains the aggregated result equivalent to a non‑streaming response.

- List of event types that can be sent in an `api/v1/chat` response stream:
+ List of event types that can be sent in an `/api/v1/chat` response stream:
- `chat.start`
- `model_load.start`
- `model_load.progress`
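The streaming-events page describes named SSE events arriving in order, starting with `chat.start` and ending with `chat.end`. A minimal sketch of parsing such a stream, assuming each event is an `event:` line followed by a `data:` line and a blank-line separator (here applied to a complete text blob; a real client would parse incrementally over HTTP):

```python
import json

def parse_sse(stream_text: str):
    """Parse a Server-Sent Events payload into (event_name, data) pairs."""
    events = []
    event_name, data_lines = None, []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and event_name is not None:
            # Blank line terminates one event; decode its JSON data if present
            data = "\n".join(data_lines)
            events.append((event_name, json.loads(data) if data else None))
            event_name, data_lines = None, []
    return events

# Sample stream with hypothetical data payloads
sample = (
    "event: chat.start\n"
    'data: {"id": "demo"}\n'
    "\n"
    "event: chat.end\n"
    'data: {"id": "demo"}\n'
    "\n"
)
print(parse_sse(sample))
```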