From 5e7c1570770887aa0aaaf7c040091d14920118d2 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Wed, 10 Sep 2025 19:03:27 +0200 Subject: [PATCH 01/10] review of all chat pages except reference --- learn/chat/conversational_search.mdx | 100 +++----------- learn/chat/getting_started_with_chat.mdx | 169 ++++++++++++++--------- reference/api/chats.mdx | 12 +- 3 files changed, 128 insertions(+), 153 deletions(-) diff --git a/learn/chat/conversational_search.mdx b/learn/chat/conversational_search.mdx index 773cc2a224..ae33ad183c 100644 --- a/learn/chat/conversational_search.mdx +++ b/learn/chat/conversational_search.mdx @@ -1,34 +1,25 @@ --- -title: Conversational search -sidebarTitle: Conversational search -description: Learn how to implement AI-powered conversational search using Meilisearch's chat feature +title: What is conversational search? +description: Conversational search is an AI-powered feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes --- -Meilisearch's chat completions feature enables AI-powered conversational search, allowing users to ask questions in natural language and receive direct answers based on your indexed content. This feature transforms the traditional search experience into an interactive dialogue. +## What is conversational search? - -This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it: +In conversational search interfaces, users ask questions in everyday language instead of using keywords, and receive complete answers rather than links to articles. -```sh -curl \ - -X PATCH 'MEILISEARCH_URL/experimental-features/' \ - -H 'Content-Type: application/json' \ - --data-binary '{ - "chatCompletions": true - }' -``` - +## When to use chat completions vs traditional search -## What is conversational search? +Use conversational search when: -Conversational search interfaces allow users to: +- Users need easy-to-read answers to specific questions +- You are handling informational-dense content, such as software documentation and knowledge bases +- Natural language interaction improves user experience -- Ask questions in natural language instead of using keywords -- Receive direct answers rather than just document links -- Maintain context across multiple questions -- Get responses grounded in your actual content +Use traditional search when: -This approach bridges the gap between traditional search and modern AI experiences, making information more accessible and intuitive to find. +- Users need to browse multiple options, such as an ecommerce website +- Approximative answers are not acceptable +- Your users need very quick responses ## How chat completions differs from traditional search @@ -43,73 +34,24 @@ This approach bridges the gap between traditional search and modern AI experienc 1. User asks a question in natural language 2. Meilisearch retrieves relevant documents 3. AI generates a direct answer based on those documents -4. 
User can ask follow-up questions - -## When to use chat completions vs traditional search - -### Use conversational search when: -- Users need direct answers to specific questions -- Content is informational (documentation, knowledge bases, FAQs) -- Users benefit from follow-up questions -- Natural language interaction improves user experience +## Implementation strategies -### Use traditional search when: +### Retrieval Augmented Generation (RAG) -- Users need to browse multiple options -- Results require comparison (e-commerce products, listings) -- Exact matching is critical -- Response time is paramount +In the majority of cases, you should use the [`/chats` route](/reference/api/chats) to build a Retrieval Augmented Generation (RAG) pipeline. RAGs excel when working with unstructured data and emphasise high-quality responses. -## Use chat completions to implement RAG pipelines - -The chat completions feature implements a complete Retrieval Augmented Generation (RAG) pipeline in a single API endpoint. Meilisearch's chat completions consolidates RAG creation into one streamlined process: +Meilisearch's chat completions consolidates RAG creation into a single process: 1. **Query understanding**: automatically transforms questions into search parameters 2. **Hybrid retrieval**: combines keyword and semantic search for better relevancy 3. **Answer generation**: uses your chosen LLM to generate responses 4. **Context management**: maintains conversation history by constantly pushing the full conversation to the dedicated tool -### Alternative: MCP integration - -When integrating Meilisearch with AI assistants and automation tools, consider using [Meilisearch's Model Context Protocol (MCP) server](/guides/ai/mcp). MCP enables standardized tool integration across various AI platforms and applications. - -## Architecture overview - -Chat completions operate through workspaces, which are isolated configurations for different use cases. Each workspace can: - -- Use different LLM sources (openAi, azureOpenAi, mistral, gemini, vLlm) -- Apply custom prompts -- Access specific indexes based on API keys -- Maintain separate conversation contexts - -### Key components - -1. **Chat endpoint**: `/chats/{workspace}/chat/completions` - - OpenAI-compatible interface - - Supports streaming responses - - Handles tool calling for index searches - -2. **Workspace settings**: `/chats/{workspace}/settings` - - Configure LLM provider and model - - Set system prompts - - Manage API credentials - -3. **Index integration**: - - Automatically searches relevant indexes - - Uses existing Meilisearch search capabilities - - Respects API key permissions - -## Security considerations - -The chat completions feature integrates with Meilisearch's existing security model: +Follow the [chat completions tutorial](/learn/chat/getting_started_with_chat) for information on how to implement a RAG with Meilisearch. -- **API key permissions**: chat only accesses indexes visible to the provided API key -- **Tenant tokens**: support for multi-tenant applications -- **LLM credentials**: stored securely in workspace settings -- **Content isolation**: responses based only on indexed content +### Model Context Protocol (MCP) -## Next steps +An alternative method is using a Model Context Protocol (MCP) server. MCPs are designed for broader uses that go beyond answering questions, but can be useful in contexts where having up-to-date data is more important than comprehensive answers. 
-- [Get started with chat completions implementation](/learn/chat/getting_started_with_chat) -- [Explore the chat completions API reference](/reference/api/chats) +Follow the [dedicated MCP guide](/guides/ai/mcp) if you want to implement it in your application. diff --git a/learn/chat/getting_started_with_chat.mdx b/learn/chat/getting_started_with_chat.mdx index 51c5960a69..b011fb7bf2 100644 --- a/learn/chat/getting_started_with_chat.mdx +++ b/learn/chat/getting_started_with_chat.mdx @@ -1,10 +1,33 @@ --- title: Getting started with conversational search -sidebarTitle: Getting started with chat -description: Learn how to implement AI-powered conversational search in your application +description: This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application --- -This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application. +Chat completions have three key components: index integration, workspace configuration, and the chat interface. + +operate through workspaces, which are isolated configurations for different use cases. Each workspace can: + +- Use different LLM sources (openAi, azureOpenAi, mistral, gemini, vLlm) +- Apply custom prompts +- Access specific indexes based on API keys +- Maintain separate conversation contexts + +### Key components + +1. **Chat endpoint**: `/chats/{workspace}/chat/completions` + - OpenAI-compatible interface + - Supports streaming responses + - Handles tool calling for index searches + +2. **Workspace settings**: `/chats/{workspace}/settings` + - Configure LLM provider and model + - Set system prompts + - Manage API credentials + +3. **Index integration**: + - Automatically searches relevant indexes + - Uses existing Meilisearch search capabilities + - Respects API key permissions ## Prerequisites @@ -14,7 +37,9 @@ Before starting, ensure you have: - An API key from an LLM provider - At least one index with searchable content -## Enable the chat completions feature +## Setup + +### Enable the chat completions feature First, enable the chat completions experimental feature: @@ -28,7 +53,7 @@ curl \ }' ``` -## Find your chat API key +### Find your chat API key When Meilisearch runs with a master key on an instance created after v1.15.1, it automatically generates a "Default Chat API Key" with `chatCompletions` and `search` permissions on all indexes. Check if you have the key using: @@ -39,7 +64,7 @@ curl http://localhost:7700/keys \ Look for the key with the description "Default Chat API Key" Use this key when querying the `/chats` endpoint. -### Troubleshooting: Missing default chat API key +#### Troubleshooting: Missing default chat API key If your instance does not have a Default Chat API Key, create one manually: @@ -59,42 +84,44 @@ curl \ ## Configure your indexes for chat -Each index that you want to be searchable through chat needs specific configuration: +After activating the `/chats` route and obtaining an API key with chat access, you must configure the indexes your conversational interface has access to. 
+ +Configure the `chat` settings for each index you want to be searchable via chat UI: ```bash curl \ - -X PATCH 'http://localhost:7700/indexes/movies/settings' \ + -X PATCH 'http://localhost:7700/indexes/INDEX_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "chat": { - "description": "A comprehensive movie database containing titles, descriptions, genres, and release dates to help users find movies", + "description": "A comprehensive database of TYPE_OF_DOCUMENT containing titles, descriptions, genres, and release dates to help users searching for TYPE_OF_DOCUMENT", "documentTemplate": "{% for field in fields %}{% if field.is_searchable and field.value != nil %}{{ field.name }}: {{ field.value }}\n{% endif %}{% endfor %}", - "documentTemplateMaxBytes": 400, - "searchParameters": {} + "documentTemplateMaxBytes": 400 } }' ``` - -The `description` field helps the LLM understand what data is in the index, improving search relevance. - +- `description` gives the initial context of the conversation to the LLM. A good description improves relevance of the chat's answers +- `documentTemplate` defines which document fields Meilisearch will send the AI provider. Consult the best [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance ## Configure a chat completions workspace -Create a workspace with your LLM provider settings. Here are examples for different providers: +The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. For example, you may have one workspace for publicly visible data, and another for data only available for logged in users. + +Create a workspace setting your LLM provider as its `source`: ```bash openAi curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "openAi", - "apiKey": "sk-abc...", - "baseUrl": "https://api.openai.com/v1", + "apiKey": "PROVIDER_API_KEY", + "baseUrl": "PROVIDER_API_URL", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } @@ -103,13 +130,13 @@ curl \ ```bash azureOpenAi curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "azureOpenAi", - "apiKey": "your-azure-key", - "baseUrl": "https://your-resource.openai.azure.com", + "apiKey": "PROVIDER_API_KEY", + "baseUrl": "PROVIDER_API_URL", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } @@ -118,12 +145,12 @@ curl \ ```bash mistral curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "mistral", - "apiKey": "your-mistral-key", + "apiKey": "PROVIDER_API_KEY", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." 
} @@ -132,12 +159,12 @@ curl \ ```bash gemini curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "gemini", - "apiKey": "your-gemini-key", + "apiKey": "PROVIDER_API_KEY", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } @@ -146,12 +173,12 @@ curl \ ```bash vLlm curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "vLlm", - "baseUrl": "http://localhost:8000", + "baseUrl": "PROVIDER_API_URL", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } @@ -160,24 +187,29 @@ curl \ +Which fields are mandatory will depend on your chosen provider `source`. In most cases, you will have to provide an `apiKey` to access the provider. + +`baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface. + +`prompts.system` gives the conversational search bot the baseline context of your users and their questions. + ## Send your first chat completions request -Now you can start a conversation. Note the `-N` flag for handling streaming responses: +You have finished configuring your conversational search agent. Use `curl` in your terminal to confirm everything is working. Sending a streaming query to the chat completions API route: ```bash curl -N \ - -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \ - -H 'Authorization: Bearer ' \ + -X POST 'http://localhost:7700/chats/WORKSPACE_NAME/chat/completions' \ + -H 'Authorization: Bearer MEILISEARCH_API_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ - "model": "gpt-3.5-turbo", + "model": "PROVIDER_MODEL_UID", "messages": [ { "role": "user", - "content": "What movies do you have about space exploration?" + "content": "USER_PROMPT" } ], - "stream": true, "tools": [ { "type": "function", @@ -197,14 +229,31 @@ curl -N \ }' ``` -Take particular note of the `tools` array. These settings are optional, but greatly improve user experience: +- `model` is mandatory and must indicate a model supported by your chosen `source` +- `messages` contains the messages exchanged between the conversational search agent and the user +- `tools` sets up two optional but highly [recommended tools](/learn/chat/chat_tooling_reference): + - `_meiliSearchProgress`: shows users what searches are being performed + - `_meiliSearchSources`: displays the actual documents used to generate responses + +If Meilisearch returns a stream of data containing the chat agent response, you have correctly configured Meilisearch for conversational search: + +```sh +data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":"Meilisearch"},"finish_reason":null}]} +``` + +If Meilisearch returns an error, consult the [troubleshooting section](#troubleshooting) to understand diagnose and fix the issues you encountered. + +## Next steps + +In this article, you have seen how to activate the chats completion route, prepare your indexes to serve as a base for your AI agent, and performed your first conversational search. 
-- **`_meiliSearchProgress`**: shows users what searches are being performed -- **`_meiliSearchSources`**: displays the actual documents used to generate responses +In most cases, that is only the beginning of adding conversational search to your application. Next, you are most likely going to want to add a graphical user interface to your application. -## Build a chat interface using the OpenAI SDK +### Building a chat interface using the OpenAI SDK -Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK: +Creating a full chat interface is out of scope for this tutorial, but here is one important tip. + +Meilisearch's chat endpoint was designed to be OpenAI-compatible. This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI: @@ -212,14 +261,13 @@ Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official import OpenAI from 'openai'; const client = new OpenAI({ - baseURL: 'http://localhost:7700/chats/my-assistant', - apiKey: 'YOUR_CHAT_API_KEY', + baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME', + apiKey: 'PROVIDER_API_KEY', }); const completion = await client.chat.completions.create({ - model: 'gpt-3.5-turbo', - messages: [{ role: 'user', content: 'What is Meilisearch?' }], - stream: true, + model: 'PROVIDER_MODEL_UID', + messages: [{ role: 'user', content: 'USER_PROMPT' }] }); for await (const chunk of completion) { @@ -231,14 +279,13 @@ for await (const chunk of completion) { from openai import OpenAI client = OpenAI( - base_url="http://localhost:7700/chats/my-assistant", + base_url="http://localhost:7700/chats/WORKSPACE_NAME", api_key="YOUR_CHAT_API_KEY" ) stream = client.chat.completions.create( model="gpt-3.5-turbo", - messages=[{"role": "user", "content": "What is Meilisearch?"}], - stream=True, + messages=[{"role": "user", "content": "USER_PROMPT"}] ) for chunk in stream: @@ -250,14 +297,13 @@ for chunk in stream: import OpenAI from 'openai'; const client = new OpenAI({ - baseURL: 'http://localhost:7700/chats/my-assistant', + baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME', apiKey: 'YOUR_CHAT_API_KEY', }); const stream = await client.chat.completions.create({ model: 'gpt-3.5-turbo', - messages: [{ role: 'user', content: 'What is Meilisearch?' }], - stream: true, + messages: [{ role: 'user', content: 'USER_PROMPT' }] }); for await (const chunk of stream) { @@ -270,7 +316,7 @@ for await (const chunk of stream) { ### Error handling -When using the OpenAI SDK with Meilisearch's chat completions endpoint, errors from the streamed responses are natively handled by OpenAI. This means you can use the SDK's built-in error handling mechanisms without additional configuration: +Use the OpenAI SDK's built-in functionality to handle errors without additional configuration: @@ -278,7 +324,7 @@ When using the OpenAI SDK with Meilisearch's chat completions endpoint, errors f import OpenAI from 'openai'; const client = new OpenAI({ - baseURL: 'http://localhost:7700/chats/my-assistant', + baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME', apiKey: 'MEILISEARCH_KEY', }); @@ -302,7 +348,7 @@ try { from openai import OpenAI client = OpenAI( - base_url="http://localhost:7700/chats/my-assistant", + base_url="http://localhost:7700/chats/WORKSPACE_NAME", api_key="MEILISEARCH_KEY" ) @@ -360,14 +406,14 @@ except Exception as error: 1. 
Check workspace configuration: ```bash - curl http://localhost:7700/chats/my-assistant/settings \ + curl http://localhost:7700/chats/WORKSPACE_NAME/settings \ -H "Authorization: Bearer MEILISEARCH_KEY" ``` 2. Update with valid API key: ```bash - curl -X PATCH http://localhost:7700/chats/my-assistant/settings \ + curl -X PATCH http://localhost:7700/chats/WORKSPACE_NAME/settings \ -H "Authorization: Bearer MEILISEARCH_KEY" \ -H "Content-Type: application/json" \ -d '{"apiKey": "your-valid-api-key"}' @@ -381,18 +427,3 @@ except Exception as error: - Include `_meiliSearchProgress` and `_meiliSearchSources` tools in your request - Ensure indexes have proper chat descriptions configured - -#### "stream: false is not supported" error - -**Cause:** Trying to use non-streaming responses - -**Solution:** - -- Always set `"stream": true` in your requests -- Non-streaming responses are not yet supported - -## Next steps - -- Explore [advanced chat API features](/reference/api/chats) -- Learn about [conversational search concepts](/learn/chat/conversational_search) -- Review [security best practices](/learn/security/basic_security) \ No newline at end of file diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index 7b11a158ed..e16e66f3cc 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -23,10 +23,12 @@ curl \ ## Authorization -When working with a secure Meilisearch instance, Use an API key with access to both the `search` and `chatCompletions` actions, such as the default chat API key. +When working with a secure Meilisearch instance, Use an API key with access to both the `search` and `chatCompletions` actions, such as the default chat API key. You may also use tenant tokens instead of an API key, provided you generate the tokens with access to the required actions. Chat queries only search indexes its API key can access. The default chat API key has access to all indexes. To limit chat access to specific indexes, you must either create a new key, or [generate a tenant token](/learn/security/generate_tenant_token_sdk) from the default chat API key. +LLM credentials used to querying your AI provider are stored securely in the chat workspace settings. + ## Chat workspace object ```json @@ -175,13 +177,13 @@ curl \ -### Update chat workspace settings +### Create a chat workspace and update chat workspace settings Configure the LLM provider and settings for a chat workspace. -If the workspace does not exist, querying this endpoint will create it. +If a workspace does not exist, querying this endpoint will create it. #### Path parameters @@ -368,10 +370,10 @@ Create a chat completion using Meilisearch's OpenAI-compatible interface. The en | :------------- | :------ | :------- | :--------------------------------------------------------------------------- | | **`model`** | String | Yes | Model to use and will be related to the source LLM in the workspace settings | | **`messages`** | Array | Yes | Array of message objects with `role` and `content` | -| **`stream`** | Boolean | No | Enable streaming responses (default: `true`) | +| **`stream`** | Boolean | No | Enable streaming responses. Must be `true` if specified | -Currently, only streaming responses (`stream: true`) are supported. +Meilisearch chat completions only supports streaming responses (`stream: true`). 
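+
+As a minimal, illustrative sketch only — the model name below is an example, and you should use any model supported by the provider configured in your workspace settings — a request body combining these parameters could look like this:
+
+```json
+{
+  "model": "gpt-3.5-turbo",
+  "messages": [
+    { "role": "user", "content": "What is Meilisearch?" }
+  ],
+  "stream": true
+}
+```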
### Message object From 0ad95a521ae70f15cafb4363d51c6f2729a1f772 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Wed, 10 Sep 2025 17:04:22 +0000 Subject: [PATCH 02/10] Update code samples [skip ci] --- .../code_samples_rename_an_index_1.mdx | 9 +++++ ...les_search_parameter_reference_media_1.mdx | 34 +++++++++++++++++++ .../code_samples_typo_tolerance_guide_5.mdx | 4 +++ 3 files changed, 47 insertions(+) create mode 100644 snippets/samples/code_samples_rename_an_index_1.mdx diff --git a/snippets/samples/code_samples_rename_an_index_1.mdx b/snippets/samples/code_samples_rename_an_index_1.mdx new file mode 100644 index 0000000000..c47acef515 --- /dev/null +++ b/snippets/samples/code_samples_rename_an_index_1.mdx @@ -0,0 +1,9 @@ + + +```bash cURL +curl \ + -X PATCH 'MEILISEARCH_URL/indexes/INDEX_A' \ + -H 'Content-Type: application/json' \ + --data-binary '{ "uid": "INDEX_B" }' +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_search_parameter_reference_media_1.mdx b/snippets/samples/code_samples_search_parameter_reference_media_1.mdx index 980280dd8c..0e6f821877 100644 --- a/snippets/samples/code_samples_search_parameter_reference_media_1.mdx +++ b/snippets/samples/code_samples_search_parameter_reference_media_1.mdx @@ -18,6 +18,40 @@ curl \ }' ``` +```javascript JS +client.index('INDEX_NAME').search('a futuristic movie', { + hybrid: { + embedder: 'EMBEDDER_NAME' + }, + media: { + textAndPoster: { + text: 'a futuristic movie', + image: { + mime: 'image/jpeg', + data: 'base64EncodedImageData' + } + } + } +}) +``` + +```php PHP +$client->index('INDEX_NAME')->search('a futuristic movie', [ + 'hybrid' => [ + 'embedder' => 'EMBEDDER_NAME' + ], + 'media' => [ + 'textAndPoster' => [ + 'text' => 'a futuristic movie', + 'image' => [ + 'mime' => 'image/jpeg', + 'data' => 'base64EncodedImageData' + ] + ] + ] +]); +``` + ```go Go client.Index("INDEX_NAME").Search("", &meilisearch.SearchRequest{ Hybrid: &meilisearch.SearchRequestHybrid{ diff --git a/snippets/samples/code_samples_typo_tolerance_guide_5.mdx b/snippets/samples/code_samples_typo_tolerance_guide_5.mdx index 55582c22bf..51bf8e70e6 100644 --- a/snippets/samples/code_samples_typo_tolerance_guide_5.mdx +++ b/snippets/samples/code_samples_typo_tolerance_guide_5.mdx @@ -27,6 +27,10 @@ $client->index('movies')->updateTypoTolerance([ ]); ``` +```ruby Ruby +index('books').update_typo_tolerance({ disable_on_numbers: true }) +``` + ```go Go client.Index("movies").UpdateTypoTolerance(&meilisearch.TypoTolerance{ DisableOnNumbers: true From fc1f58b6b9007d264cd8fe0ca8bb0b91465d4488 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Thu, 11 Sep 2025 16:05:19 +0200 Subject: [PATCH 03/10] partial update of `/chats` reference --- reference/api/chats.mdx | 29 ++++++++++++++++++++--------- 1 file changed, 20 insertions(+), 9 deletions(-) diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index e16e66f3cc..0d3bd4d493 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -118,27 +118,29 @@ Get information about a workshop. ```json { - "source": "openAi", + "source": "PROVIDER", "orgId": null, "projectId": null, "apiVersion": null, "deploymentId": null, "baseUrl": null, - "apiKey": "sk-abc...", + "apiKey": "PROVIDER_API_KEY", "prompts": { - "system": "You are a helpful assistant that answers questions based on the provided context." 
+ "system": "Description of the general search context" } } ``` #### The prompts object -| Name | Type | Description | -| :------------------------ | :----- | :---------------------------------------------------------------- | -| **`system`** | String | A prompt added to the start of the conversation to guide the LLM | -| **`searchDescription`** | String | A prompt to explain what the internal search function does | -| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it | -| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it | +```json +{ + "system": "Description of the general search context", + "searchDescription": "Description of internal search processes", + "searchQParam": "Description of expected input inside `q`", + "searchIndexUidParam": "Description of content inside each accessible index" +} +``` ### Get chat workspace settings @@ -204,6 +206,15 @@ If a workspace does not exist, querying this endpoint will create it. | **`apiKey`** | String | API key for the LLM provider (optional for vLlm) | | **`prompts`** | Object | Prompts object containing system prompts and other configuration | +##### Prompt parameters + +| Name | Type | Description | +| :------------------------ | :----- | :---------------------------------------------------------------- | +| **`system`** | String | A prompt added to the start of the conversation to guide the LLM | +| **`searchDescription`** | String | A prompt to explain what the internal search function does | +| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it | +| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it | + #### Request body ```json From dd301d6df13b4c714728ca5e2b2e4a7104e36e15 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Thu, 2 Oct 2025 17:07:36 +0200 Subject: [PATCH 04/10] improve explanation and tutorial --- learn/chat/conversational_search.mdx | 10 +- learn/chat/getting_started_with_chat.mdx | 156 ++++------------------- 2 files changed, 25 insertions(+), 141 deletions(-) diff --git a/learn/chat/conversational_search.mdx b/learn/chat/conversational_search.mdx index ae33ad183c..efa898979b 100644 --- a/learn/chat/conversational_search.mdx +++ b/learn/chat/conversational_search.mdx @@ -3,11 +3,7 @@ title: What is conversational search? description: Conversational search is an AI-powered feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes --- -## What is conversational search? - -In conversational search interfaces, users ask questions in everyday language instead of using keywords, and receive complete answers rather than links to articles. - -## When to use chat completions vs traditional search +## When to use conversational vs traditional search Use conversational search when: @@ -21,7 +17,7 @@ Use traditional search when: - Approximative answers are not acceptable - Your users need very quick responses -## How chat completions differs from traditional search +## How conversational search usage differs from traditional search ### Traditional search workflow @@ -41,7 +37,7 @@ Use traditional search when: In the majority of cases, you should use the [`/chats` route](/reference/api/chats) to build a Retrieval Augmented Generation (RAG) pipeline. 
RAGs excel when working with unstructured data and emphasise high-quality responses. -Meilisearch's chat completions consolidates RAG creation into a single process: +Meilisearch's chat completions API consolidates RAG creation into a single process: 1. **Query understanding**: automatically transforms questions into search parameters 2. **Hybrid retrieval**: combines keyword and semantic search for better relevancy diff --git a/learn/chat/getting_started_with_chat.mdx b/learn/chat/getting_started_with_chat.mdx index b011fb7bf2..49d114eb30 100644 --- a/learn/chat/getting_started_with_chat.mdx +++ b/learn/chat/getting_started_with_chat.mdx @@ -1,33 +1,9 @@ --- title: Getting started with conversational search -description: This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application +description: This article walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application. --- -Chat completions have three key components: index integration, workspace configuration, and the chat interface. - -operate through workspaces, which are isolated configurations for different use cases. Each workspace can: - -- Use different LLM sources (openAi, azureOpenAi, mistral, gemini, vLlm) -- Apply custom prompts -- Access specific indexes based on API keys -- Maintain separate conversation contexts - -### Key components - -1. **Chat endpoint**: `/chats/{workspace}/chat/completions` - - OpenAI-compatible interface - - Supports streaming responses - - Handles tool calling for index searches - -2. **Workspace settings**: `/chats/{workspace}/settings` - - Configure LLM provider and model - - Set system prompts - - Manage API credentials - -3. **Index integration**: - - Automatically searches relevant indexes - - Uses existing Meilisearch search capabilities - - Respects API key permissions +To successfully implement a conversational search interface you must follow three steps: configure indexes for chat usage, create chat workspaces targeting different use-cases, and building a chat interface. ## Prerequisites @@ -82,11 +58,9 @@ curl \ }' ``` -## Configure your indexes for chat +## Configure your indexes -After activating the `/chats` route and obtaining an API key with chat access, you must configure the indexes your conversational interface has access to. - -Configure the `chat` settings for each index you want to be searchable via chat UI: +After activating the `/chats` route and obtaining an API key with chat permissions, configure the `chat` settings for each index you want to be searchable via chat UI: ```bash curl \ @@ -103,17 +77,23 @@ curl \ ``` - `description` gives the initial context of the conversation to the LLM. A good description improves relevance of the chat's answers -- `documentTemplate` defines which document fields Meilisearch will send the AI provider. Consult the best [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance +- `documentTemplate` defines the document data Meilisearch sends to the AI provider. Consult the best [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance ## Configure a chat completions workspace -The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. 
For example, you may have one workspace for publicly visible data, and another for data only available for logged in users. +The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. Each workspace can: + +- Use different embedding providers (openAi, azureOpenAi, mistral, gemini, vLlm) +- Establish separate conversation contexts via baseline prompts +- Access a specific set of indexes + +For example, you may have one workspace for publicly visible data, and another for data only available for logged in users. Create a workspace setting your LLM provider as its `source`: -```bash openAi +```bash OpenAI curl \ -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ @@ -128,7 +108,7 @@ curl \ }' ``` -```bash azureOpenAi +```bash Azure OpenAI curl \ -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ @@ -143,7 +123,7 @@ curl \ }' ``` -```bash mistral +```bash Mistral curl \ -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ @@ -157,7 +137,7 @@ curl \ }' ``` -```bash gemini +```bash Gemini curl \ -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ @@ -171,7 +151,7 @@ curl \ }' ``` -```bash vLlm +```bash vLLM curl \ -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ @@ -191,11 +171,11 @@ Which fields are mandatory will depend on your chosen provider `source`. In most `baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface. -`prompts.system` gives the conversational search bot the baseline context of your users and their questions. +`prompts.system` gives the conversational search bot the baseline context of your users and their questions. The `prompts` object accepts a few other fields that provide more information to improve how the agent uses the information it finds via Meilisearch. In real-life scenarios filling these fields would improve the quality of conversational search results. ## Send your first chat completions request -You have finished configuring your conversational search agent. Use `curl` in your terminal to confirm everything is working. Sending a streaming query to the chat completions API route: +You have finished configuring your conversational search agent. To test everything is working as expected, send a streaming `curl` query to the chat completions API route: ```bash curl -N \ @@ -251,11 +231,9 @@ In most cases, that is only the beginning of adding conversational search to you ### Building a chat interface using the OpenAI SDK -Creating a full chat interface is out of scope for this tutorial, but here is one important tip. +Meilisearch's chat endpoint was designed to be OpenAI-compatible. This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI. -Meilisearch's chat endpoint was designed to be OpenAI-compatible. 
This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI: - - +Integrating Meiliearch and the OpenAI SDK with JavaScript would look lik this: ```javascript JavaScript import OpenAI from 'openai'; @@ -275,97 +253,7 @@ for await (const chunk of completion) { } ``` -```python Python -from openai import OpenAI - -client = OpenAI( - base_url="http://localhost:7700/chats/WORKSPACE_NAME", - api_key="YOUR_CHAT_API_KEY" -) - -stream = client.chat.completions.create( - model="gpt-3.5-turbo", - messages=[{"role": "user", "content": "USER_PROMPT"}] -) - -for chunk in stream: - if chunk.choices[0].delta.content is not None: - print(chunk.choices[0].delta.content, end="") -``` - -```typescript TypeScript -import OpenAI from 'openai'; - -const client = new OpenAI({ - baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME', - apiKey: 'YOUR_CHAT_API_KEY', -}); - -const stream = await client.chat.completions.create({ - model: 'gpt-3.5-turbo', - messages: [{ role: 'user', content: 'USER_PROMPT' }] -}); - -for await (const chunk of stream) { - const content = chunk.choices[0]?.delta?.content || ''; - process.stdout.write(content); -} -``` - - - -### Error handling - -Use the OpenAI SDK's built-in functionality to handle errors without additional configuration: - - - -```javascript JavaScript -import OpenAI from 'openai'; - -const client = new OpenAI({ - baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME', - apiKey: 'MEILISEARCH_KEY', -}); - -try { - const stream = await client.chat.completions.create({ - model: 'gpt-3.5-turbo', - messages: [{ role: 'user', content: 'What is Meilisearch?' }], - stream: true, - }); - - for await (const chunk of stream) { - console.log(chunk.choices[0]?.delta?.content || ''); - } -} catch (error) { - // OpenAI SDK automatically handles streaming errors - console.error('Chat completion error:', error); -} -``` - -```python Python -from openai import OpenAI - -client = OpenAI( - base_url="http://localhost:7700/chats/WORKSPACE_NAME", - api_key="MEILISEARCH_KEY" -) - -try: - stream = client.chat.completions.create( - model="gpt-3.5-turbo", - messages=[{"role": "user", "content": "What is Meilisearch?"}], - stream=True, - ) - - for chunk in stream: - if chunk.choices[0].delta.content is not None: - print(chunk.choices[0].delta.content, end="") -except Exception as error: - # OpenAI SDK automatically handles streaming errors - print(f"Chat completion error: {error}") -``` +Take particular note of the last lines, which output the streamed responses to the browser console. In a real-life application, you would instead print the response chunks to the user interface. 
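+
+As an illustrative sketch of that idea — assuming the page contains an element such as `<div id="chat-response"></div>`, that `client` is configured as in the example above, and that the request sets `stream: true`, which Meilisearch requires — the loop could append each chunk to the page instead:
+
+```javascript
+// Sketch only: render streamed chunks into the page rather than the console
+const output = document.getElementById('chat-response');
+
+const stream = await client.chat.completions.create({
+  model: 'PROVIDER_MODEL_UID',
+  messages: [{ role: 'user', content: 'USER_PROMPT' }],
+  stream: true,
+});
+
+for await (const chunk of stream) {
+  output.textContent += chunk.choices[0]?.delta?.content || '';
+}
+```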
From 6cf183425d3fdeb3eb7ee11be8b07ae3be9396f3 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Thu, 2 Oct 2025 15:08:17 +0000 Subject: [PATCH 05/10] Update code samples [skip ci] --- snippets/samples/code_samples_facet_search_3.mdx | 3 ++- .../code_samples_geosearch_guide_filter_usage_4.mdx | 9 +++++++++ .../samples/code_samples_get_vector_store_settings_1.mdx | 7 +++++++ .../code_samples_getting_started_add_documents.mdx | 6 +++--- ...s_primary_field_guide_update_document_primary_key.mdx | 4 +++- .../samples/code_samples_ranking_score_threshold.mdx | 8 ++++++++ snippets/samples/code_samples_rename_an_index_1.mdx | 7 +++++++ .../code_samples_reset_vector_store_settings_1.mdx | 7 +++++++ snippets/samples/code_samples_swap_indexes_2.mdx | 9 +++++++++ snippets/samples/code_samples_typo_tolerance_guide_5.mdx | 6 ++++++ snippets/samples/code_samples_update_an_index_1.mdx | 4 +++- snippets/samples/code_samples_update_an_index_2.mdx | 8 ++++++++ .../code_samples_update_vector_store_settings_1.mdx | 9 +++++++++ 13 files changed, 81 insertions(+), 6 deletions(-) create mode 100644 snippets/samples/code_samples_geosearch_guide_filter_usage_4.mdx create mode 100644 snippets/samples/code_samples_get_vector_store_settings_1.mdx create mode 100644 snippets/samples/code_samples_ranking_score_threshold.mdx create mode 100644 snippets/samples/code_samples_reset_vector_store_settings_1.mdx create mode 100644 snippets/samples/code_samples_swap_indexes_2.mdx create mode 100644 snippets/samples/code_samples_update_an_index_2.mdx create mode 100644 snippets/samples/code_samples_update_vector_store_settings_1.mdx diff --git a/snippets/samples/code_samples_facet_search_3.mdx b/snippets/samples/code_samples_facet_search_3.mdx index 70c2bcdc9d..64fee76d99 100644 --- a/snippets/samples/code_samples_facet_search_3.mdx +++ b/snippets/samples/code_samples_facet_search_3.mdx @@ -49,7 +49,8 @@ client.Index("books").FacetSearch(&meilisearch.FacetSearchRequest{ ```csharp C# var query = new SearchFacetsQuery() { - FacetQuery = "c" + FacetQuery = "c", + ExhaustiveFacetCount: true }; await client.Index("books").FacetSearchAsync("genres", query); ``` diff --git a/snippets/samples/code_samples_geosearch_guide_filter_usage_4.mdx b/snippets/samples/code_samples_geosearch_guide_filter_usage_4.mdx new file mode 100644 index 0000000000..629a301559 --- /dev/null +++ b/snippets/samples/code_samples_geosearch_guide_filter_usage_4.mdx @@ -0,0 +1,9 @@ + + +```bash cURL +curl \ + -X POST 'MEILISEARCH_URL/indexes/restaurants/search' \ + -H 'Content-type:application/json' \ + --data-binary '{ "filter": "_geoPolygon([45.494181, 9.214024], [45.449484, 9.179175], [45.449486, 9.179177])" }' +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_get_vector_store_settings_1.mdx b/snippets/samples/code_samples_get_vector_store_settings_1.mdx new file mode 100644 index 0000000000..58c324a6a3 --- /dev/null +++ b/snippets/samples/code_samples_get_vector_store_settings_1.mdx @@ -0,0 +1,7 @@ + + +```bash cURL +curl \ + -X GET 'MEILISEARCH_URL/indexes/INDEX_UID/settings/vector-store' +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_getting_started_add_documents.mdx b/snippets/samples/code_samples_getting_started_add_documents.mdx index 9a2a53b200..e3b2e68adb 100644 --- a/snippets/samples/code_samples_getting_started_add_documents.mdx +++ b/snippets/samples/code_samples_getting_started_add_documents.mdx @@ -78,14 +78,14 @@ $client->index('movies')->addDocuments($movies); // // 
com.meilisearch.sdk // meilisearch-java -// 0.15.0 +// 0.16.1 // pom // // For Gradle // Add the following line to the `dependencies` section of your `build.gradle`: // -// implementation 'com.meilisearch.sdk:meilisearch-java:0.15.0' +// implementation 'com.meilisearch.sdk:meilisearch-java:0.16.1' // In your .java file: import com.meilisearch.sdk; @@ -192,7 +192,7 @@ namespace Meilisearch_demo ```text Rust // In your .toml file: [dependencies] - meilisearch-sdk = "0.29.1" + meilisearch-sdk = "0.30.0" # futures: because we want to block on futures futures = "0.3" # serde: required if you are going to use documents diff --git a/snippets/samples/code_samples_primary_field_guide_update_document_primary_key.mdx b/snippets/samples/code_samples_primary_field_guide_update_document_primary_key.mdx index 9581144fe7..5b10f2f6d2 100644 --- a/snippets/samples/code_samples_primary_field_guide_update_document_primary_key.mdx +++ b/snippets/samples/code_samples_primary_field_guide_update_document_primary_key.mdx @@ -30,7 +30,9 @@ client.index('books').update(primary_key: 'title') ``` ```go Go -client.Index("books").UpdateIndex("title") +client.Index("books").UpdateIndex(&meilisearch.UpdateIndexRequestParams{ + PrimaryKey: "title", +}) ``` ```csharp C# diff --git a/snippets/samples/code_samples_ranking_score_threshold.mdx b/snippets/samples/code_samples_ranking_score_threshold.mdx new file mode 100644 index 0000000000..afa4dcc327 --- /dev/null +++ b/snippets/samples/code_samples_ranking_score_threshold.mdx @@ -0,0 +1,8 @@ + + +```dart Dart +await client + .index('INDEX_NAME') + .search('badman', SearchQuery(rankingScoreThreshold: 0.2)); +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_rename_an_index_1.mdx b/snippets/samples/code_samples_rename_an_index_1.mdx index c47acef515..cb2efca970 100644 --- a/snippets/samples/code_samples_rename_an_index_1.mdx +++ b/snippets/samples/code_samples_rename_an_index_1.mdx @@ -6,4 +6,11 @@ curl \ -H 'Content-Type: application/json' \ --data-binary '{ "uid": "INDEX_B" }' ``` + +```rust Rust +curl \ + -X PATCH 'MEILISEARCH_URL/indexes/INDEX_A' \ + -H 'Content-Type: application/json' \ + --data-binary '{ "uid": "INDEX_B" }' +``` \ No newline at end of file diff --git a/snippets/samples/code_samples_reset_vector_store_settings_1.mdx b/snippets/samples/code_samples_reset_vector_store_settings_1.mdx new file mode 100644 index 0000000000..77213c8703 --- /dev/null +++ b/snippets/samples/code_samples_reset_vector_store_settings_1.mdx @@ -0,0 +1,7 @@ + + +```bash cURL +curl \ + -X DELETE 'MEILISEARCH_URL/indexes/INDEX_UID/settings/vector-store' +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_swap_indexes_2.mdx b/snippets/samples/code_samples_swap_indexes_2.mdx new file mode 100644 index 0000000000..402cc310f8 --- /dev/null +++ b/snippets/samples/code_samples_swap_indexes_2.mdx @@ -0,0 +1,9 @@ + + +```go Go +client.SwapIndexes([]SwapIndexesParams{ + {Indexes: []string{"indexA", "indexB"}, Rename: true}, + {Indexes: []string{"indexX", "indexY"}, Rename: true}, +}) +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_typo_tolerance_guide_5.mdx b/snippets/samples/code_samples_typo_tolerance_guide_5.mdx index 51bf8e70e6..be212b7fd6 100644 --- a/snippets/samples/code_samples_typo_tolerance_guide_5.mdx +++ b/snippets/samples/code_samples_typo_tolerance_guide_5.mdx @@ -27,6 +27,12 @@ $client->index('movies')->updateTypoTolerance([ ]); ``` +```java Java +TypoTolerance typoTolerance = new TypoTolerance(); 
+typoTolerance.setDisableOnNumbers(true); +client.index("movies").updateTypoToleranceSettings(typoTolerance); +``` + ```ruby Ruby index('books').update_typo_tolerance({ disable_on_numbers: true }) ``` diff --git a/snippets/samples/code_samples_update_an_index_1.mdx b/snippets/samples/code_samples_update_an_index_1.mdx index d962419a7a..54a2eccf40 100644 --- a/snippets/samples/code_samples_update_an_index_1.mdx +++ b/snippets/samples/code_samples_update_an_index_1.mdx @@ -28,7 +28,9 @@ client.index('movies').update(primary_key: 'movie_id') ``` ```go Go -client.Index("movies").UpdateIndex("id") +client.Index("movies").UpdateIndex(&meilisearch.UpdateIndexRequestParams{ + PrimaryKey: "id", +}) ``` ```csharp C# diff --git a/snippets/samples/code_samples_update_an_index_2.mdx b/snippets/samples/code_samples_update_an_index_2.mdx new file mode 100644 index 0000000000..af5eddaf0e --- /dev/null +++ b/snippets/samples/code_samples_update_an_index_2.mdx @@ -0,0 +1,8 @@ + + +```go Go +client.Index("movies").UpdateIndex(&meilisearch.UpdateIndexRequestParams{ + UID: "movies_index_rename", +}) +``` + \ No newline at end of file diff --git a/snippets/samples/code_samples_update_vector_store_settings_1.mdx b/snippets/samples/code_samples_update_vector_store_settings_1.mdx new file mode 100644 index 0000000000..ea8d93c626 --- /dev/null +++ b/snippets/samples/code_samples_update_vector_store_settings_1.mdx @@ -0,0 +1,9 @@ + + +```bash cURL +curl \ + -X PATCH 'MEILISEARCH_URL/indexes/INDEX_UID/settings/vector-store' \ + -H 'Content-Type: application/json' \ + --data-binary '"experimental"' +``` + \ No newline at end of file From 2417ebcd52fab9349d3c98d7e33040e5610d5ed8 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Thu, 2 Oct 2025 19:35:59 +0200 Subject: [PATCH 06/10] improve reference, add hallucination warnings --- learn/chat/conversational_search.mdx | 4 + learn/chat/getting_started_with_chat.mdx | 4 + reference/api/chats.mdx | 102 ++++++++++++++++------- 3 files changed, 82 insertions(+), 28 deletions(-) diff --git a/learn/chat/conversational_search.mdx b/learn/chat/conversational_search.mdx index efa898979b..707e2de30b 100644 --- a/learn/chat/conversational_search.mdx +++ b/learn/chat/conversational_search.mdx @@ -46,6 +46,10 @@ Meilisearch's chat completions API consolidates RAG creation into a single proce Follow the [chat completions tutorial](/learn/chat/getting_started_with_chat) for information on how to implement a RAG with Meilisearch. + +Conversational search is still in early development. Conversational agents may occasionally hallucinate innacurate and misleading information, so it is important to closely monitor it in production environments. + + ### Model Context Protocol (MCP) An alternative method is using a Model Context Protocol (MCP) server. MCPs are designed for broader uses that go beyond answering questions, but can be useful in contexts where having up-to-date data is more important than comprehensive answers. diff --git a/learn/chat/getting_started_with_chat.mdx b/learn/chat/getting_started_with_chat.mdx index 49d114eb30..7671fa9d20 100644 --- a/learn/chat/getting_started_with_chat.mdx +++ b/learn/chat/getting_started_with_chat.mdx @@ -29,6 +29,10 @@ curl \ }' ``` + +Conversational search is still in early development. Conversational agents may occasionally hallucinate innacurate and misleading information, so it is important to closely monitor it in production environments. 
+ + ### Find your chat API key When Meilisearch runs with a master key on an instance created after v1.15.1, it automatically generates a "Default Chat API Key" with `chatCompletions` and `search` permissions on all indexes. Check if you have the key using: diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index 0d3bd4d493..7c578336f5 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -21,14 +21,16 @@ curl \ ``` + +Conversational search is still in early development. Conversational agents may occasionally hallucinate innacurate and misleading information, so it is important to closely monitor it in production environments. + + ## Authorization -When working with a secure Meilisearch instance, Use an API key with access to both the `search` and `chatCompletions` actions, such as the default chat API key. You may also use tenant tokens instead of an API key, provided you generate the tokens with access to the required actions. +When implementing conversational search, use an API key with access to both the `search` and `chatCompletions` actions such as the default chat API key. You may also use tenant tokens instead of an API key, provided you generate the tokens with access to the required actions. Chat queries only search indexes its API key can access. The default chat API key has access to all indexes. To limit chat access to specific indexes, you must either create a new key, or [generate a tenant token](/learn/security/generate_tenant_token_sdk) from the default chat API key. -LLM credentials used to querying your AI provider are stored securely in the chat workspace settings. - ## Chat workspace object ```json @@ -89,7 +91,7 @@ List all chat workspaces. Results can be paginated by using the `offset` and `li -Get information about a workshop. +Get information about a workspace. ### Path parameters @@ -131,16 +133,60 @@ Get information about a workshop. } ``` -#### The prompts object +#### `source` -```json -{ - "system": "Description of the general search context", - "searchDescription": "Description of internal search processes", - "searchQParam": "Description of expected input inside `q`", - "searchIndexUidParam": "Description of content inside each accessible index" -} -``` +**Type**: String
+**Default value**: N/A
+**Description**: Name of the chosen LLM provider. Must be one of: `"openAi"`, `"azureOpenAi"`, `"mistral"`, `"gemini"`, or `"vLlm"`
+
+#### `orgId`
+
+**Type**: String
+**Default value**: N/A
+**Description**: Organization ID used to access the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `projectId` + +**Type**: String
+**Default value**: N/A
+**Description**: Project ID used to access the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `apiVersion` + +**Type**: String
+**Default value**: N/A
+**Description**: API version used by the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `deploymentId` + +**Type**: String
+**Default value**: N/A
+**Description**: Deployment ID used by the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `baseUrl` + +**Type**: String
+**Default value**: N/A
+**Description**: Base URL Meilisearch should target when sending requests to the embeddings provider. Required for Azure OpenAI and vLLM + +#### `apiKey` + +**Type**: String
+**Default value**: N/A
+**Description**: API key to access the LLM provider. Optional for vLLM, mandatory for all other providers + +#### `prompts` + +**Type**: Object
+**Default value**: N/A
+**Description**: Prompts giving baseline context to the conversational agent. + +The prompts object accepts the following fields: + +- `prompts.system`: Default prompt giving the general usage context of the conversational search agent. Example: "You are a helpful bot answering questions on how to use Meilisearch" +- `prompts.searchDescription`: An internal description of the Meilisearch chat tools. Use it to instruct the agent on how and when to use the configured tools. Example: "Tool for retrieving relevant documents. Use it when users ask for factual information, past records, or resources that might exist in indexed content." +- `prompts.QParam`: Description of expected user input and the desired output. Example: "Users will ask about Meilisearch. Provide short and direct keyword-style queries." +- `prompts.IndexUidParam`: Instructions describing each index the agent has access to and how to use them. Example: "If user asks about code or API or parameters, use the index called `documentation`." ### Get chat workspace settings @@ -189,31 +235,31 @@ If a workspace does not exist, querying this endpoint will create it. #### Path parameters -| Name | Type | Description | -| :-------------- | :----- | :----------------------------------- | +| Name | Type | Description | +| :------------------ | :----- | :----------------------------------- | | **`workspace_uid`** | String | The workspace identifier | #### Settings parameters | Name | Type | Description | | :---------------- | :----- | :---------------------------------------------------------------------------- | -| **`source`** | String | LLM source: `"openAi"`, `"azureOpenAi"`, `"mistral"`, `"gemini"`, or `"vLlm"` | -| **`orgId`** | String | Organization ID for the LLM provider (required for azureOpenAi) | -| **`projectId`** | String | Project ID for the LLM provider | -| **`apiVersion`** | String | API version for the LLM provider (required for azureOpenAi) | -| **`deploymentId`**| String | Deployment ID for the LLM provider (required for azureOpenAi) | -| **`baseUrl`** | String | Base URL for the provider (required for azureOpenAi and vLlm) | -| **`apiKey`** | String | API key for the LLM provider (optional for vLlm) | -| **`prompts`** | Object | Prompts object containing system prompts and other configuration | +| [`source`](#source) | String | LLM source: `"openAi"`, `"azureOpenAi"`, `"mistral"`, `"gemini"`, or `"vLlm"` | +| [`orgId`](#orgid) | String | Organization ID for the LLM provider | +| [`projectId`](#projectid) | String | Project ID for the LLM provider | +| [`apiVersion`](#apiversion) | String | API version for the LLM provider | +| [`deploymentId`](#deploymentid) | String | Deployment ID for the LLM provider | +| [`baseUrl`](#baseurl) | String | Base URL for the provider | +| [`apiKey`](#apikey) | String | API key for the LLM provider | +| [`prompts`](#prompts) | Object | Prompts object containing system prompts and other configuration | ##### Prompt parameters | Name | Type | Description | | :------------------------ | :----- | :---------------------------------------------------------------- | -| **`system`** | String | A prompt added to the start of the conversation to guide the LLM | -| **`searchDescription`** | String | A prompt to explain what the internal search function does | -| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it | -| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does 
and how to use it | +| [`system]` (#prompts) | String | A prompt added to the start of the conversation to guide the LLM | +| [`searchDescription`](#prompts) | String | A prompt to explain what the internal search function does | +| [`searchQParam`](#prompts) | String | A prompt to explain what the `q` parameter of the search function does and how to use it | +| [`searchIndexUidParam`](#prompts) | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it | #### Request body @@ -379,7 +425,7 @@ Create a chat completion using Meilisearch's OpenAI-compatible interface. The en | Name | Type | Required | Description | | :------------- | :------ | :------- | :--------------------------------------------------------------------------- | -| **`model`** | String | Yes | Model to use and will be related to the source LLM in the workspace settings | +| **`model`** | String | Yes | Model the agent should use when generating responses | | **`messages`** | Array | Yes | Array of message objects with `role` and `content` | | **`stream`** | Boolean | No | Enable streaming responses. Must be `true` if specified | From c10bfe548369f64216876b3bbe1612aa1a37edc6 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Thu, 2 Oct 2025 19:39:29 +0200 Subject: [PATCH 07/10] improve reference --- reference/api/chats.mdx | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index 7c578336f5..b1b3b9573f 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -426,7 +426,7 @@ Create a chat completion using Meilisearch's OpenAI-compatible interface. The en | Name | Type | Required | Description | | :------------- | :------ | :------- | :--------------------------------------------------------------------------- | | **`model`** | String | Yes | Model the agent should use when generating responses | -| **`messages`** | Array | Yes | Array of message objects with `role` and `content` | +| **`messages`** | Array | Yes | Array of [message objects](#message-object) | | **`stream`** | Boolean | No | Enable streaming responses. Must be `true` if specified | @@ -440,6 +440,14 @@ Meilisearch chat completions only supports streaming responses (`stream: true`). | **`role`** | String | Message role: `"system"`, `"user"`, or `"assistant"` | | **`content`** | String | Message content | +#### `role` + +Specifies the message origin: Meilisearch (`system`), the LLM provider (`assistant`), or user input (`user`) + +#### `content` + +String containing the message content. + ### Response The response follows the OpenAI chat completions format. For streaming responses, the endpoint returns Server-Sent Events (SSE). From bb15319fc36c7fe6469a77255be5ec35ad95b317 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Mon, 6 Oct 2025 14:56:49 +0200 Subject: [PATCH 08/10] address reviewer feedback --- reference/api/chats.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index b1b3b9573f..1684ce41e6 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -167,7 +167,7 @@ Get information about a workspace. **Type**: String
**Default value**: N/A
-**Description**: Base URL Meilisearch should target when sending requests to the embeddings provider. Required for Azure OpenAI and vLLM +**Description**: Base URL Meilisearch should target when sending requests to the embeddings provider. Must be the full URL preceding the `/chat/completions` fragment. Required for Azure OpenAI and vLLM #### `apiKey` From 3b890669536ee59e70c24190b32142120f5ca907 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Mon, 6 Oct 2025 16:06:04 +0200 Subject: [PATCH 09/10] fix typo, minor structural changes to reference --- learn/chat/conversational_search.mdx | 2 +- learn/chat/getting_started_with_chat.mdx | 2 +- reference/api/chats.mdx | 162 ++++++++++++----------- 3 files changed, 86 insertions(+), 80 deletions(-) diff --git a/learn/chat/conversational_search.mdx b/learn/chat/conversational_search.mdx index 707e2de30b..1b8c8b9224 100644 --- a/learn/chat/conversational_search.mdx +++ b/learn/chat/conversational_search.mdx @@ -47,7 +47,7 @@ Meilisearch's chat completions API consolidates RAG creation into a single proce Follow the [chat completions tutorial](/learn/chat/getting_started_with_chat) for information on how to implement a RAG with Meilisearch. -Conversational search is still in early development. Conversational agents may occasionally hallucinate innacurate and misleading information, so it is important to closely monitor it in production environments. +Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor it in production environments. ### Model Context Protocol (MCP) diff --git a/learn/chat/getting_started_with_chat.mdx b/learn/chat/getting_started_with_chat.mdx index 7671fa9d20..925737da55 100644 --- a/learn/chat/getting_started_with_chat.mdx +++ b/learn/chat/getting_started_with_chat.mdx @@ -30,7 +30,7 @@ curl \ ``` -Conversational search is still in early development. Conversational agents may occasionally hallucinate innacurate and misleading information, so it is important to closely monitor it in production environments. +Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor it in production environments. ### Find your chat API key diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index 1684ce41e6..1440f4b182 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -22,16 +22,20 @@ curl \ -Conversational search is still in early development. Conversational agents may occasionally hallucinate innacurate and misleading information, so it is important to closely monitor it in production environments. +Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor it in production environments. ## Authorization -When implementing conversational search, use an API key with access to both the `search` and `chatCompletions` actions such as the default chat API key. You may also use tenant tokens instead of an API key, provided you generate the tokens with access to the required actions. +When implementing conversational search, use an API key with access to both the `search` and `chatCompletions` actions such as the default chat API key. 
You may also use tenant tokens instead of an API key, provided you generate the tokens with a key that has access to the required actions. -Chat queries only search indexes its API key can access. The default chat API key has access to all indexes. To limit chat access to specific indexes, you must either create a new key, or [generate a tenant token](/learn/security/generate_tenant_token_sdk) from the default chat API key. +Chat queries only search the indexes its API key can access. The default chat API key has access to all indexes. To limit access, you must either create a new key, or [generate a tenant token](/learn/security/generate_tenant_token_sdk) from the default chat API key. -## Chat workspace object +## Chat workspaces + +Workspaces are groups of chat settings tailored towards specific use cases. You must configure at least on workspace to use chat completions. + +### Chat workspace object ```json { @@ -43,79 +47,6 @@ Chat queries only search indexes its API key can access. The default chat API ke | :---------- | :----- | :--------------------------------------------------- | | **`uid`** | String | Unique identifier for the chat completions workspace | -## List chat workspaces - - - -List all chat workspaces. Results can be paginated by using the `offset` and `limit` query parameters. - -### Query parameters - -| Query parameter | Description | Default value | -| :-------------- | :----------------------------- | :------------ | -| **`offset`** | Number of workspaces to skip | `0` | -| **`limit`** | Number of workspaces to return | `20` | - -### Response - -| Name | Type | Description | -| :------------ | :------ | :----------------------------------- | -| **`results`** | Array | An array of [workspaces](#chat-workspace-object) | -| **`offset`** | Integer | Number of workspaces skipped | -| **`limit`** | Integer | Number of workspaces returned | -| **`total`** | Integer | Total number of workspaces | - -### Example - -```sh - curl \ - -X GET 'MEILISEARCH_URL/chats?limit=3' -``` - -#### Response: `200 Ok` - -```json -{ - "results": [ - { "uid": "WORKSPACE_1" }, - { "uid": "WORKSPACE_2" }, - { "uid": "WORKSPACE_3" } - ], - "offset": 0, - "limit": 20, - "total": 3 -} -``` - -## Get one chat workspace - - - -Get information about a workspace. - -### Path parameters - -| Name | Type | Description | -| :---------------- | :----- | :------------------------------------------------------------------------ | -| **`workspace_uid`** * | String | `uid` of the requested index | - -### Example - -```sh - curl \ - -X GET 'MEILISEARCH_URL/chats/WORKSPACE_UID' -``` - -#### Response: `200 Ok` - -```json -{ - "uid": "WORKSPACE_UID" -} -``` - -## Chat workspace settings - ### Chat workspace settings object ```json @@ -188,6 +119,77 @@ The prompts object accepts the following fields: - `prompts.QParam`: Description of expected user input and the desired output. Example: "Users will ask about Meilisearch. Provide short and direct keyword-style queries." - `prompts.IndexUidParam`: Instructions describing each index the agent has access to and how to use them. Example: "If user asks about code or API or parameters, use the index called `documentation`." +### List chat workspaces + + + +List all chat workspaces. Results can be paginated by using the `offset` and `limit` query parameters. 
+ +#### Query parameters + +| Query parameter | Description | Default value | +| :-------------- | :----------------------------- | :------------ | +| **`offset`** | Number of workspaces to skip | `0` | +| **`limit`** | Number of workspaces to return | `20` | + +#### Response + +| Name | Type | Description | +| :------------ | :------ | :----------------------------------- | +| **`results`** | Array | An array of [workspaces](#chat-workspace-object) | +| **`offset`** | Integer | Number of workspaces skipped | +| **`limit`** | Integer | Number of workspaces returned | +| **`total`** | Integer | Total number of workspaces | + +#### Example + +```sh + curl \ + -X GET 'MEILISEARCH_URL/chats?limit=3' +``` + +##### Response: `200 Ok` + +```json +{ + "results": [ + { "uid": "WORKSPACE_1" }, + { "uid": "WORKSPACE_2" }, + { "uid": "WORKSPACE_3" } + ], + "offset": 0, + "limit": 20, + "total": 3 +} +``` + +### Get one chat workspace + + + +Get information about a workspace. + +#### Path parameters + +| Name | Type | Description | +| :---------------- | :----- | :------------------------------------------------------------------------ | +| **`workspace_uid`** * | String | `uid` of the requested index | + +#### Example + +```sh + curl \ + -X GET 'MEILISEARCH_URL/chats/WORKSPACE_UID' +``` + +##### Response: `200 Ok` + +```json +{ + "uid": "WORKSPACE_UID" +} +``` + ### Get chat workspace settings @@ -398,9 +400,13 @@ curl \ ## Chat completions +After creating a workspace, you can use the chat completions API to create a conversational search agent. + +### Stream chat completions + -Create a chat completion using Meilisearch's OpenAI-compatible interface. The endpoint searches relevant indexes and generates responses based on the retrieved content. +Create a chat completions stream using Meilisearch's OpenAI-compatible interface. This endpoint searches relevant indexes and generates responses based on the retrieved content. ### Path parameters From d73bc252bb3c9b7626def22da4277f8143081e98 Mon Sep 17 00:00:00 2001 From: gui machiavelli Date: Mon, 6 Oct 2025 16:50:43 +0200 Subject: [PATCH 10/10] copy edits --- learn/chat/conversational_search.mdx | 18 ++++++----- learn/chat/getting_started_with_chat.mdx | 41 ++++++++++++------------ 2 files changed, 31 insertions(+), 28 deletions(-) diff --git a/learn/chat/conversational_search.mdx b/learn/chat/conversational_search.mdx index 1b8c8b9224..2b32a74989 100644 --- a/learn/chat/conversational_search.mdx +++ b/learn/chat/conversational_search.mdx @@ -1,23 +1,29 @@ --- title: What is conversational search? -description: Conversational search is an AI-powered feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes +description: Conversational search allows people to make search queries using natural languages. --- +Conversational search is an AI-powered search feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes. 
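As a rough illustration of the difference, the sketch below contrasts a traditional keyword search with a conversational request sent to a chat workspace. The `documentation` index, `WORKSPACE_NAME`, and `MODEL_NAME` values are placeholders for this example, not values defined elsewhere in this documentation:

```bash
# Traditional search: a keyword-style query against a single index
curl \
  -X POST 'MEILISEARCH_URL/indexes/documentation/search' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "update documents api" }'

# Conversational search: a natural-language question sent to a chat workspace
curl -N \
  -X POST 'MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "MODEL_NAME",
    "messages": [
      { "role": "user", "content": "How do I update documents with the Meilisearch API?" }
    ],
    "stream": true
  }'
```

The first request returns matching documents; the second returns a streamed answer generated from them.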
+ ## When to use conversational vs traditional search Use conversational search when: - Users need easy-to-read answers to specific questions -- You are handling informational-dense content, such as software documentation and knowledge bases +- You are handling informational-dense content, such as knowledge bases - Natural language interaction improves user experience Use traditional search when: - Users need to browse multiple options, such as an ecommerce website -- Approximative answers are not acceptable +- Approximate answers are not acceptable - Your users need very quick responses -## How conversational search usage differs from traditional search + +Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor it in production environments. + + +## Conversational search user workflow ### Traditional search workflow @@ -46,10 +52,6 @@ Meilisearch's chat completions API consolidates RAG creation into a single proce Follow the [chat completions tutorial](/learn/chat/getting_started_with_chat) for information on how to implement a RAG with Meilisearch. - -Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor it in production environments. - - ### Model Context Protocol (MCP) An alternative method is using a Model Context Protocol (MCP) server. MCPs are designed for broader uses that go beyond answering questions, but can be useful in contexts where having up-to-date data is more important than comprehensive answers. diff --git a/learn/chat/getting_started_with_chat.mdx b/learn/chat/getting_started_with_chat.mdx index 925737da55..a2e2afed1f 100644 --- a/learn/chat/getting_started_with_chat.mdx +++ b/learn/chat/getting_started_with_chat.mdx @@ -3,7 +3,7 @@ title: Getting started with conversational search description: This article walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application. --- -To successfully implement a conversational search interface you must follow three steps: configure indexes for chat usage, create chat workspaces targeting different use-cases, and building a chat interface. +To successfully implement a conversational search interface you must follow three steps: configure indexes for chat usage, create a chat workspaces, and build a chat interface. ## Prerequisites @@ -21,7 +21,7 @@ First, enable the chat completions experimental feature: ```bash curl \ - -X PATCH 'http://localhost:7700/experimental-features/' \ + -X PATCH 'MEILISEARCH_URL/experimental-features/' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -38,11 +38,11 @@ Conversational search is still in early development. Conversational agents may o When Meilisearch runs with a master key on an instance created after v1.15.1, it automatically generates a "Default Chat API Key" with `chatCompletions` and `search` permissions on all indexes. Check if you have the key using: ```bash -curl http://localhost:7700/keys \ +curl MEILISEARCH_URL/keys \ -H "Authorization: Bearer MEILISEARCH_KEY" ``` -Look for the key with the description "Default Chat API Key" Use this key when querying the `/chats` endpoint. +Look for the key with the description "Default Chat API Key". 
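If the instance has many keys, one way to isolate this key is to filter the `/keys` response. This is a minimal sketch assuming `jq` is installed and that the response exposes `results`, `description`, and `key` fields:

```bash
# List all keys and keep only the value of the default chat key
curl -s MEILISEARCH_URL/keys \
  -H "Authorization: Bearer MEILISEARCH_KEY" \
  | jq -r '.results[] | select(.description == "Default Chat API Key") | .key'
```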
#### Troubleshooting: Missing default chat API key @@ -50,7 +50,7 @@ If your instance does not have a Default Chat API Key, create one manually: ```bash curl \ - -X POST 'http://localhost:7700/keys' \ + -X POST 'MEILISEARCH_URL/keys' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -68,7 +68,7 @@ After activating the `/chats` route and obtaining an API key with chat permissio ```bash curl \ - -X PATCH 'http://localhost:7700/indexes/INDEX_NAME/settings' \ + -X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -81,13 +81,14 @@ curl \ ``` - `description` gives the initial context of the conversation to the LLM. A good description improves relevance of the chat's answers -- `documentTemplate` defines the document data Meilisearch sends to the AI provider. Consult the best [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance +- `documentTemplate` defines the document data Meilisearch sends to the AI provider. This template outputs all searchable fields in your documents, which may not be ideal if your documents have many fields. Consult the best [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance +- `documentTemplateMaxBytes` establishes a size limit for the document templates. Documents bigger than 400 bytes are truncated to ensure a good balance between speed and relevancy ## Configure a chat completions workspace The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. Each workspace can: -- Use different embedding providers (openAi, azureOpenAi, mistral, gemini, vLlm) +- Use different embedding providers (OpenAI, Azure OpenAI, Mistral, Gemini, vLLM) - Establish separate conversation contexts via baseline prompts - Access a specific set of indexes @@ -99,7 +100,7 @@ Create a workspace setting your LLM provider as its `source`: ```bash OpenAI curl \ - -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -114,7 +115,7 @@ curl \ ```bash Azure OpenAI curl \ - -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -129,7 +130,7 @@ curl \ ```bash Mistral curl \ - -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -143,7 +144,7 @@ curl \ ```bash Gemini curl \ - -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -157,7 +158,7 @@ curl \ ```bash vLLM curl \ - -X PATCH 'http://localhost:7700/chats/WORKSPACE_NAME/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -173,9 +174,9 @@ curl \ Which fields are mandatory will depend on 
your chosen provider `source`. In most cases, you will have to provide an `apiKey` to access the provider. -`baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface. +`baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface. This is only mandatory for Azure OpenAI and vLLM sources. -`prompts.system` gives the conversational search bot the baseline context of your users and their questions. The `prompts` object accepts a few other fields that provide more information to improve how the agent uses the information it finds via Meilisearch. In real-life scenarios filling these fields would improve the quality of conversational search results. +`prompts.system` gives the conversational search bot the baseline context of your users and their questions. [The `prompts` object accepts a few other fields](/reference/api/chats#prompts) that provide more information to improve how the agent uses the information it finds via Meilisearch. In real-life scenarios filling these fields would improve the quality of conversational search results. ## Send your first chat completions request @@ -183,7 +184,7 @@ You have finished configuring your conversational search agent. To test everythi ```bash curl -N \ - -X POST 'http://localhost:7700/chats/WORKSPACE_NAME/chat/completions' \ + -X POST 'MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions' \ -H 'Authorization: Bearer MEILISEARCH_API_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -243,7 +244,7 @@ Integrating Meiliearch and the OpenAI SDK with JavaScript would look lik this: import OpenAI from 'openai'; const client = new OpenAI({ - baseURL: 'http://localhost:7700/chats/WORKSPACE_NAME', + baseURL: 'MEILISEARCH_URL/chats/WORKSPACE_NAME', apiKey: 'PROVIDER_API_KEY', }); @@ -287,7 +288,7 @@ Take particular note of the last lines, which output the streamed responses to t - Use either the master key or the "Default Chat API Key" - Don't use search or admin API keys for chat endpoints -- Find your chat key: `curl http://localhost:7700/keys -H "Authorization: Bearer MEILISEARCH_KEY"` +- Find your chat key: `curl MEILISEARCH_URL/keys -H "Authorization: Bearer MEILISEARCH_KEY"` #### "Socket connection closed unexpectedly" @@ -298,14 +299,14 @@ Take particular note of the last lines, which output the streamed responses to t 1. Check workspace configuration: ```bash - curl http://localhost:7700/chats/WORKSPACE_NAME/settings \ + curl MEILISEARCH_URL/chats/WORKSPACE_NAME/settings \ -H "Authorization: Bearer MEILISEARCH_KEY" ``` 2. Update with valid API key: ```bash - curl -X PATCH http://localhost:7700/chats/WORKSPACE_NAME/settings \ + curl -X PATCH MEILISEARCH_URL/chats/WORKSPACE_NAME/settings \ -H "Authorization: Bearer MEILISEARCH_KEY" \ -H "Content-Type: application/json" \ -d '{"apiKey": "your-valid-api-key"}'