diff --git a/learn/chat/conversational_search.mdx b/learn/chat/conversational_search.mdx
index 773cc2a224..2b32a74989 100644
--- a/learn/chat/conversational_search.mdx
+++ b/learn/chat/conversational_search.mdx
@@ -1,36 +1,29 @@
 ---
-title: Conversational search
-sidebarTitle: Conversational search
-description: Learn how to implement AI-powered conversational search using Meilisearch's chat feature
+title: What is conversational search?
+description: Conversational search allows people to make search queries using natural language.
 ---
 
-Meilisearch's chat completions feature enables AI-powered conversational search, allowing users to ask questions in natural language and receive direct answers based on your indexed content. This feature transforms the traditional search experience into an interactive dialogue.
+Conversational search is an AI-powered search feature that allows users to ask questions in everyday language and receive answers based on the information in Meilisearch's indexes.
 
-
-This is an experimental feature. Use the Meilisearch Cloud UI or the experimental features endpoint to activate it:
+## When to use conversational vs traditional search
 
-```sh
-curl \
-  -X PATCH 'MEILISEARCH_URL/experimental-features/' \
-  -H 'Content-Type: application/json' \
-  --data-binary '{
-    "chatCompletions": true
-  }'
-```
-
+Use conversational search when:
 
-## What is conversational search?
+- Users need easy-to-read answers to specific questions
+- You are handling information-dense content, such as knowledge bases
+- Natural language interaction improves user experience
 
-Conversational search interfaces allow users to:
+Use traditional search when:
 
-- Ask questions in natural language instead of using keywords
-- Receive direct answers rather than just document links
-- Maintain context across multiple questions
-- Get responses grounded in your actual content
+- Users need to browse multiple options, such as on an ecommerce website
+- Approximate answers are not acceptable
+- Your users need very quick responses
 
-This approach bridges the gap between traditional search and modern AI experiences, making information more accessible and intuitive to find.
+
+Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor them in production environments.
+
 
-## How chat completions differs from traditional search
+## Conversational search user workflow
 
 ### Traditional search workflow
 
@@ -43,73 +36,24 @@ This approach bridges the gap between traditional search and modern AI experienc
 1. User asks a question in natural language
 2. Meilisearch retrieves relevant documents
 3. AI generates a direct answer based on those documents
-4.
User can ask follow-up questions
-
-## When to use chat completions vs traditional search
-
-### Use conversational search when:
-
-- Users need direct answers to specific questions
-- Content is informational (documentation, knowledge bases, FAQs)
-- Users benefit from follow-up questions
-- Natural language interaction improves user experience
 
-### Use traditional search when:
+## Implementation strategies
 
-- Users need to browse multiple options
-- Results require comparison (e-commerce products, listings)
-- Exact matching is critical
-- Response time is paramount
+### Retrieval Augmented Generation (RAG)
 
-## Use chat completions to implement RAG pipelines
+In the majority of cases, you should use the [`/chats` route](/reference/api/chats) to build a Retrieval Augmented Generation (RAG) pipeline. RAG pipelines excel at working with unstructured data and emphasize high-quality responses.
 
-The chat completions feature implements a complete Retrieval Augmented Generation (RAG) pipeline in a single API endpoint. Meilisearch's chat completions consolidates RAG creation into one streamlined process:
+Meilisearch's chat completions API consolidates RAG creation into a single process:
 
 1. **Query understanding**: automatically transforms questions into search parameters
 2. **Hybrid retrieval**: combines keyword and semantic search for better relevancy
 3. **Answer generation**: uses your chosen LLM to generate responses
 4. **Context management**: maintains conversation history by constantly pushing the full conversation to the dedicated tool
 
-### Alternative: MCP integration
-
-When integrating Meilisearch with AI assistants and automation tools, consider using [Meilisearch's Model Context Protocol (MCP) server](/guides/ai/mcp). MCP enables standardized tool integration across various AI platforms and applications.
-
-## Architecture overview
-
-Chat completions operate through workspaces, which are isolated configurations for different use cases.
Each workspace can: - -- Use different LLM sources (openAi, azureOpenAi, mistral, gemini, vLlm) -- Apply custom prompts -- Access specific indexes based on API keys -- Maintain separate conversation contexts - -### Key components - -1. **Chat endpoint**: `/chats/{workspace}/chat/completions` - - OpenAI-compatible interface - - Supports streaming responses - - Handles tool calling for index searches - -2. **Workspace settings**: `/chats/{workspace}/settings` - - Configure LLM provider and model - - Set system prompts - - Manage API credentials - -3. **Index integration**: - - Automatically searches relevant indexes - - Uses existing Meilisearch search capabilities - - Respects API key permissions - -## Security considerations - -The chat completions feature integrates with Meilisearch's existing security model: +Follow the [chat completions tutorial](/learn/chat/getting_started_with_chat) for information on how to implement a RAG with Meilisearch. -- **API key permissions**: chat only accesses indexes visible to the provided API key -- **Tenant tokens**: support for multi-tenant applications -- **LLM credentials**: stored securely in workspace settings -- **Content isolation**: responses based only on indexed content +### Model Context Protocol (MCP) -## Next steps +An alternative method is using a Model Context Protocol (MCP) server. MCPs are designed for broader uses that go beyond answering questions, but can be useful in contexts where having up-to-date data is more important than comprehensive answers. -- [Get started with chat completions implementation](/learn/chat/getting_started_with_chat) -- [Explore the chat completions API reference](/reference/api/chats) +Follow the [dedicated MCP guide](/guides/ai/mcp) if you want to implement it in your application. 
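The RAG steps described above can be illustrated with a small, self-contained sketch. This is not Meilisearch code: the in-memory list stands in for a Meilisearch index, and simple keyword overlap stands in for hybrid retrieval, but the shape of the pipeline is the same.

```python
# Toy sketch of a RAG pipeline: retrieve relevant documents, then ground
# the LLM prompt in them. All names here are illustrative, not Meilisearch APIs.

def retrieve(query, documents):
    """Return documents sharing at least one term with the query, best first."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

def build_prompt(query, docs):
    """Ground the answer-generation step in the retrieved documents."""
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

index = [
    "Meilisearch is a fast open-source search engine",
    "Bananas are rich in potassium",
]
docs = retrieve("what is meilisearch", index)
print(build_prompt("what is meilisearch", docs))
```

In a real deployment, Meilisearch performs the retrieval and context assembly behind the `/chats` route; the sketch only shows why grounding answers in retrieved documents keeps responses tied to indexed content.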
diff --git a/learn/chat/getting_started_with_chat.mdx b/learn/chat/getting_started_with_chat.mdx
index 51c5960a69..a2e2afed1f 100644
--- a/learn/chat/getting_started_with_chat.mdx
+++ b/learn/chat/getting_started_with_chat.mdx
@@ -1,10 +1,9 @@
 ---
 title: Getting started with conversational search
-sidebarTitle: Getting started with chat
-description: Learn how to implement AI-powered conversational search in your application
+description: This article walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application.
 ---
 
-This guide walks you through implementing Meilisearch's chat completions feature to create conversational search experiences in your application.
+To successfully implement a conversational search interface, you must follow three steps: configure indexes for chat usage, create a chat workspace, and build a chat interface.
 
 ## Prerequisites
 
@@ -14,13 +13,15 @@ Before starting, ensure you have:
 - An API key from an LLM provider
 - At least one index with searchable content
 
-## Enable the chat completions feature
+## Setup
+
+### Enable the chat completions feature
 
 First, enable the chat completions experimental feature:
 
 ```bash
 curl \
-  -X PATCH 'http://localhost:7700/experimental-features/' \
+  -X PATCH 'MEILISEARCH_URL/experimental-features/' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
   -H 'Content-Type: application/json' \
   --data-binary '{
@@ -28,24 +29,28 @@ curl \
   }'
 ```
 
-## Find your chat API key
+
+Conversational search is still in early development. Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor them in production environments.
+
+
+### Find your chat API key
 
 When Meilisearch runs with a master key on an instance created after v1.15.1, it automatically generates a "Default Chat API Key" with `chatCompletions` and `search` permissions on all indexes.
Check if you have the key using: ```bash -curl http://localhost:7700/keys \ +curl MEILISEARCH_URL/keys \ -H "Authorization: Bearer MEILISEARCH_KEY" ``` -Look for the key with the description "Default Chat API Key" Use this key when querying the `/chats` endpoint. +Look for the key with the description "Default Chat API Key". -### Troubleshooting: Missing default chat API key +#### Troubleshooting: Missing default chat API key If your instance does not have a Default Chat API Key, create one manually: ```bash curl \ - -X POST 'http://localhost:7700/keys' \ + -X POST 'MEILISEARCH_URL/keys' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ @@ -57,101 +62,108 @@ curl \ }' ``` -## Configure your indexes for chat +## Configure your indexes -Each index that you want to be searchable through chat needs specific configuration: +After activating the `/chats` route and obtaining an API key with chat permissions, configure the `chat` settings for each index you want to be searchable via chat UI: ```bash curl \ - -X PATCH 'http://localhost:7700/indexes/movies/settings' \ + -X PATCH 'MEILISEARCH_URL/indexes/INDEX_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "chat": { - "description": "A comprehensive movie database containing titles, descriptions, genres, and release dates to help users find movies", + "description": "A comprehensive database of TYPE_OF_DOCUMENT containing titles, descriptions, genres, and release dates to help users searching for TYPE_OF_DOCUMENT", "documentTemplate": "{% for field in fields %}{% if field.is_searchable and field.value != nil %}{{ field.name }}: {{ field.value }}\n{% endif %}{% endfor %}", - "documentTemplateMaxBytes": 400, - "searchParameters": {} + "documentTemplateMaxBytes": 400 } }' ``` - -The `description` field helps the LLM understand what data is in the index, improving search relevance. 
-
+- `description` gives the initial context of the conversation to the LLM. A good description improves the relevance of the chat's answers
+- `documentTemplate` defines the document data Meilisearch sends to the AI provider. This template outputs all searchable fields in your documents, which may not be ideal if your documents have many fields. Consult the [document template best practices](/learn/ai_powered_search/document_template_best_practices) article for more guidance
+- `documentTemplateMaxBytes` establishes a size limit for document templates. Templates exceeding this limit are truncated, ensuring a good balance between speed and relevancy
 
 ## Configure a chat completions workspace
 
-Create a workspace with your LLM provider settings. Here are examples for different providers:
+The next step is to create a workspace. Chat completion workspaces are isolated configurations targeting different use cases. Each workspace can:
+
+- Use different LLM providers (OpenAI, Azure OpenAI, Mistral, Gemini, vLLM)
+- Establish separate conversation contexts via baseline prompts
+- Access a specific set of indexes
+
+For example, you may have one workspace for publicly visible data, and another for data only available to logged-in users.
+
+Create a workspace setting your LLM provider as its `source`:
 
-```bash openAi
+```bash OpenAI
 curl \
-  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
+  -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \
   -H 'Authorization: Bearer MEILISEARCH_KEY' \
   -H 'Content-Type: application/json' \
   --data-binary '{
     "source": "openAi",
-    "apiKey": "sk-abc...",
-    "baseUrl": "https://api.openai.com/v1",
+    "apiKey": "PROVIDER_API_KEY",
+    "baseUrl": "PROVIDER_API_URL",
     "prompts": {
       "system": "You are a helpful assistant. Answer questions based only on the provided context."
} }' ``` -```bash azureOpenAi +```bash Azure OpenAI curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "azureOpenAi", - "apiKey": "your-azure-key", - "baseUrl": "https://your-resource.openai.azure.com", + "apiKey": "PROVIDER_API_KEY", + "baseUrl": "PROVIDER_API_URL", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } }' ``` -```bash mistral +```bash Mistral curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "mistral", - "apiKey": "your-mistral-key", + "apiKey": "PROVIDER_API_KEY", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } }' ``` -```bash gemini +```bash Gemini curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "gemini", - "apiKey": "your-gemini-key", + "apiKey": "PROVIDER_API_KEY", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." } }' ``` -```bash vLlm +```bash vLLM curl \ - -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \ + -X PATCH 'MEILISEARCH_URL/chats/WORKSPACE_NAME/settings' \ -H 'Authorization: Bearer MEILISEARCH_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ "source": "vLlm", - "baseUrl": "http://localhost:8000", + "baseUrl": "PROVIDER_API_URL", "prompts": { "system": "You are a helpful assistant. Answer questions based only on the provided context." 
} @@ -160,24 +172,29 @@ curl \ +Which fields are mandatory will depend on your chosen provider `source`. In most cases, you will have to provide an `apiKey` to access the provider. + +`baseUrl` indicates the URL Meilisearch queries when users submit questions to your chat interface. This is only mandatory for Azure OpenAI and vLLM sources. + +`prompts.system` gives the conversational search bot the baseline context of your users and their questions. [The `prompts` object accepts a few other fields](/reference/api/chats#prompts) that provide more information to improve how the agent uses the information it finds via Meilisearch. In real-life scenarios filling these fields would improve the quality of conversational search results. + ## Send your first chat completions request -Now you can start a conversation. Note the `-N` flag for handling streaming responses: +You have finished configuring your conversational search agent. To test everything is working as expected, send a streaming `curl` query to the chat completions API route: ```bash curl -N \ - -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \ - -H 'Authorization: Bearer ' \ + -X POST 'MEILISEARCH_URL/chats/WORKSPACE_NAME/chat/completions' \ + -H 'Authorization: Bearer MEILISEARCH_API_KEY' \ -H 'Content-Type: application/json' \ --data-binary '{ - "model": "gpt-3.5-turbo", + "model": "PROVIDER_MODEL_UID", "messages": [ { "role": "user", - "content": "What movies do you have about space exploration?" + "content": "USER_PROMPT" } ], - "stream": true, "tools": [ { "type": "function", @@ -197,129 +214,51 @@ curl -N \ }' ``` -Take particular note of the `tools` array. 
These settings are optional, but greatly improve user experience:
-
-- **`_meiliSearchProgress`**: shows users what searches are being performed
-- **`_meiliSearchSources`**: displays the actual documents used to generate responses
+- `model` is mandatory and must indicate a model supported by your chosen `source`
+- `messages` contains the messages exchanged between the conversational search agent and the user
+- `tools` sets up two optional but highly [recommended tools](/learn/chat/chat_tooling_reference):
+  - `_meiliSearchProgress`: shows users what searches are being performed
+  - `_meiliSearchSources`: displays the actual documents used to generate responses
 
-## Build a chat interface using the OpenAI SDK
+If Meilisearch returns a stream of data containing the chat agent response, you have correctly configured Meilisearch for conversational search:
 
-Since Meilisearch's chat endpoint is OpenAI-compatible, you can use the official OpenAI SDK:
-
-
-
-```javascript JavaScript
-import OpenAI from 'openai';
-
-const client = new OpenAI({
-  baseURL: 'http://localhost:7700/chats/my-assistant',
-  apiKey: 'YOUR_CHAT_API_KEY',
-});
-
-const completion = await client.chat.completions.create({
-  model: 'gpt-3.5-turbo',
-  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
-  stream: true,
-});
-
-for await (const chunk of completion) {
-  console.log(chunk.choices[0]?.delta?.content || '');
-}
+```sh
+data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":"Meilisearch"},"finish_reason":null}]}
 ```
 
-```python Python
-from openai import OpenAI
+If Meilisearch returns an error, consult the [troubleshooting section](#troubleshooting) to diagnose and fix the issues you encountered.
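Each `data:` line of the response is a standard Server-Sent Events (SSE) payload. As an illustrative sketch (assuming the OpenAI-style chunk shape shown in the example response), extracting the text delta from a single line could look like this in Python:

```python
import json

def extract_delta(sse_line: str) -> str:
    """Return the text fragment carried by one SSE line, or an empty string."""
    if not sse_line.startswith("data: "):
        return ""  # comments, blank keep-alive lines, etc.
    payload = sse_line[len("data: "):].strip()
    if payload == "[DONE]":
        return ""  # end-of-stream marker
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content") or ""

line = 'data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"content":"Meilisearch"},"finish_reason":null}]}'
print(extract_delta(line))  # Meilisearch
```

In practice an HTTP client library or the OpenAI SDK handles this framing for you; the sketch only shows what the raw stream contains.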
-client = OpenAI(
-    base_url="http://localhost:7700/chats/my-assistant",
-    api_key="YOUR_CHAT_API_KEY"
-)
-
-stream = client.chat.completions.create(
-    model="gpt-3.5-turbo",
-    messages=[{"role": "user", "content": "What is Meilisearch?"}],
-    stream=True,
-)
-
-for chunk in stream:
-    if chunk.choices[0].delta.content is not None:
-        print(chunk.choices[0].delta.content, end="")
-```
-
-```typescript TypeScript
-import OpenAI from 'openai';
-
-const client = new OpenAI({
-  baseURL: 'http://localhost:7700/chats/my-assistant',
-  apiKey: 'YOUR_CHAT_API_KEY',
-});
-
-const stream = await client.chat.completions.create({
-  model: 'gpt-3.5-turbo',
-  messages: [{ role: 'user', content: 'What is Meilisearch?' }],
-  stream: true,
-});
+## Next steps
 
-for await (const chunk of stream) {
-  const content = chunk.choices[0]?.delta?.content || '';
-  process.stdout.write(content);
-}
-```
+In this article, you have seen how to activate the chat completions route, prepare your indexes to serve as a base for your AI agent, and perform your first conversational search.
 
-
+In most cases, that is only the beginning of adding conversational search to your application. Next, you will most likely want to add a graphical user interface.
 
-### Error handling
+### Building a chat interface using the OpenAI SDK
 
-When using the OpenAI SDK with Meilisearch's chat completions endpoint, errors from the streamed responses are natively handled by OpenAI. This means you can use the SDK's built-in error handling mechanisms without additional configuration:
+Meilisearch's chat endpoint was designed to be OpenAI-compatible. This means you can use the official OpenAI SDK in any supported programming language, even if your provider is not OpenAI.
-
+Integrating Meilisearch and the OpenAI SDK in JavaScript would look like this:
 
 ```javascript JavaScript
 import OpenAI from 'openai';
 
 const client = new OpenAI({
-  baseURL: 'http://localhost:7700/chats/my-assistant',
-  apiKey: 'MEILISEARCH_KEY',
+  baseURL: 'MEILISEARCH_URL/chats/WORKSPACE_NAME',
+  apiKey: 'PROVIDER_API_KEY',
+});
+
+const completion = await client.chat.completions.create({
+  model: 'PROVIDER_MODEL_UID',
+  messages: [{ role: 'user', content: 'USER_PROMPT' }]
 });
 
-try {
-  const stream = await client.chat.completions.create({
-    model: 'gpt-3.5-turbo',
-    messages: [{ role: 'user', content: 'What is Meilisearch?' }],
-    stream: true,
-  });
-
-  for await (const chunk of stream) {
-    console.log(chunk.choices[0]?.delta?.content || '');
-  }
-} catch (error) {
-  // OpenAI SDK automatically handles streaming errors
-  console.error('Chat completion error:', error);
+for await (const chunk of completion) {
+  console.log(chunk.choices[0]?.delta?.content || '');
 }
 }
 ```
 
-```python Python
-from openai import OpenAI
-
-client = OpenAI(
-    base_url="http://localhost:7700/chats/my-assistant",
-    api_key="MEILISEARCH_KEY"
-)
-
-try:
-    stream = client.chat.completions.create(
-        model="gpt-3.5-turbo",
-        messages=[{"role": "user", "content": "What is Meilisearch?"}],
-        stream=True,
-    )
-
-    for chunk in stream:
-        if chunk.choices[0].delta.content is not None:
-            print(chunk.choices[0].delta.content, end="")
-except Exception as error:
-    # OpenAI SDK automatically handles streaming errors
-    print(f"Chat completion error: {error}")
-```
+Take particular note of the last lines, which output the streamed responses to the browser console. In a real-life application, you would instead print the response chunks to the user interface.
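The same consumption loop can be written in Python. The hardcoded chunk dictionaries below are stand-ins for the objects the OpenAI SDK yields while streaming; only the accumulation logic is the point:

```python
def collect_answer(chunks):
    """Concatenate the text deltas from a sequence of streaming chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:  # role-only or finish chunks carry no text
            parts.append(delta)
    return "".join(parts)

# Stand-in chunks shaped like OpenAI streaming responses
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Meilisearch is "}}]},
    {"choices": [{"delta": {"content": "a search engine."}}]},
]
print(collect_answer(chunks))  # Meilisearch is a search engine.
```

In a user-facing application you would append each delta to the interface as it arrives, rather than waiting for the full answer.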
@@ -349,7 +288,7 @@ except Exception as error: - Use either the master key or the "Default Chat API Key" - Don't use search or admin API keys for chat endpoints -- Find your chat key: `curl http://localhost:7700/keys -H "Authorization: Bearer MEILISEARCH_KEY"` +- Find your chat key: `curl MEILISEARCH_URL/keys -H "Authorization: Bearer MEILISEARCH_KEY"` #### "Socket connection closed unexpectedly" @@ -360,14 +299,14 @@ except Exception as error: 1. Check workspace configuration: ```bash - curl http://localhost:7700/chats/my-assistant/settings \ + curl MEILISEARCH_URL/chats/WORKSPACE_NAME/settings \ -H "Authorization: Bearer MEILISEARCH_KEY" ``` 2. Update with valid API key: ```bash - curl -X PATCH http://localhost:7700/chats/my-assistant/settings \ + curl -X PATCH MEILISEARCH_URL/chats/WORKSPACE_NAME/settings \ -H "Authorization: Bearer MEILISEARCH_KEY" \ -H "Content-Type: application/json" \ -d '{"apiKey": "your-valid-api-key"}' @@ -381,18 +320,3 @@ except Exception as error: - Include `_meiliSearchProgress` and `_meiliSearchSources` tools in your request - Ensure indexes have proper chat descriptions configured - -#### "stream: false is not supported" error - -**Cause:** Trying to use non-streaming responses - -**Solution:** - -- Always set `"stream": true` in your requests -- Non-streaming responses are not yet supported - -## Next steps - -- Explore [advanced chat API features](/reference/api/chats) -- Learn about [conversational search concepts](/learn/chat/conversational_search) -- Review [security best practices](/learn/security/basic_security) \ No newline at end of file diff --git a/reference/api/chats.mdx b/reference/api/chats.mdx index 7b11a158ed..1440f4b182 100644 --- a/reference/api/chats.mdx +++ b/reference/api/chats.mdx @@ -21,13 +21,21 @@ curl \ ``` + +Conversational search is still in early development. 
Conversational agents may occasionally hallucinate inaccurate and misleading information, so it is important to closely monitor them in production environments.
+
+
 ## Authorization
 
-When working with a secure Meilisearch instance, Use an API key with access to both the `search` and `chatCompletions` actions, such as the default chat API key.
+When implementing conversational search, use an API key with access to both the `search` and `chatCompletions` actions, such as the default chat API key. You may also use tenant tokens instead of an API key, provided you generate the tokens with a key that has access to the required actions.
+
+Chat queries only search the indexes its API key can access. The default chat API key has access to all indexes. To limit access, you must either create a new key, or [generate a tenant token](/learn/security/generate_tenant_token_sdk) from the default chat API key.
+
+## Chat workspaces
 
-Chat queries only search indexes its API key can access. The default chat API key has access to all indexes. To limit chat access to specific indexes, you must either create a new key, or [generate a tenant token](/learn/security/generate_tenant_token_sdk) from the default chat API key.
+Workspaces are groups of chat settings tailored towards specific use cases. You must configure at least one workspace to use chat completions.
 
-## Chat workspace object
+### Chat workspace object
 
 ```json
 {
@@ -39,20 +47,92 @@ Chat queries only search indexes its API key can access.
The default chat API ke | :---------- | :----- | :--------------------------------------------------- | | **`uid`** | String | Unique identifier for the chat completions workspace | -## List chat workspaces +### Chat workspace settings object + +```json +{ + "source": "PROVIDER", + "orgId": null, + "projectId": null, + "apiVersion": null, + "deploymentId": null, + "baseUrl": null, + "apiKey": "PROVIDER_API_KEY", + "prompts": { + "system": "Description of the general search context" + } +} +``` + +#### `source` + +**Type**: String
+**Default value**: N/A
+**Description**: Name of the chosen LLM provider. Must be one of: `"openAi"`, `"azureOpenAi"`, `"mistral"`, `"gemini"`, or `"vLlm"`
+
+#### `orgId`
+
+**Type**: String
+**Default value**: N/A
+**Description**: Organization ID used to access the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `projectId` + +**Type**: String
+**Default value**: N/A
+**Description**: Project ID used to access the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `apiVersion` + +**Type**: String
+**Default value**: N/A
+**Description**: API version used by the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `deploymentId` + +**Type**: String
+**Default value**: N/A
+**Description**: Deployment ID used by the LLM provider. Required for Azure OpenAI, incompatible with other sources + +#### `baseUrl` + +**Type**: String
+**Default value**: N/A
+**Description**: Base URL Meilisearch should target when sending requests to the LLM provider. Must be the full URL preceding the `/chat/completions` fragment. Required for Azure OpenAI and vLLM
+
+#### `apiKey`
+
+**Type**: String
+**Default value**: N/A
+**Description**: API key to access the LLM provider. Optional for vLLM, mandatory for all other providers + +#### `prompts` + +**Type**: Object
+**Default value**: N/A
+**Description**: Prompts giving baseline context to the conversational agent.
+
+The prompts object accepts the following fields:
+
+- `prompts.system`: Default prompt giving the general usage context of the conversational search agent. Example: "You are a helpful bot answering questions on how to use Meilisearch"
+- `prompts.searchDescription`: An internal description of the Meilisearch chat tools. Use it to instruct the agent on how and when to use the configured tools. Example: "Tool for retrieving relevant documents. Use it when users ask for factual information, past records, or resources that might exist in indexed content."
+- `prompts.searchQParam`: Description of expected user input and the desired output. Example: "Users will ask about Meilisearch. Provide short and direct keyword-style queries."
+- `prompts.searchIndexUidParam`: Instructions describing each index the agent has access to and how to use them. Example: "If user asks about code or API or parameters, use the index called `documentation`."
+
-## List chat workspaces
+### List chat workspaces
 
 List all chat workspaces. Results can be paginated by using the `offset` and `limit` query parameters.
 
-### Query parameters
+#### Query parameters
 
 | Query parameter | Description | Default value |
 | :-------------- | :----------------------------- | :------------ |
 | **`offset`** | Number of workspaces to skip | `0` |
 | **`limit`** | Number of workspaces to return | `20` |
 
-### Response
+#### Response
 
 | Name | Type | Description |
 | :------------ | :------ | :----------------------------------- |
@@ -61,14 +141,14 @@ List all chat workspaces.
 | **`limit`** | Integer | Number of workspaces returned |
 | **`total`** | Integer | Total number of workspaces |
 
-### Example
+#### Example
 
 ```sh
 curl \
   -X GET 'MEILISEARCH_URL/chats?limit=3'
 ```
 
-#### Response: `200 Ok`
+##### Response: `200 Ok`
 
 ```json
 {
@@ -83,26 +163,26 @@ List all chat workspaces.
Results can be paginated by using the `offset` and `li } ``` -## Get one chat workspace +### Get one chat workspace -Get information about a workshop. +Get information about a workspace. -### Path parameters +#### Path parameters | Name | Type | Description | | :---------------- | :----- | :------------------------------------------------------------------------ | | **`workspace_uid`** * | String | `uid` of the requested index | -### Example +#### Example ```sh curl \ -X GET 'MEILISEARCH_URL/chats/WORKSPACE_UID' ``` -#### Response: `200 Ok` +##### Response: `200 Ok` ```json { @@ -110,34 +190,6 @@ Get information about a workshop. } ``` -## Chat workspace settings - -### Chat workspace settings object - -```json -{ - "source": "openAi", - "orgId": null, - "projectId": null, - "apiVersion": null, - "deploymentId": null, - "baseUrl": null, - "apiKey": "sk-abc...", - "prompts": { - "system": "You are a helpful assistant that answers questions based on the provided context." - } -} -``` - -#### The prompts object - -| Name | Type | Description | -| :------------------------ | :----- | :---------------------------------------------------------------- | -| **`system`** | String | A prompt added to the start of the conversation to guide the LLM | -| **`searchDescription`** | String | A prompt to explain what the internal search function does | -| **`searchQParam`** | String | A prompt to explain what the `q` parameter of the search function does and how to use it | -| **`searchIndexUidParam`** | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it | - ### Get chat workspace settings @@ -175,32 +227,41 @@ curl \ -### Update chat workspace settings +### Create a chat workspace and update chat workspace settings Configure the LLM provider and settings for a chat workspace. -If the workspace does not exist, querying this endpoint will create it. +If a workspace does not exist, querying this endpoint will create it. 
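Because the mandatory settings vary by `source`, it can help to validate a settings payload before sending it to this endpoint. The following sketch encodes the per-source requirements described in this reference; the helper itself is hypothetical and not part of any Meilisearch SDK:

```python
# Hypothetical client-side check of chat workspace settings.
# Requirement sets follow the field descriptions in this reference.
REQUIRED_FIELDS = {
    "openAi": {"apiKey"},
    "azureOpenAi": {"apiKey", "baseUrl", "orgId", "projectId", "apiVersion", "deploymentId"},
    "mistral": {"apiKey"},
    "gemini": {"apiKey"},
    "vLlm": {"baseUrl"},  # apiKey is optional for vLLM
}

def missing_fields(settings: dict) -> set:
    """Return the required fields absent from a workspace settings payload."""
    source = settings.get("source")
    if source not in REQUIRED_FIELDS:
        raise ValueError(f"unknown source: {source}")
    return {field for field in REQUIRED_FIELDS[source] if not settings.get(field)}

print(missing_fields({"source": "vLlm"}))                     # {'baseUrl'}
print(missing_fields({"source": "openAi", "apiKey": "KEY"}))  # set()
```

Running a check like this before the `PATCH` request surfaces configuration mistakes locally instead of as provider errors at query time.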
#### Path parameters -| Name | Type | Description | -| :-------------- | :----- | :----------------------------------- | +| Name | Type | Description | +| :------------------ | :----- | :----------------------------------- | | **`workspace_uid`** | String | The workspace identifier | #### Settings parameters | Name | Type | Description | | :---------------- | :----- | :---------------------------------------------------------------------------- | -| **`source`** | String | LLM source: `"openAi"`, `"azureOpenAi"`, `"mistral"`, `"gemini"`, or `"vLlm"` | -| **`orgId`** | String | Organization ID for the LLM provider (required for azureOpenAi) | -| **`projectId`** | String | Project ID for the LLM provider | -| **`apiVersion`** | String | API version for the LLM provider (required for azureOpenAi) | -| **`deploymentId`**| String | Deployment ID for the LLM provider (required for azureOpenAi) | -| **`baseUrl`** | String | Base URL for the provider (required for azureOpenAi and vLlm) | -| **`apiKey`** | String | API key for the LLM provider (optional for vLlm) | -| **`prompts`** | Object | Prompts object containing system prompts and other configuration | +| [`source`](#source) | String | LLM source: `"openAi"`, `"azureOpenAi"`, `"mistral"`, `"gemini"`, or `"vLlm"` | +| [`orgId`](#orgid) | String | Organization ID for the LLM provider | +| [`projectId`](#projectid) | String | Project ID for the LLM provider | +| [`apiVersion`](#apiversion) | String | API version for the LLM provider | +| [`deploymentId`](#deploymentid) | String | Deployment ID for the LLM provider | +| [`baseUrl`](#baseurl) | String | Base URL for the provider | +| [`apiKey`](#apikey) | String | API key for the LLM provider | +| [`prompts`](#prompts) | Object | Prompts object containing system prompts and other configuration | + +##### Prompt parameters + +| Name | Type | Description | +| :------------------------ | :----- | :---------------------------------------------------------------- | +| 
[`system`](#prompts) | String | A prompt added to the start of the conversation to guide the LLM |
+| [`searchDescription`](#prompts) | String | A prompt to explain what the internal search function does |
+| [`searchQParam`](#prompts) | String | A prompt to explain what the `q` parameter of the search function does and how to use it |
+| [`searchIndexUidParam`](#prompts) | String | A prompt to explain what the `indexUid` parameter of the search function does and how to use it |
 
 #### Request body
 
@@ -339,9 +400,13 @@ curl \
 
 ## Chat completions
 
+After creating a workspace, you can use the chat completions API to create a conversational search agent.
+
+### Stream chat completions
+
-Create a chat completion using Meilisearch's OpenAI-compatible interface. The endpoint searches relevant indexes and generates responses based on the retrieved content.
+Create a chat completions stream using Meilisearch's OpenAI-compatible interface. This endpoint searches relevant indexes and generates responses based on the retrieved content.
 
 ### Path parameters
 
@@ -366,12 +431,12 @@ Create a chat completion using Meilisearch's OpenAI-compatible interface. The en
 | Name | Type | Required | Description |
 | :------------- | :------ | :------- | :--------------------------------------------------------------------------- |
-| **`model`** | String | Yes | Model to use and will be related to the source LLM in the workspace settings |
-| **`messages`** | Array | Yes | Array of message objects with `role` and `content` |
-| **`stream`** | Boolean | No | Enable streaming responses (default: `true`) |
+| **`model`** | String | Yes | Model the agent should use when generating responses |
+| **`messages`** | Array | Yes | Array of [message objects](#message-object) |
+| **`stream`** | Boolean | No | Enable streaming responses. Must be `true` if specified |
 
-Currently, only streaming responses (`stream: true`) are supported.
+Meilisearch chat completions only supports streaming responses (`stream: true`). ### Message object @@ -381,6 +446,14 @@ Currently, only streaming responses (`stream: true`) are supported. | **`role`** | String | Message role: `"system"`, `"user"`, or `"assistant"` | | **`content`** | String | Message content | +#### `role` + +Specifies the message origin: Meilisearch (`system`), the LLM provider (`assistant`), or user input (`user`) + +#### `content` + +String containing the message content. + ### Response The response follows the OpenAI chat completions format. For streaming responses, the endpoint returns Server-Sent Events (SSE). diff --git a/snippets/samples/code_samples_update_vector_store_settings_1.mdx b/snippets/samples/code_samples_update_vector_store_settings_1.mdx index 5418c2dfd3..ea8d93c626 100644 --- a/snippets/samples/code_samples_update_vector_store_settings_1.mdx +++ b/snippets/samples/code_samples_update_vector_store_settings_1.mdx @@ -2,7 +2,7 @@ ```bash cURL curl \ - -X PUT 'MEILISEARCH_URL/indexes/INDEX_UID/settings/vector-store' \ + -X PATCH 'MEILISEARCH_URL/indexes/INDEX_UID/settings/vector-store' \ -H 'Content-Type: application/json' \ --data-binary '"experimental"' ```