
Commit 60068ff

Merge pull request #308 from mistralai/auto/docs-update
Auto-update llms.txt & llms-full.txt
2 parents: c964ee8 + 69ddac5

2 files changed: +114 −95 lines

static/llms-full.txt

Lines changed: 46 additions & 27 deletions
@@ -223,6 +223,16 @@ Source: https://docs.mistral.ai/api/#tag/chat_classifications_v1_chat_classifica
 
 post /v1/chat/classifications
 
+# Create Transcription
+Source: https://docs.mistral.ai/api/#tag/audio_api_v1_transcriptions_post
+
+post /v1/audio/transcriptions
+
+# Create streaming transcription (SSE)
+Source: https://docs.mistral.ai/api/#tag/audio_api_v1_transcriptions_post_stream
+
+post /v1/audio/transcriptions#stream
+
 # List all libraries you have access to.
 Source: https://docs.mistral.ai/api/#tag/libraries_list_v1
 
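The two new endpoints above are also exposed through the Python SDK as `client.audio.transcriptions.complete` (shown further down in this diff). A minimal sketch of the non-streaming call, assuming `MISTRAL_API_KEY` is set and using the sample audio URL from the docs; the streaming (SSE) variant listed at `/v1/audio/transcriptions#stream` is not shown here:

```python
import os
from mistralai import Mistral

# Assumes MISTRAL_API_KEY is set in the environment.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Non-streaming transcription via POST /v1/audio/transcriptions.
transcription = client.audio.transcriptions.complete(
    model="voxtral-mini-latest",
    file_url="https://docs.mistral.ai/audio/obama.mp3",
)
print(transcription.text)
```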

@@ -5305,7 +5315,7 @@ console.log(transcriptionResponse);
 curl --location 'https://api.mistral.ai/v1/audio/transcriptions' \
 --header "x-api-key: $MISTRAL_API_KEY" \
 --form 'file=@"/path/to/file/audio.mp3"' \
---form 'model="voxtral-mini-2507"' \
+--form 'model="voxtral-mini-2507"'
 ```
 
 **With Language defined**
@@ -5571,7 +5581,7 @@ client = Mistral(api_key=api_key)
 transcription_response = client.audio.transcriptions.complete(
     model=model,
     file_url="https://docs.mistral.ai/audio/obama.mp3",
-    timestamp_granularities="segment"
+    timestamp_granularities=["segment"]
 )
 
 # Print the contents
@@ -5593,7 +5603,7 @@ const client = new Mistral({ apiKey: apiKey });
 const transcriptionResponse = await client.audio.transcriptions.complete({
     model: "voxtral-mini-latest",
     fileUrl: "https://docs.mistral.ai/audio/obama.mp3",
-    timestamp_granularities: "segment"
+    timestamp_granularities: ["segment"]
 });
 
 // Log the contents
@@ -5607,7 +5617,7 @@ console.log(transcriptionResponse);
 curl --location 'https://api.mistral.ai/v1/audio/transcriptions' \
 --header "x-api-key: $MISTRAL_API_KEY" \
 --form 'file_url="https://docs.mistral.ai/audio/obama.mp3"' \
---form 'model="voxtral-mini-latest"'
+--form 'model="voxtral-mini-latest"' \
 --form 'timestamp_granularities="segment"'
 ```
 </TabItem>
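The three hunks above switch `timestamp_granularities` from a bare string to a list. A short sketch of how the segment-level output might be consumed, assuming the transcription response exposes a `segments` list with `start`, `end`, and `text` fields (an assumption, not confirmed by this diff):

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

transcription = client.audio.transcriptions.complete(
    model="voxtral-mini-latest",
    file_url="https://docs.mistral.ai/audio/obama.mp3",
    timestamp_granularities=["segment"],  # list form, per the fix above
)

# Assumed shape: each segment carries its text plus start/end timestamps.
for segment in transcription.segments:
    print(segment.start, segment.end, segment.text)
```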
@@ -13088,7 +13098,7 @@ Source: https://docs.mistral.ai/docs/capabilities/vision
 
 Vision capabilities enable models to analyze images and provide insights based on visual content in addition to text. This multimodal approach opens up new possibilities for applications that require both textual and visual understanding.
 
-For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../OCR/document_ai_overview).
+For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../document_ai/document_ai_overview).
 
 ## Models with Vision Capabilities:
 - Pixtral 12B (`pixtral-12b-latest`)
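The context of this hunk describes the vision capability itself, so a minimal chat sketch with an image input may help; the model name comes from the list above, the image URL is a placeholder, and the `image_url` content-part shape follows the public chat API:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Mixed text + image user message, as described in the vision section.
response = client.chat.complete(
    model="pixtral-12b-latest",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": "https://example.com/photo.jpg"},  # placeholder URL
            ],
        }
    ],
)
print(response.choices[0].message.content)
```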
@@ -13739,7 +13749,10 @@ in two ways:
 This page focuses on the MaaS offering, where the following models are available:
 
 - Mistral Large (24.11, 24.07)
-- Mistral Small (24.09)
+- Mistral Medium (25.05)
+- Mistral Small (25.03)
+- Mistral Document AI (25.05)
+- Mistral OCR (25.05)
 - Ministral 3B (24.10)
 - Mistral Nemo
 

@@ -13843,9 +13856,11 @@ To run the examples below, set the following environment variables:
 ## Going further
 
 For more details and examples, refer to the following resources:
+- [Release blog post for Mistral Document AI](https://techcommunity.microsoft.com/blog/aiplatformblog/deepening-our-partnership-with-mistral-ai-on-azure-ai-foundry/4434656)
 - [Release blog post for Mistral Large 2 and Mistral NeMo](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/ai-innovation-continues-introducing-mistral-large-2-and-mistral/ba-p/4200181).
 - [Azure documentation for MaaS deployment of Mistral models](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral).
 - [Azure ML examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/mistral) with several Mistral-based samples.
+- [Azure AI Foundry GitHub repository](https://github.com/azure-ai-foundry/foundry-samples/tree/main/samples/mistral)
 
 
 [IBM watsonx.ai]
@@ -14089,7 +14104,7 @@ To run the examples below you will need to set the following environment variabl
 
 Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM).
 For more information, see the
-[code generation section](../../../capabilities/code_generation/#fill-in-the-middle-endpoint).
+[code generation section](../../../capabilities/code_generation).
 
 
 <Tabs>
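The fill-in-the-middle mode mentioned in this hunk has a dedicated request shape. A minimal sketch using la Plateforme's Python SDK (`client.fim.complete`); the Azure deployment covered by this section uses its own endpoint and credentials, so treat this only as an illustration of the prompt/suffix pattern:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Fill-in-the-middle: the model completes the code between prompt and suffix.
fim_response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n: int) -> int:\n",
    suffix="\n\nprint(fibonacci(10))",
)
print(fim_response.choices[0].message.content)
```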
@@ -14390,7 +14405,7 @@ for more details.
 
 Codestral can be queried using an additional completion mode called fill-in-the-middle (FIM).
 For more information, see the
-[code generation section](../../../capabilities/code_generation/#fill-in-the-middle-endpoint).
+[code generation section](../../../capabilities/code_generation).
 
 
 <Tabs>
@@ -15693,7 +15708,7 @@ The [Mistral AI APIs](https://console.mistral.ai/) empower LLM applications via:
 
 - [Text generation](/capabilities/completion), enables streaming and provides the ability to display partial model results in real-time
 - [Vision](/capabilities/vision), enables the analysis of images and provides insights based on visual content in addition to text.
-- [OCR](/capabilities/OCR/basic_ocr), allows the extraction of interleaved text and images from documents.
+- [OCR](/capabilities/document_ai/basic_ocr), allows the extraction of interleaved text and images from documents.
 - [Code generation](/capabilities/code_generation), enpowers code generation tasks, including fill-in-the-middle and code completion.
 - [Embeddings](/capabilities/embeddings/overview), useful for RAG where it represents the meaning of text as a list of numbers.
 - [Function calling](/capabilities/function_calling), enables Mistral models to connect to external tools.
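The OCR capability linked above (now filed under `document_ai`) can be illustrated with a short sketch; the document URL is a placeholder and the `mistral-ocr-latest` alias is assumed to be the appropriate OCR model:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# OCR over a hosted PDF; each returned page carries its extracted markdown.
ocr_response = client.ocr.process(
    model="mistral-ocr-latest",
    document={
        "type": "document_url",
        "document_url": "https://example.com/sample.pdf",  # placeholder URL
    },
)
for page in ocr_response.pages:
    print(page.markdown)
```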
@@ -16198,7 +16213,7 @@ Mistral provides two types of models: open models and premier models.
 
 | Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
 |--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
-| Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05|
+| Mistral Medium 3.1 | | :heavy_check_mark: | Our frontier-class multimodal model released August 2025. Improving tone and performance. Read more about Medium 3 in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2508` | 25.08|
 | Magistral Medium 1.1 | | :heavy_check_mark: | Our frontier-class reasoning model released July 2025. | 40k | `magistral-medium-2507` | 25.07|
 | Codestral 2508 | | :heavy_check_mark: | Our cutting-edge language model for coding released end of July 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-25-08/) | 256k | `codestral-2508` | 25.08|
 | Voxtral Mini Transcribe | | :heavy_check_mark: | An efficient audio input model, fine-tuned and optimized for transcription purposes only. | | `voxtral-mini-2507` via `audio/transcriptions` | 25.07|
@@ -16207,6 +16222,7 @@ Mistral provides two types of models: open models and premier models.
 | Magistral Medium 1 | | :heavy_check_mark: | Our first frontier-class reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-medium-2506` | 25.06|
 | Ministral 3B | | :heavy_check_mark: | World’s best edge model. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-3b-2410` | 24.10|
 | Ministral 8B | :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Powerful edge model with extremely high performance/price ratio. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-8b-2410` | 24.10|
+| Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05|
 | Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding with the second version released January 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501/) | 256k | `codestral-2501` | 25.01|
 | Mistral Large 2.1 |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our top-tier large model for high-complexity tasks with the lastest version released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `mistral-large-2411` | 24.11|
 | Pixtral Large |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our first frontier-class multimodal model released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `pixtral-large-2411` | 24.11|
@@ -16241,8 +16257,8 @@ Additionally, be prepared for the deprecation of certain endpoints in the coming
 Here are the details of the available versions:
 - `magistral-medium-latest`: currently points to `magistral-medium-2507`.
 - `magistral-small-latest`: currently points to `magistral-small-2507`.
-- `mistral-medium-latest`: currently points to `mistral-medium-2505`.
-- `mistral-large-latest`: currently points to `mistral-large-2411`.
+- `mistral-medium-latest`: currently points to `mistral-medium-2508`.
+- `mistral-large-latest`: currently points to `mistral-medium-2508`, previously `mistral-large-2411`.
 - `pixtral-large-latest`: currently points to `pixtral-large-2411`.
 - `mistral-moderation-latest`: currently points to `mistral-moderation-2411`.
 - `ministral-3b-latest`: currently points to `ministral-3b-2410`.
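Because the `-latest` aliases above move between releases (note `mistral-large-latest` now resolving to `mistral-medium-2508`), a short sketch contrasting an alias with a pinned, dated model may help; both model names are taken from the tables and alias list above:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

messages = [{"role": "user", "content": "Summarize the plot of Hamlet in one sentence."}]

# Alias: follows whatever dated version the alias currently points to.
latest = client.chat.complete(model="mistral-medium-latest", messages=messages)

# Pinned: stays on a specific dated release regardless of alias changes.
pinned = client.chat.complete(model="mistral-medium-2505", messages=messages)

print(latest.model, "->", latest.choices[0].message.content)
print(pinned.model, "->", pinned.choices[0].message.content)
```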
@@ -18984,6 +19000,24 @@ Here is an [example notebook](https://github.com/mistralai/cookbook/blob/main/th
 
 <img src="/img/guides/obs_mlflow.png" alt="drawing" width="700"/>
 
+### Integration with Maxim
+
+Maxim AI provides comprehensive observability for your Mistral based AI applications. With Maxim's one-line integration, you can easily trace and analyse LLM calls, metrics, and more.
+
+**Pros:**
+
+* Performance Analytics: Track latency, tokens consumed, and costs
+* Advanced Visualisation: Understand agent trajectories through intuitive dashboards
+
+**Mistral integration Example:**
+
+* Learn how to integrate Maxim observability with the Mistral SDK in just one line of code - [Colab Notebook](https://github.com/mistralai/cookbook/blob/main/third_party/Maxim/cookbook_maxim_mistral_integration.ipynb)
+
+Maxim Documentation to use Mistral as an LLM Provider and Maxim as Logger - [Docs Link](https://www.getmaxim.ai/docs/sdk/python/integrations/mistral/mistral)
+
+
+![Gif](https://raw.githubusercontent.com/akmadan/platform-docs-public/docs/observability-maxim-provider/static/img/guides/maxim_traces.gif)
+
 
 [Other resources]
 Source: https://docs.mistral.ai/docs/guides/other-resources
@@ -20736,18 +20770,3 @@ Mistral AI's LLM API endpoints charge based on the number of tokens in the input
 
 To help you estimate your costs, our tokenization API makes it easy to count the number of tokens in your text. Simply run `len(tokens)` as shown in the example above to get the total number of tokens in the text, which you can then use to estimate your cost based on our pricing information.
 
-
-[Mistral AI Crawlers]
-Source: https://docs.mistral.ai/docs/robots
-
-## Mistral AI Crawlers
-
-Mistral AI employs web crawlers ("robots") and user agents to execute tasks for its products, either automatically or upon user request. To facilitate webmasters in managing how their sites and content interact with AI, Mistral AI utilizes specific robots.txt tags.
-
-### MistralAI-User
-
-MistralAI-User is for user actions in LeChat. When users ask LeChat a question, it may visit a web page to help answer and include a link to the source in its response. MistralAI-User governs which sites these user requests can be made to. It is not used for crawling the web in any automatic fashion, nor to crawl content for generative AI training.
-
-Full user-agent string: Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; MistralAI-User/1.0; +https://docs.mistral.ai/robots)
-
-Published IP addresses: https://mistral.ai/mistralai-user-ips.json
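The retained context line above points at counting tokens with `len(tokens)`. A minimal sketch using the open-source `mistral-common` tokenizer to obtain such a count locally, assuming the v3 tokenizer matches the target model:

```python
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

# Load a tokenizer; v3 is assumed here, pick the one matching your model.
tokenizer = MistralTokenizer.v3()

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(
        messages=[UserMessage(content="How many tokens is this request?")],
        model="mistral-large-latest",
    )
)

tokens = tokenized.tokens
print(len(tokens))  # token count used for cost estimation
```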
