docs: Mistral docs v2 #1674

Merged · 6 commits · May 24, 2024
Changes from all commits
docs/docs/guides/ecosystem/mistral.md: 76 changes (46 additions, 30 deletions)
@@ -1,52 +1,68 @@
 ---
 sidebar_position: 1
-hide_table_of_contents: false
+hide_table_of_contents: true
 ---

 # MistralAI

-Weave automatically tracks and logs LLM calls made via the [MistralAI Python library](https://github.com/mistralai/client-python), after `weave.init()` is called.
+Weave automatically tracks and logs LLM calls made via the [MistralAI Python library](https://github.com/mistralai/client-python).

-## Setup
+## Traces

-1. Install the MistralAI Python library:
-   ```bash
-   pip install mistralai weave
-   ```
+It’s important to store traces of LLM applications in a central database, both during development and in production. You’ll use these traces for debugging, and as a dataset that will help you improve your application.

-2. Initialize Weave in your Python script:
-   ```python
-   import weave
-   weave.init("cheese_recommender")
-   ```
-   :::note
-   We patch the mistral `chat_completion` method for you to keep track of your LLM calls.
-   :::
+Weave will automatically capture traces for [mistralai](https://github.com/mistralai/client-python). You can use the library as usual, start by calling `weave.init()`:

-3. Use the MistralAI library as usual:

 ```python
+import weave
+weave.init("cheese_recommender")
+
+# then use mistralai library as usual
 import os
 from mistralai.client import MistralClient
 from mistralai.models.chat_completion import ChatMessage

 api_key = os.environ["MISTRAL_API_KEY"]
 model = "mistral-large-latest"

 client = MistralClient(api_key=api_key)

 messages = [
     ChatMessage(role="user", content="What is the best French cheese?")
 ]

 chat_response = client.chat(
     model=model,
     messages=messages,
 )
-
-print(chat_response.choices[0].message.content)
 ```

-Weave will now track and log all LLM calls made through the MistralAI library. You can view the logs and insights in the Weave web interface.
+Weave will now track and log all LLM calls made through the MistralAI library. You can view the traces in the Weave web interface.

-[![mistral_trace.png](mistral_trace.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
+[![mistral_trace.png](imgs/mistral_trace.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
+
+## Wrapping with your own ops
+
+Weave ops make results *reproducible* by automatically versioning code as you experiment, and they capture their inputs and outputs. Simply create a function decorated with [`@weave.op()`](https://wandb.github.io/weave/guides/tracking/ops) that calls into [`mistralai.client.MistralClient.chat()`](https://docs.mistral.ai/capabilities/completion/) and Weave will track the inputs and outputs for you. Let's see how we can do this for our cheese recommender:
+
+```python
+# highlight-next-line
+@weave.op()
+def cheese_recommender(region:str, model:str) -> str:
+    "Recommend the best cheese in a given region"
+
+    messages = [ChatMessage(role="user", content=f"What is the best cheese in {region}?")]
+
+    chat_response = client.chat(
+        model=model,
+        messages=messages,
+    )
+    return chat_response.choices[0].message.content
+
+cheese_recommender(region="France", model="mistral-large-latest")
+cheese_recommender(region="Spain", model="mistral-large-latest")
+cheese_recommender(region="Netherlands", model="mistral-large-latest")
+```
+
+[![mistral_ops.png](imgs/mistral_ops.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
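Note that the updated first snippet now stops at `client.chat()` and no longer prints the reply (the `print` line from the previous version of the page is removed above). A minimal sketch of inspecting the results locally, reusing the `chat_response` object and the `cheese_recommender` op from the snippets in this diff; the `best_cheese` variable name is only illustrative:

```python
# Reuses names from the snippets above; assumes `client.chat()` has already
# returned `chat_response` as shown there.
print(chat_response.choices[0].message.content)

# The @weave.op()-decorated function returns the message content directly,
# so its result can be captured and printed the same way.
best_cheese = cheese_recommender(region="France", model="mistral-large-latest")
print(best_cheese)
```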