96 changes: 66 additions & 30 deletions docs/getstarted/quickstart.md
@@ -39,14 +39,72 @@ pip install -e .

## Step 3: Set Your API Key

Let's use OpenAI as LLM provider and set the environment variable:
By default, the quickstart example uses OpenAI. Set your API key and you're ready to go. You can also switch to another provider with a minor change:

```sh
# OpenAI (default)
export OPENAI_API_KEY="your-openai-key"
```
=== "OpenAI (Default)"
```sh
export OPENAI_API_KEY="your-openai-key"
```

The quickstart project is already configured to use OpenAI. You're all set!
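
If you want to double-check the key before running anything, here is a minimal sanity check (just an illustration; any equivalent shell check works):

```python
import os

# Fails fast if the key never made it into the environment
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
```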

=== "Anthropic Claude"
Set your Anthropic API key:

```sh
export ANTHROPIC_API_KEY="your-anthropic-key"
```

Then update the `_init_clients()` function in `evals.py`:

```python
from ragas.llms import llm_factory

llm = llm_factory("claude-3-5-sonnet-20241022", provider="anthropic")
```
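
Optionally, smoke-test the key before running the evaluation — a sketch assuming the `anthropic` Python SDK (`pip install anthropic`) is installed:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
msg = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=32,
    messages=[{"role": "user", "content": "Reply with the word: pong"}],
)
print(msg.content[0].text)
```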

=== "Google Gemini"
Set up your Google credentials:

```sh
export GOOGLE_API_KEY="your-google-api-key"
```

Then update the `_init_clients()` function in `evals.py`:

```python
from ragas.llms import llm_factory

llm = llm_factory("gemini-1.5-pro", provider="google")
```

=== "Local Models (Ollama)"
Install and run Ollama locally, then update the `_init_clients()` function in `evals.py`:

```python
from ragas.llms import llm_factory

llm = llm_factory(
"mistral",
provider="ollama",
base_url="http://localhost:11434" # Default Ollama URL
)
```
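
Before pointing `evals.py` at it, you can confirm the server is up and the model is pulled — a sketch assuming the default port and the `requests` package:

```python
import requests

# Ollama's /api/tags endpoint lists locally available models
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])
```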

=== "Custom / Other Providers"
For any LLM with an OpenAI-compatible API:

If you want to use any other LLM provider, check below on how to configure that.
```python
from ragas.llms import llm_factory

llm = llm_factory(
"model-name",
api_key="your-api-key",
base_url="https://your-api-endpoint"
)
```
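
In principle, this same pattern covers self-hosted OpenAI-compatible servers (vLLM and the like): only the model name, `api_key`, and `base_url` should need to change.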

For more details, learn about [LLM integrations](../concepts/metrics/index.md).

## Project Structure

@@ -88,6 +146,8 @@ The evaluation will:

![](../_static/imgs/results/rag_eval_result.png)

Congratulations! You have a complete evaluation setup running. 🎉

---

## Customize Your Evaluation
Expand Down Expand Up @@ -121,30 +181,6 @@ def load_dataset():
return dataset
```
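
To point the quickstart at your own data, the simplest route is to build the samples in code. A sketch, assuming the `SingleTurnSample`/`EvaluationDataset` schema from recent Ragas releases (field names may differ in older versions):

```python
from ragas import EvaluationDataset
from ragas.dataset_schema import SingleTurnSample

def load_dataset():
    # Hypothetical example rows; replace with your own data
    samples = [
        SingleTurnSample(
            user_input="What does Ragas evaluate?",
            retrieved_contexts=["Ragas scores RAG pipelines."],
            response="Ragas evaluates RAG pipelines.",
            reference="Ragas is a library for evaluating RAG pipelines.",
        )
    ]
    return EvaluationDataset(samples=samples)
```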

### Change the LLM Provider

In the `_init_clients()` function in `evals.py`, update the LLM factory call:

```python
import os

from openai import OpenAI
from ragas.llms import llm_factory

def _init_clients():
    """Initialize OpenAI client and RAG system."""
    openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    # default_rag_client is provided by the quickstart template
    rag_client = default_rag_client(llm_client=openai_client)

    # Use Anthropic Claude instead
    llm = llm_factory("claude-3-5-sonnet-20241022", provider="anthropic")

    # Or use Google Gemini
    # llm = llm_factory("gemini-1.5-pro", provider="google")

    # Or use local Ollama
    # llm = llm_factory("mistral", provider="ollama", base_url="http://localhost:11434")

    return openai_client, rag_client, llm
```

### Customize Dataset and RAG System

The template includes:
28 changes: 25 additions & 3 deletions docs/howtos/integrations/_opik.md
@@ -188,11 +188,33 @@ rag_pipeline("What is the capital of France?")



#### Evaluating datasets

If you're looking to evaluate a dataset, you can use the Ragas `evaluate` function. When using this function, the Ragas library will compute the metrics on all the rows of the dataset and return a summary of the results.

You can use the OpikTracer callback to log the results of the evaluation to the Opik platform. For this, we will configure the OpikTracer:

```python
from datasets import load_dataset

from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")

# Reformat the dataset to match the schema expected by the Ragas evaluate function
dataset = fiqa_eval["baseline"].select(range(3))

dataset = dataset.map(
    lambda x: {
        "user_input": x["question"],
        "reference": x["ground_truth"],
        "retrieved_contexts": x["contexts"],
    }
)

opik_tracer_eval = OpikTracer(tags=["ragas_eval"], metadata={"evaluation_run": True})

result = evaluate(
    dataset,
    metrics=[context_precision, faithfulness, answer_relevancy],
    callbacks=[opik_tracer_eval],
)

print(result)
```