
docs: update RagasEvaluator to reflect Ragas 0.4.3 API changes#11186

Open
ritikraj2425 wants to merge 1 commit into deepset-ai:main from ritikraj2425:update-ragas-docs

Conversation

@ritikraj2425 (Contributor)

Related Issues

Proposed Changes:

Update the RagasEvaluator documentation to reflect the breaking changes in the Ragas 0.4.3 integration:

  • Modernized Metric Initialization: Switched from the RagasMetric enum and metric_params to the modern Ragas metrics API. Documentation now reflects that metrics should be initialized as classes from ragas.metrics.collections (e.g., AnswerRelevancy(llm=llm)).
  • Updated run() Method: Reflected the change in the run() method signature. The component now accepts explicit arguments like query, response, documents, and reference instead of a nested inputs dictionary.
  • LLM Configuration: Updated code examples to use AsyncOpenAI and llm_factory, which is the new recommended pattern for configuring Ragas metrics.
  • Cleanup: Removed a duplicated parameter table in the ragasevaluator.mdx file.
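
The run() signature change in the second bullet can be sketched with a stub. Note this is a hedged illustration assuming nothing beyond the argument names listed above: `StubRagasEvaluator` is a hypothetical stand-in, not the real Haystack component.

```python
# Hedged sketch of the new explicit run() signature (query, response,
# documents, reference) described above. StubRagasEvaluator is a local
# stand-in, NOT the real Haystack component; it only echoes the call
# shape that replaced the old nested `inputs` dictionary.

class StubRagasEvaluator:
    def run(self, query, response, documents, reference=None):
        # The real component forwards these values to the configured
        # Ragas metrics; here we just return them so the shape is visible.
        return {
            "query": query,
            "response": response,
            "documents": documents,
            "reference": reference,
        }

result = StubRagasEvaluator().run(
    query="What is Haystack?",
    response="Haystack is an open-source LLM framework.",
    documents=["Haystack is an open-source framework by deepset."],
    reference="Haystack is an open-source framework for building LLM apps.",
)
print(sorted(result))  # → ['documents', 'query', 'reference', 'response']
```

Migrating a call site is then a matter of unpacking the old nested dictionary into these keyword arguments.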

How did you test it?

  • Manual verification of code snippets against the new RagasEvaluator implementation in the haystack-core-integrations repository (PR #3207).
  • Verified that the example pipeline structure correctly maps to the new component input/output requirements.

Notes for the reviewer

  • These changes bring the manual .mdx documentation in sync with the automatically generated API reference (which was already updated in the main repository).

Checklist

@ritikraj2425 ritikraj2425 requested a review from a team as a code owner April 24, 2026 08:38
@ritikraj2425 ritikraj2425 requested review from julian-risch and removed request for a team April 24, 2026 08:38
@vercel

vercel Bot commented Apr 24, 2026

@ritikraj2425 is attempting to deploy a commit to the deepset Team on Vercel.

A member of the Team first needs to authorize it.

@sjrl sjrl requested review from sjrl and removed request for julian-risch April 29, 2026 07:17

sjrl commented Apr 29, 2026

Hey @ritikraj2425 thanks for opening! Could you handle the merge conflict?

Comment on lines +54 to 57
#### Evaluate Answer Relevancy

- To create a context-relevance evaluation pipeline:
+ To create an answer relevancy evaluation pipeline:

Review comment (Contributor):

Let's add a note that the env variable OPENAI_API_KEY must be set for the code block below to work.
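
One way to surface this requirement in the example itself is a small guard; this is a minimal sketch, and `require_env` is an illustrative helper, not part of Haystack or Ragas.

```python
import os

# Guard for the note above: the example calls OpenAI, so OPENAI_API_KEY
# must be set. require_env is an illustrative helper, not a library API.

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise EnvironmentError(f"Set {name} before running this example.")
    return value

# Placeholder so this sketch runs standalone; in real use, export the key
# in your shell instead of setting it in code.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-placeholder")
api_key = require_env("OPENAI_API_KEY")
```

In the docs themselves, a one-line note ("export OPENAI_API_KEY before running") would serve the same purpose.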

pipeline = Pipeline()
evaluator = RagasEvaluator(
metric=RagasMetric.ANSWER_RELEVANCY,
ragas_metrics=[AnswerRelevancy(llm=llm)],
Review comment (Contributor):

If we want to make this an example about AnswerRelevancy we also need to set the embeddings param

embeddings=embedding_factory("openai", model="text-embedding-3-small", client=client)
AnswerRelevancy(llm=llm, embeddings=embeddings)
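
Put together, the initialization the reviewer suggests could look like the sketch below. The factory signatures and the model name `gpt-4o-mini` are assumptions, and every class and function here is a local stand-in that only mirrors the call shape; the real `AnswerRelevancy`, `llm_factory`, and `embedding_factory` come from ragas, and `client` would be an `openai.AsyncOpenAI` instance.

```python
# Local stand-ins mirroring the call shape in the suggestion above.
# NOT the real ragas/openai objects: the real AnswerRelevancy,
# llm_factory, and embedding_factory live in ragas, and `client`
# would be an openai.AsyncOpenAI instance.

class AnswerRelevancy:
    def __init__(self, llm, embeddings):
        self.llm = llm
        self.embeddings = embeddings

def llm_factory(model, client):
    return {"model": model, "client": client}

def embedding_factory(provider, model, client):
    return {"provider": provider, "model": model, "client": client}

client = object()  # stands in for AsyncOpenAI()
llm = llm_factory("gpt-4o-mini", client=client)  # model name is illustrative
embeddings = embedding_factory(
    "openai", model="text-embedding-3-small", client=client
)
metric = AnswerRelevancy(llm=llm, embeddings=embeddings)
```

The point of the wiring: AnswerRelevancy needs both an LLM (to generate candidate questions) and embeddings (to score them), so the docs example should construct and pass both.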

-     RagasEvaluator,
-     RagasMetric,
- )
+ from haystack_integrations.components.evaluators.ragas import RagasEvaluator
Review comment (Contributor):

Please run this code block locally to make sure it works


Development

Successfully merging this pull request may close these issues.

docs: Update Ragas docs
