From 53c35469ac9bdc667aba5d1f75ceba3a88597139 Mon Sep 17 00:00:00 2001
From: jjmachan
Date: Thu, 3 Aug 2023 01:16:14 +0530
Subject: [PATCH] docs: correct docs for answer relevancy

---
 docs/metrics.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/metrics.md b/docs/metrics.md
index ef8cac95f..8f3616a04 100644
--- a/docs/metrics.md
+++ b/docs/metrics.md
@@ -35,7 +35,7 @@ results = context_rel.score(dataset)
 This measures how relevant is the generated answer to the prompt. If the generated answer is incomplete or contains redundant information the score will be low. This is quantified by working out the chance of an LLM generating the given question using the generated answer. Values range (0,1), higher the better.
 ```python
 from ragas.metrics.answer_relevancy import AnswerRelevancy
-answer_relevancy = AnswerRelevancy(model_name="t5-small")
+answer_relevancy = AnswerRelevancy()
 # Dataset({
 #     features: ['question','answer'],
 #     num_rows: 25
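
A minimal end-to-end sketch of the corrected snippet, assuming `AnswerRelevancy` exposes the same `.score(dataset)` interface the hunk header shows for context relevancy (`results = context_rel.score(dataset)`); the toy dataset contents here are hypothetical, with only the `question` and `answer` feature names taken from the docs snippet.

```python
from datasets import Dataset
from ragas.metrics.answer_relevancy import AnswerRelevancy

# Hypothetical toy dataset with the 'question' and 'answer'
# features the docs snippet expects.
dataset = Dataset.from_dict({
    "question": ["What is the boiling point of water at sea level?"],
    "answer": ["Water boils at 100 degrees Celsius at sea level."],
})

# Per the corrected docs line, the metric is instantiated with
# no model_name argument after this patch.
answer_relevancy = AnswerRelevancy()

# Assumed to mirror the context-relevancy call in the hunk header.
results = answer_relevancy.score(dataset)
```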