Description
✅ I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
This is what the dataset looks like:
[
  {
    "user_input": "test",
    "retrieved_contexts": ["test"],
    "response": "test",
    "reference": "test"
  }
]
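For reference, the same one-row dataset can also be built directly in code, which would rule out any issue with how the JSON file is parsed. This is a minimal sketch based on my reading of the ragas docs; SingleTurnSample and EvaluationDataset(samples=...) come from the docs, and I'm assuming the fields map one-to-one to the JSON keys above.

from ragas import EvaluationDataset, SingleTurnSample

# Build the one-row dataset in code instead of loading it from JSON;
# each SingleTurnSample field mirrors a key in the JSON above.
sample = SingleTurnSample(
    user_input="test",
    retrieved_contexts=["test"],
    response="test",
    reference="test",
)
evaluation_dataset = EvaluationDataset(samples=[sample])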
I'm confident the format is correct, since I followed the documentation.
But evaluation keeps failing with TypeError(BaseChatModel.agenerate() missing 1 required positional argument: 'messages'), and every metric comes back as NaN.
Here's the full output:
/Users/arianx/PycharmProjects/bigproj/.venv/bin/python /Users/arianx/PycharmProjects/bigproj/test__.py
EvaluationDataset(features=['user_input', 'retrieved_contexts', 'response', 'reference'], len=1)
Evaluating: 0%| | 0/1 [00:00<?, ?it/s]Exception raised in Job[0]: TypeError(BaseChatModel.agenerate() missing 1 required positional argument: 'messages')
Evaluating: 100%|██████████| 1/1 [00:00<00:00, 528.85it/s]
{'context_recall': nan}
user_input retrieved_contexts response reference context_recall
0 test [test] test test NaN
Process finished with exit code 0
Could someone help me figure this out? I'd appreciate it!
Code Examples
import json

from ragas import EvaluationDataset, evaluate
from ragas.metrics import LLMContextRecall, Faithfulness, FactualCorrectness

# Load the dataset from disk and wrap it in a ragas EvaluationDataset
with open("tmpds.json", "r", encoding="utf-8") as f:
    dataset = json.load(f)
evaluation_dataset = EvaluationDataset.from_list(dataset)
print(evaluation_dataset)

# Run the evaluation (only LLMContextRecall is used here)
result = evaluate(dataset=evaluation_dataset, metrics=[LLMContextRecall()])
print(result)

df = result.to_pandas()
print(df)
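From the traceback, it looks like ragas is calling the LangChain model's agenerate() without the messages argument, which I'd guess means the metric is getting a raw BaseChatModel rather than a ragas-wrapped one (or no evaluator LLM at all). Below is a minimal sketch of what I believe a correctly configured call would look like, based on the ragas docs. LangchainLLMWrapper and the llm= argument to evaluate() are from the docs; ChatOpenAI and the model name are just assumptions for illustration, and I haven't confirmed this resolves the error.

from langchain_openai import ChatOpenAI
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import LLMContextRecall

# Wrap the LangChain chat model so ragas calls it through its own
# interface instead of hitting BaseChatModel.agenerate() directly.
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))

result = evaluate(
    dataset=evaluation_dataset,
    metrics=[LLMContextRecall()],
    llm=evaluator_llm,  # explicitly pass the wrapped evaluator LLM
)
print(result)

If the wrapped LLM is passed this way, the metric should at least stop raising the TypeError; if it still returns NaN, the cause is probably elsewhere.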