
Custom Prompts #344

@Hakimovich99

Description


Hey!

Sorry for not following the issue template. I saw a few issues about the ability to use custom prompts (#245, #334). I wanted to know whether we can expect this to be released in ragas, or whether there is a reason for not providing this ability.

I would like to compare evaluations run with different LLMs and in different languages. It would be nice to be able to modify the evaluation prompt when computing metrics such as context_relevancy and context_precision; a rough sketch of what I have in mind follows.
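To make the ask concrete, here is a purely hypothetical sketch of the kind of interface I am thinking of. The `prompt` override below does not exist in ragas today; the attribute name and the French template are invented for illustration only:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import context_precision

# Hypothetical: a custom (here, French) evaluation prompt for context_precision.
# The attribute name `prompt` is imagined, not part of the current ragas API.
CUSTOM_CONTEXT_PRECISION_PROMPT = """\
Étant donné une question et un contexte, vérifiez si le contexte est utile
pour répondre à la question. Répondez uniquement par "Oui" ou "Non".
question: {question}
contexte: {context}
réponse:"""

context_precision.prompt = CUSTOM_CONTEXT_PRECISION_PROMPT  # imagined override

# Minimal dataset in the column format ragas expects.
dataset = Dataset.from_dict({
    "question": ["Quelle est la capitale de la France ?"],
    "contexts": [["Paris est la capitale de la France."]],
    "answer": ["Paris"],
    "ground_truths": [["Paris"]],
})

results = evaluate(dataset, metrics=[context_precision])
print(results)
```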

A feature similar to the one in llama_index's dataset generator would be great (see lines 20-28 and line 109 of the file below):
https://github.com/run-llama/llama_index/blob/218392bc3006c344dc2a3407feaf10a61b8193b8/llama_index/evaluation/dataset_generation.py#L109
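For reference, this is roughly the llama_index pattern I mean (written from memory, so class and parameter names may not match the pinned commit exactly): the default question-generation prompt is a module-level constant, and callers can override it when constructing the generator.

```python
# Rough sketch of the llama_index usage referenced above; parameter names are
# from memory and may differ slightly from the pinned commit.
from llama_index import SimpleDirectoryReader
from llama_index.evaluation import DatasetGenerator

documents = SimpleDirectoryReader("data/").load_data()

# The default question-generation prompt lives in dataset_generation.py; it can
# be replaced by passing a custom query string to the generator.
data_generator = DatasetGenerator.from_documents(
    documents,
    question_gen_query=(
        "You are a teacher. Using the provided context, "
        "formulate 3 questions in French that the context can answer."
    ),
    num_questions_per_chunk=3,
)

questions = data_generator.generate_questions_from_nodes()
```

Something equivalent for the ragas metric prompts would cover my use case.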

Thanks, and looking forward to hearing from you!

Metadata

Labels: enhancement (New feature or request)
