Conversation

@peanutshawny peanutshawny commented Feb 18, 2024

Since we already specify whether a metric requires an LLM or embeddings model via inheritance from MetricWithLLM and MetricWithEmbeddings, there is no real need to force default LLMs/embeddings in evaluate when, for example, none of the metrics passed in metrics requires embeddings.

I believe that instantiating an LLM/embeddings model at the metric level will clarify how to use evaluate, and will keep things simpler as more metrics are added, since it decouples LLM/embeddings initialization from metrics that may not need them. They are, after all, optional arguments to the function itself.

Copilot Description

This pull request mainly refactors the evaluate function in src/ragas/evaluation.py. The changes optimize the import and usage of llm_factory and embedding_factory and clarify the function's documentation.

Here are the main changes:

  • src/ragas/evaluation.py: Two new imports were added to the top of the file: embedding_factory and llm_factory from ragas.embeddings.base and ragas.llms respectively. This change helps to avoid repetitive imports within the evaluate function.

Changes within the evaluate function:

  • The docstring comments for the llm and embeddings parameters were updated to state that the default language model and embeddings are used only for metrics that require an LLM or embeddings. This makes the function's behavior clearer.
  • The conditional logic for setting llm and embeddings was simplified: llm_factory and embedding_factory are now called only when llm or embeddings is None and a metric in the run actually requires them. This also removes the need to import the factories inside the function body.
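The wiring described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual ragas source: the class and factory names (MetricWithLLM, MetricWithEmbeddings, llm_factory, embedding_factory) mirror the ones mentioned in the PR, but their bodies here are hypothetical stand-ins.

```python
class Metric:
    """Base metric; needs neither an LLM nor embeddings."""


class MetricWithLLM(Metric):
    llm = None  # filled in by evaluate() if left unset


class MetricWithEmbeddings(Metric):
    embeddings = None  # filled in by evaluate() if left unset


def llm_factory():
    # Stand-in for ragas.llms.llm_factory; the real one builds a default LLM.
    return "default-llm"


def embedding_factory():
    # Stand-in for ragas.embeddings.base.embedding_factory.
    return "default-embeddings"


def evaluate(metrics, llm=None, embeddings=None):
    """Attach models only to metrics whose type says they need one."""
    for metric in metrics:
        if isinstance(metric, MetricWithLLM) and metric.llm is None:
            # Default LLM is constructed only when a metric actually needs it.
            metric.llm = llm if llm is not None else llm_factory()
        if isinstance(metric, MetricWithEmbeddings) and metric.embeddings is None:
            metric.embeddings = embeddings if embeddings is not None else embedding_factory()
    return metrics
```

With this shape, calling evaluate with only LLM-based metrics never constructs an embeddings model, which is the decoupling the PR is after.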

@codeant-ai codeant-ai bot added enhancement New feature or request bug_fix labels Feb 18, 2024

codeant-ai bot commented Feb 18, 2024

PR Description updated to latest commit (7a46111)

@peanutshawny peanutshawny marked this pull request as ready for review February 18, 2024 21:09
@jjmachan
Member

thanks a lot for the fix @peanutshawny 🙂

btw, what do you think of the new AI PR helper?

@jjmachan jjmachan merged commit f6a932a into explodinggradients:main Feb 19, 2024