Enable isolated_node_eval for AnswerGenerator nodes (incl. OpenAI's) #3035

Closed
tholor opened this issue Aug 12, 2022 · 0 comments · Fixed by #3036

Comments

tholor commented Aug 12, 2022

Is your feature request related to a problem? Please describe.
When running pipeline.eval(), we have two options to get metrics from our nodes: isolated and integrated.
While integrated mode already works on our generative QA nodes, isolated mode currently only works for Readers, not Generators.

Example:
When running pipeline.eval() on a pipeline consisting of a BM25Retriever -> OpenAIAnswerGenerator, we get the error message "No node(s) or global parameter(s) named add_isolated_node_eval found in pipeline".
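
For reference, a minimal sketch of such a pipeline in the Haystack v1 API (document indexing and label creation are omitted; eval_labels is assumed to be a list of MultiLabel objects prepared elsewhere):

```python
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import BM25Retriever, OpenAIAnswerGenerator
from haystack.pipelines import Pipeline

document_store = ElasticsearchDocumentStore()
retriever = BM25Retriever(document_store=document_store)
generator = OpenAIAnswerGenerator(api_key="YOUR_OPENAI_KEY")

pipeline = Pipeline()
pipeline.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipeline.add_node(component=generator, name="Generator", inputs=["Retriever"])

# Integrated eval works, but requesting isolated eval currently raises:
# "No node(s) or global parameter(s) named add_isolated_node_eval found in pipeline"
eval_result = pipeline.eval(labels=eval_labels, add_isolated_node_eval=True)
```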

Describe the solution you'd like
Modify the run() method of the generator base class so that, in isolated mode, it uses the perfect labels as input instead of the output of previous nodes.
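
One possible shape of that change, as a rough sketch only (names and details are assumptions; the actual fix landed in #3036 and may differ):

```python
# Illustrative sketch of isolated-eval support in the generator base class.
def run(self, query, documents, labels=None, add_isolated_node_eval=False):
    if add_isolated_node_eval and labels is not None:
        # Isolated mode: ignore the documents coming from the previous node
        # and predict on the gold documents attached to the labels instead.
        documents = [label.document for label in labels.labels]
    results = self.predict(query=query, documents=documents)
    return results, "output_1"
```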

Additional context
This will also be needed to allow experiment runs in deepset Cloud, where the additional isolated mode is the expected default.
