
Add Multi-Evaluator Node #200

Open
ianarawjo opened this issue Dec 28, 2023 · 0 comments

Currently each evaluator is a single node. Although code evaluators can support dictionary outputs for multiple metrics, this behavior is relatively obscure and only works for code-based assertions. To better support "iterative refinement" of prompts for developers, we should make it easier to add multiple, independent evaluations in the same node (i.e. a mix of named code assertions and LLM scoring prompts).
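
For reference, the existing multi-metric behavior looks roughly like the sketch below: a code evaluator that returns a dictionary of named metrics rather than a single value. The `evaluate(response)` signature and `response.text` field are assumptions written for illustration, not the exact evaluator API.

```ts
// Minimal sketch of a code evaluator returning several named metrics at once.
// The signature and the `text` field are illustrative assumptions.
function evaluate(response: { text: string }): Record<string, number | boolean> {
  const text = response.text;

  // Example boolean assertion: does the response parse as JSON?
  let isJson = true;
  try {
    JSON.parse(text);
  } catch {
    isJson = false;
  }

  // Example numeric metric: rough word count.
  const numWords = text.split(/\s+/).filter(Boolean).length;

  // Returning an object yields multiple named metrics from a single evaluator.
  return {
    is_json: isJson,
    num_words: numWords,
    under_100_words: numWords < 100,
  };
}
```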

To implement this we need to:

  • abstract out the code eval and LLM scorer subcomponents from their respective nodes, so that they can be added independently of the node (similar to how the Response Inspector view works); see the sketch after this list
  • (possibly useful, but not strictly required) lock LLM scorer nodes to true/false values by default, or otherwise rethink them to be easier to write, adding expected output types that scores must conform to
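
To make the first bullet concrete, here is a rough sketch of what a shared subcomponent interface and a Multi-Evaluator runner could look like. All of the names below (the interface, its fields, the runner function) are hypothetical and only meant to show the shape of the abstraction; they are not the existing ChainForge API.

```ts
// Hypothetical shared interface that both code-assertion and LLM-scorer
// subcomponents could implement, so a single Multi-Evaluator node can hold
// an ordered list of them.
interface EvaluatorSubcomponent {
  name: string;                               // metric name shown in inspectors
  kind: "code" | "llm";                       // which editor UI to render
  outputType: "bool" | "number" | "category"; // expected score type
  evaluate(responseText: string): Promise<boolean | number | string>;
}

// The Multi-Evaluator node would then run each subcomponent over a response
// and collect the named results into one record.
async function runMultiEvaluator(
  subcomponents: EvaluatorSubcomponent[],
  responseText: string,
): Promise<Record<string, boolean | number | string>> {
  const results: Record<string, boolean | number | string> = {};
  for (const sub of subcomponents) {
    results[sub.name] = await sub.evaluate(responseText);
  }
  return results;
}
```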