Fix incomplete json output in guideline evaluator #12646
Conversation
@@ -71,7 +79,9 @@ def __init__(
         else:
             self._eval_template = eval_template or DEFAULT_EVAL_TEMPLATE

-        self._output_parser = PydanticOutputParser(output_cls=EvaluationData)
+        self._output_parser = PydanticOutputParser(
+            output_cls=EvaluationData, pydantic_format_tmpl=PYDANTIC_FORMAT_TMPL
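For context, the overridden template presumably follows the shape of the default `PYDANTIC_FORMAT_TMPL`, plus the conciseness hint this PR describes; the exact wording below is an assumption, not copied from the change (`{schema}` is the placeholder the parser fills with the Pydantic schema):

```python
# Assumed shape only -- the actual string in this change may differ.
PYDANTIC_FORMAT_TMPL = """\
Here's a JSON schema to follow:
{schema}

Output a valid JSON object but do not repeat the schema.
Keep the feedback concise so the complete JSON fits within the token limit.
"""
```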
Why do we modify the prompt here? There is already a very similar prompt here:
PYDANTIC_FORMAT_TMPL
It feels like either the default prompt could be modified, or this is specific to the LLM you are using, in which case you should just pass in a customized output parser as a kwarg.
Yes. Actually, I'm overriding the template for evaluations only, because I'm not sure whether modifying the default template is a common requirement for JSON output.
On reflection, adding a custom output parser to the constructor makes more sense, since it allows flexibility with different LLMs/settings and preserves backward compatibility. I have added a custom output parser parameter now.
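For illustration, a minimal sketch of what the new parameter enables (the parameter name `output_parser` and the import paths are assumptions based on this discussion, not confirmed by the diff):

```python
from llama_index.core.evaluation import GuidelineEvaluator
from llama_index.core.evaluation.guideline import EvaluationData
from llama_index.core.output_parsers import PydanticOutputParser

# Build a parser around a concise format template such as the one
# sketched earlier in this thread.
custom_parser = PydanticOutputParser(
    output_cls=EvaluationData,
    pydantic_format_tmpl=PYDANTIC_FORMAT_TMPL,
)

# Pass it in instead of relying on the evaluator's default parser.
evaluator = GuidelineEvaluator(
    guidelines="Responses must be factual and grounded in the source text.",
    output_parser=custom_parser,  # name assumed from the conversation above
)
```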
Description
For the guidelines evaluator, the code expects JSON output from the LLM and raises an exception when it fails to parse that JSON.
However, the LLM response sometimes exceeds the max-token limit because of long feedback; the output JSON is then truncated, which breaks the evaluation process. This is unintended behavior for an evaluation run.
Since we don't need very long feedback during evaluation, I added a conciseness hint to the output-format prompt to fix this issue.
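To make the failure mode concrete: when generation stops at the token limit partway through the object, parsing fails and the whole run aborts (the truncated string below is invented for the example):

```python
import json

# Generation hit max tokens midway through the "feedback" field.
truncated = '{"passing": false, "feedback": "The response fails to addr'

json.loads(truncated)  # raises json.JSONDecodeError, breaking the evaluation
```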
Fixes # (issue)
New Package?
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?

Version Bump?
Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
Suggested Checklist:
I ran `make format; make lint` to appease the lint gods