`predict_and_parse` exists, and it's a nice abstraction for applying output parsers to LLM generations. And async is very useful. As an aside, the difference between `call`/`acall`, `predict`/`apredict`, and `generate`/`agenerate` isn't entirely clear to me, other than that they all call into the LLM in slightly different ways. Is there some documentation or a good way to think about these differences?
One thought: output parsers should just work magically for all those LLM calls. If the `output_parser` arg is set on the prompt, the LLM has access to it, so it seems like extra work on the user's end to have to call `output_parser.parse` themselves.
If this sounds reasonable, happy to throw something together. @hwchase17