
Add apredict_and_parse to LLM #2164

Merged: 2 commits into langchain-ai:master on Mar 30, 2023

Conversation

timothyasp (Contributor)

predict_and_parse exists, and it's a nice abstraction to allow for applying output parsers to LLM generations. And async is very useful.
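
To make that concrete, here's a minimal usage sketch. The prompt, model, and toy parser are purely illustrative (not from this PR), and the imports assume the langchain package layout around the time of this change:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import BaseOutputParser


class CommaListParser(BaseOutputParser):
    """Toy parser: split a comma-separated completion into a list of strings."""

    def parse(self, text: str) -> list:
        return [item.strip() for item in text.split(",")]


prompt = PromptTemplate(
    template="List three {thing}, comma separated:",
    input_variables=["thing"],
    output_parser=CommaListParser(),
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

# predict() returns the raw completion string; predict_and_parse() also runs
# the prompt's output_parser on it.
fruits = chain.predict_and_parse(thing="fruits")  # e.g. ["apple", "banana", "cherry"]

# The async variant this PR adds would mirror that, roughly:
#     fruits = await chain.apredict_and_parse(thing="fruits")
```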

As an aside, the difference between call/acall, predict/apredict, and generate/agenerate isn't entirely clear to me, other than that they all call into the LLM in slightly different ways.

Is there documentation, or a good way to think about these differences?

One thought:

Output parsers should just work magically for all of those LLM calls. If the output_parser arg is set on the prompt, the LLM already has access to it, so it seems like extra work on the user's end to have to call output_parser.parse themselves.
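
Roughly the kind of thing I mean, just as an illustration (the subclass and its name are made up for this sketch, not a proposed diff):

```python
from typing import Any

from langchain.chains import LLMChain


class AutoParsingLLMChain(LLMChain):
    """Illustrative sketch only: run the prompt's output_parser automatically.

    Shown for predict(); the same hook would cover __call__/acall,
    apredict, and generate/agenerate.
    """

    def predict(self, **kwargs: Any) -> Any:
        text = super().predict(**kwargs)
        parser = self.prompt.output_parser
        return parser.parse(text) if parser is not None else text
```

That way callers would get parsed output from any entry point without calling output_parser.parse themselves.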

If this sounds reasonable, happy to throw something together. @hwchase17

hwchase17 (Contributor)

> Output parsers should just work magically for all of those LLM calls. If the output_parser arg is set on the prompt, the LLM already has access to it, so it seems like extra work on the user's end to have to call output_parser.parse themselves.
>
> If this sounds reasonable, happy to throw something together. @hwchase17

Yeah, we thought about this a bit when revamping output parsers... tl;dr we want to hold off for now, but we're not against adding it in the future.

hwchase17 merged commit 6be6727 into langchain-ai:master on Mar 30, 2023.