llm.batch method fails with max_concurrency in config #12643
System Info
Who can help?
No response
Information
Related Components
Reproduction
Similar to the guide for `chat_models.batch`, but if we use `llm` instead, the `batch` and `abatch` methods will throw exceptions. Here is an example for `batch`.
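A minimal sketch of the failing call (the model choice and prompts are illustrative; any `BaseLLM` subclass such as `OpenAI` hits the same failure):

```python
from langchain.llms import OpenAI

llm = OpenAI()  # any BaseLLM subclass; assumes OPENAI_API_KEY is set

prompts = [
    "Tell me a joke",
    "Write a haiku about autumn",
    "Name three colors",
]

# Passing max_concurrency through the config triggers the failure.
results = llm.batch(prompts, config={"max_concurrency": 5})
```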
You get an error for a dead loop exceeding the recursion limit (a `RecursionError`).
Or if you set `max_concurrency` to be smaller than the length of the prompts, say 1, the call fails as well.
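A sketch of that variant, reusing `llm` and `prompts` from the example above:

```python
# With a limit smaller than len(prompts), the recursive calls receive a
# config list whose length no longer matches their sub-batch.
results = llm.batch(prompts, config={"max_concurrency": 1})
```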
Similar errors occur for `abatch` with `max_concurrency`.
I believe the bug lies in `langchain/libs/langchain/langchain/llms/base.py` (line 297 at commit f94e24d): the `config` list length is inconsistent with the `inputs` length after the inputs are chopped up by `max_concurrency`. The fix could be as simple as the sketch below.
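A standalone sketch of the idea, paraphrasing the `max_concurrency` branch of `BaseLLM.batch` rather than quoting it verbatim (the helper name and signature here are mine, not LangChain's):

```python
from typing import Any, List

def batch_with_max_concurrency(
    llm: Any,
    inputs: List[str],
    config: List[dict],
    max_concurrency: int,
    **kwargs: Any,
) -> List[str]:
    """Paraphrase of the max_concurrency branch of BaseLLM.batch."""
    batches = [
        inputs[i : i + max_concurrency]
        for i in range(0, len(inputs), max_concurrency)
    ]
    # Bug in the original: every recursive call received the full-length
    # config list (one entry per original input, still carrying
    # max_concurrency), so config and sub-batch lengths no longer matched
    # and the recursion never bottomed out:
    #     llm.batch(batch, config=config, ...)
    #
    # Possible fix: clear max_concurrency and hand each sub-batch only its
    # own slice of the config list, keeping the lengths consistent.
    config = [{**c, "max_concurrency": None} for c in config]
    return [
        output
        for i, batch in enumerate(batches)
        for output in llm.batch(
            batch,
            config=config[i * max_concurrency : (i + 1) * max_concurrency],
            **kwargs,
        )
    ]
```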
Expected behavior

No error, just like with `chat_models`.
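For comparison, the equivalent chat-model call works (a sketch; `ChatOpenAI` stands in for any chat model):

```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()
# The same config works for chat models without recursion errors.
results = chat.batch(prompts, config={"max_concurrency": 5})
```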
Comments

🤖 Hello @kaihe-stori, I'm Dosu, an AI here to assist you with your LangChain issues while you're waiting for a human maintainer. I'm here to help you solve bugs, answer questions, and guide you on your journey to becoming a contributor.

From your description, it seems like you've encountered a recursion error when using the `batch` method of an LLM with `max_concurrency` set in the config. Your proposed fix of keeping the `config` list length consistent with the chopped-up inputs seems reasonable as a workaround. However, please note that this is a temporary workaround and might not be the best solution in the long run. The LangChain team might need to revisit the handling of the `config` list in `batch` and `abatch`.

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

Best, Dosu

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.

… of `BaseLLM` (langchain-ai#12713)

- **Description:** Correct number of elements in config list in `batch()` and `abatch()` of `BaseLLM` in case `max_concurrency` is not None.
- **Issue:** langchain-ai#12643
- **Twitter handle:** @akionux

Co-authored-by: Bagatur <baskaryan@gmail.com>