I searched the LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
Example Code
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import GPT4All

# model_path is the filesystem path to a local GPT4All model file
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=model_path, callbacks=callbacks)
Description
Previously it was possible to stream the answer from a GPT4All model, but this no longer works.
In the model source there is a streaming attribute declared at the class level, but it is not used anywhere.
If I edit the source manually to accept streaming as a valid constructor parameter, streaming works again with GPT4All(model=model_path, callbacks=callbacks, streaming=True).
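To illustrate the behavior the fix would restore, here is a minimal, self-contained sketch (a mock, not the real GPT4All internals): a streaming flag accepted in the constructor gates per-token dispatch to the registered callback handlers' on_llm_new_token method. The class and method names below mirror the LangChain callback interface, but the implementation is purely illustrative.

```python
class TokenCollectingHandler:
    """Stand-in for StreamingStdOutCallbackHandler: records tokens instead of printing."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)


class MockGPT4All:
    """Illustrative mock showing how a `streaming` parameter could gate callbacks."""

    def __init__(self, callbacks=None, streaming=False):
        self.callbacks = callbacks or []
        self.streaming = streaming

    def invoke(self, prompt: str) -> str:
        answer = "streamed answer"
        if self.streaming:
            # Emit each token to every registered callback as it is "generated".
            for token in answer.split():
                for cb in self.callbacks:
                    cb.on_llm_new_token(token)
        return answer


handler = TokenCollectingHandler()
llm = MockGPT4All(callbacks=[handler], streaming=True)
result = llm.invoke("hello")
```

With streaming=False the handler receives no tokens, which matches the reported symptom: the callbacks are registered but never invoked because the flag is never honored.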
dosubotbot added the labels Ɑ: models (Related to LLMs or chat model modules) and 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) on Jan 22, 2024.
It's great to see that you've identified a potential solution to the issue with streaming for GPT4All in LangChain. Your proposed fix seems reasonable, and it's wonderful that you're willing to contribute to the project by creating a pull request. Your contribution will be valuable in improving the framework for all users. Keep up the good work, and thank you for your willingness to contribute to LangChain!
System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
Debian Sid
Python 3.10.4
Related Components