
Streaming broken for GPT4all #16389

Closed
4 of 15 tasks
tomjorquera opened this issue Jan 22, 2024 · 2 comments · Fixed by #16392
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments

@tomjorquera (Contributor)

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.

Example Code

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import GPT4All

callbacks = [StreamingStdOutCallbackHandler()]
# Expected: tokens are printed to stdout as they are generated.
# Actual: nothing is streamed; the answer is only available once generation finishes.
llm = GPT4All(model=model_path, callbacks=callbacks)

Description

Previously it was possible to stream the answer of a GPT4All model, but this no longer works.

In the model source, a streaming attribute is declared at the class level, but it is never used anywhere.

If I manually edit the source to accept streaming as a valid parameter, I can make streaming work again with GPT4All(model=model_path, callbacks=callbacks, streaming=True).
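The underlying pattern can be sketched without LangChain. In a minimal illustration (hypothetical BrokenLLM/FixedLLM classes, not the real library code), a streaming attribute declared only at the class level has no effect, because the generation method never consults it; the fix is to accept it as a constructor parameter and thread it through to the token loop:

```python
class BrokenLLM:
    """Mimics the bug: `streaming` is declared but never read."""
    streaming: bool = False  # class-level annotation only; no effect on behavior

    def generate(self, prompt, on_token=None):
        # Tokens are joined and returned in one piece; on_token is ignored.
        return "".join(["hello", " ", "world"])


class FixedLLM:
    """Mimics the fix: `streaming` is a real parameter checked per token."""

    def __init__(self, streaming=False):
        self.streaming = streaming

    def generate(self, prompt, on_token=None):
        tokens = ["hello", " ", "world"]
        if self.streaming and on_token is not None:
            for tok in tokens:
                on_token(tok)  # fire the callback as each token is produced
        return "".join(tokens)


received = []
llm = FixedLLM(streaming=True)
out = llm.generate("hi", on_token=received.append)
# `received` now holds the individual tokens; `out` is the full answer.
```

This is only a sketch of the failure mode, not the actual LangChain implementation; in the real code the callback handler (e.g. StreamingStdOutCallbackHandler) plays the role of on_token.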

System Info

langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13

Debian Sid

Python 3.10.4

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async
@dosubot added the labels Ɑ: models (Related to LLMs or chat model modules) and 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) on Jan 22, 2024.

dosubot bot commented Jan 22, 2024

🤖

It's great to see that you've identified a potential solution to the issue with streaming for GPT4All in LangChain. Your proposed fix seems reasonable, and it's wonderful that you're willing to contribute to the project by creating a pull request. Your contribution will be valuable in improving the framework for all users. Keep up the good work, and thank you for your willingness to contribute to LangChain!



@tomjorquera (Contributor, Author)

I created #16392, which implements the changes mentioned above.
