
GPT4All v2.7.3: List of Chats: Visible construction of the short summary/main idea of a Chat #2233

SINAPSA-IC opened this issue Apr 17, 2024 · 0 comments
Labels
bug-unconfirmed chat gpt4all-chat issues

SINAPSA-IC commented Apr 17, 2024

Hello.

I am not claiming that this is a bug, but rather an unnecessary or redundant procedure.

In GPT4All v2.7.3, when opening the list of Chats on the left, the text of one or more entries (I have seen this on one entry, since only one previous Chat was available) is visibly formed token by token, as if the associated LLM were generating it at that moment, instead of being shown all at once after being read from a static list (a simple one-to-one lookup, so to speak).

Steps to Reproduce

Just look at the Chats list, as in the image below.
You may notice the text being formed on an entry. I first noticed this today in v2.7.3, and I have been using the program frequently since v2.5.4.

Expected Behavior

The entries in the list of Chats should be static text, shown all at once, not generated on the spot by the reloaded LLM doing supplementary work.

Motivation: to avoid reloading the associated LLM just to produce a very short summary (as seems to be the case), and to conserve memory and processor time.
There are instances when a Chat cannot be deleted because the corresponding model is being reloaded, and the program simply stops responding after the user clicks the trashcan.

Possible solution:

  1. The very short summary shown as an entry in the list should be generated when the respective conversation ends, or at some point during it.
  2. For subsequent use, that static text should be linked to the model that generated it, whose name would be shown alongside the text.
  3. The user would then be given the choice to load that LLM (and continue the conversation).
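Steps 1 and 2 above amount to a small persisted record: the summary is generated once, stored together with the name of the model that produced it, and afterwards read back without reloading any model. A minimal sketch in Python; the `ChatListEntry` type, the JSON file format, and the function names are all hypothetical illustrations, not GPT4All's actual implementation:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ChatListEntry:
    """Static entry shown in the Chats list; hypothetical structure."""
    summary: str     # very short summary, generated once at conversation end
    model_name: str  # model that produced the summary, shown alongside it

def save_entry(entry: ChatListEntry, path: str) -> None:
    """Persist the entry when the conversation ends (step 1)."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(entry), f)

def load_entry(path: str) -> ChatListEntry:
    """Read the static text back for display; no LLM reload needed (step 2)."""
    with open(path, encoding="utf-8") as f:
        return ChatListEntry(**json.load(f))
```

Displaying the list would then only require `load_entry`, and the stored `model_name` gives the UI what it needs to offer loading that LLM on demand (step 3).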

Your Environment

  • GPT4All version 2.7.3
  • Operating System: Windows 10
  • Chat model used (if applicable): any

Thank you for considering this.

[Screenshot: img_gpt4all273chats]
