Hello.
I am not claiming this is a bug, but rather an unnecessary or redundant procedure.
In GPT4All v2.7.3, when opening the list of Chats on the left,
the text of one or more entries...
(I saw this on a single entry, because only one previous Chat was available)
... is constructed token by token,
as if the associated LLM were generating it at that moment,
instead of being shown all at once, taken from a stored static list.
Steps to Reproduce
Just look at the Chats list - as in the image below.
You may notice the text being formed on an entry; I first noticed this today in v2.7.3, though I have been using the program frequently since v2.5.4.
Expected Behavior
The entries in the list of Chats should be static text, shown all at once, not generated on the spot by a reloaded LLM doing extra work.
Motivation: to avoid reloading the associated LLM just to produce a very short summary (as appears to be happening), and to conserve memory and processor time.
There are also cases when a Chat cannot be deleted because the corresponding model is being reloaded, and the program simply stops responding after the user clicks the trashcan icon.
Possible solution:
- The very short summary shown as a list entry should be generated when the conversation ends, or at some point during it.
- For subsequent use, that static text should be stored together with the name of the model that generated it, and that name shown alongside the text.
- The user could then be given the choice to load that LLM and continue the conversation.
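To make the proposal concrete, here is a minimal sketch of the idea of storing a chat's summary as static text together with the generating model's name, so the chat list can display it without reloading the LLM. All names here (`ChatSummary`, `save_summary`, `load_summary`) are illustrative assumptions for this sketch, not GPT4All's actual internals, which are written in C++/Qt.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ChatSummary:
    summary_text: str  # the very short summary, generated once at chat end
    model_name: str    # the model that produced it, shown next to the text

def save_summary(entry: ChatSummary, path: str) -> None:
    # Persist the summary as static text so the chat list can display
    # it immediately on the next launch, without reloading any model.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(entry), f)

def load_summary(path: str) -> ChatSummary:
    # Read the stored summary back; no LLM involvement is needed here.
    with open(path, "r", encoding="utf-8") as f:
        return ChatSummary(**json.load(f))
```

With something along these lines, opening the Chats panel would only read stored text, and the model name kept in the record is what would let the user opt to load that same LLM to continue the conversation.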
Your Environment
GPT4All version 2.7.3
Operating System: Windows 10
Chat model used (if applicable): any
Thank you for considering this.
SINAPSA-IC changed the title from "GPT v2.7.3: List of Chats: Visible construction of the short summary/main idea of a Chat" to "GPT4All v2.7.3: List of Chats: Visible construction of the short summary/main idea of a Chat" on Apr 17, 2024.