
[Suggestion] Adaptive memory #10

Open
andrewc0de opened this issue Mar 5, 2023 · 3 comments

@andrewc0de

Typically, I rely on ChatGPT to handle large chunks of code or lengthy blocks of text, depending on my work process. Unfortunately, this frequently leads to hitting the 4096-token limit, which forces me to start the conversation over from scratch and can be quite frustrating.

It would be immensely helpful if this project incorporated a feature similar to ChatGPT's adaptive memory system. This approach would only retain the most pertinent recent messages in memory, enabling ChatGPT to comprehend the context of the conversation without having to store the entire conversation history. This would prevent the API from hitting the token limit, making the experience much smoother.
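A minimal sketch of that kind of windowed memory, assuming a Python client, tiktoken for token counting, and a 4096-token context with some room reserved for the reply (the budget split, constants, and function names here are illustrative, not something the project already has):

```python
# Sliding-window memory sketch: keep the system prompt plus as many of the
# most recent messages as fit inside a token budget. Token counts use
# tiktoken as an approximation of what the API will charge.
import tiktoken

MODEL = "gpt-3.5-turbo"
MAX_CONTEXT_TOKENS = 4096
RESERVED_FOR_REPLY = 1024  # leave room for the model's answer

def count_tokens(text: str, model: str = MODEL) -> int:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

def trim_history(messages: list[dict]) -> list[dict]:
    """Keep the first (system) message and the newest messages that fit the budget."""
    budget = MAX_CONTEXT_TOKENS - RESERVED_FOR_REPLY
    system, rest = messages[0], messages[1:]
    budget -= count_tokens(system["content"])

    kept: list[dict] = []
    for msg in reversed(rest):                    # walk from newest to oldest
        cost = count_tokens(msg["content"]) + 4   # ~4 tokens of per-message overhead
        if budget - cost < 0:
            break
        kept.append(msg)
        budget -= cost
    return [system] + list(reversed(kept))
```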

@SamuelMiller

This conversation discusses various approaches. https://community.openai.com/t/chatgpt-api-maximum-token/83321

@SamuelMiller

SamuelMiller commented Mar 8, 2023

This may be tricky, but I wonder if there is a way to keep user-defined strings assigned in earlier prompts in their entirety when the prior conversation in the thread is summarized to save tokens. Something like:

{stringObject} = """blah, blah, for multiple lines"""

Then I refer to that string in future prompts in that thread (I suspect until the 4096-token limit is reached):

What are the main concepts that are presented in {stringObject}?
What metaphysical, epistemological, and ethical frameworks can be used to analyze the concepts presented in {stringObject}?

Ideally, I want to be able to use the {stringObject} for as long as I can in that particular conversation.
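One hedged sketch of how that could work (nothing like this exists in the project yet; the pinned flag and the summarize callback are hypothetical): mark certain messages as pinned so they are always sent verbatim, and only summarize the older, unpinned turns once the conversation gets long.

```python
# Hypothetical sketch: messages flagged "pinned" (e.g. the {stringObject}
# definition) are always sent verbatim; only older, unpinned turns are
# replaced by a summary once the conversation grows too long.
def build_prompt(messages: list[dict], summarize) -> list[dict]:
    pinned = [m for m in messages if m.get("pinned")]
    unpinned = [m for m in messages if not m.get("pinned")]

    older, recent = unpinned[:-4], unpinned[-4:]   # keep the last few turns verbatim
    prompt: list[dict] = []
    if older:
        prompt.append({"role": "system",
                       "content": "Summary of earlier conversation: " + summarize(older)})
    return prompt + pinned + recent
```

Here summarize would be whatever routine condenses a list of messages into a short string, which keeps the pinned {stringObject} usable for as long as the conversation lasts.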

@palandovalex

Well, this could be done with ChatGPT itself. If we hit the token limit, just ask GPT to compress the earlier messages. The system could then store both the real chat history and the compressed history, and the compressed version could also be edited by the user.
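A rough sketch of that idea, assuming the openai Python package (1.x client) and gpt-3.5-turbo; the prompt wording and the two-history layout are just illustrative:

```python
# Ask the model itself to compress older turns, then keep both the full
# transcript and the compressed version.
from openai import OpenAI

client = OpenAI()

def compress(messages: list[dict], model: str = "gpt-3.5-turbo") -> str:
    transcript = "\n".join(f'{m["role"]}: {m["content"]}' for m in messages)
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Compress the following conversation into a short summary "
                        "that preserves all facts, names, and decisions."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

full_history: list[dict] = []   # real chat history, never discarded
compressed_history: str = ""    # summary of older turns, can be hand-edited
```

Keeping the full transcript around means the compressed summary can always be regenerated or edited later, which matches the "store both" part of the suggestion.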
