
Display short-term and long-term memory usage #5

Closed
Torantulino opened this issue Mar 29, 2023 · 14 comments
@Torantulino
Member

Torantulino commented Mar 29, 2023

Auto-GPT currently pins its long-term memory to the start of its context window. It is able to manage this through commands.

Auto-GPT should be aware of its short- and long-term memory usage so that it knows when something is going to be deleted from its memory due to context limits. e.g. memory usage: (2555/4000 tokens)

This may lead to some interesting behaviour where it is less inclined to read long strings of text, or is more meticulous about saving information to long-term memory when it sees it's running low on tokens.
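For illustration, a minimal sketch of what such a readout could produce, using the tiktoken library for counting; the 4000-token limit, message format, and function name are assumptions, not Auto-GPT's actual internals:

```python
# Minimal sketch of a memory-usage readout. The context limit and
# message format are illustrative assumptions.
import tiktoken

CONTEXT_LIMIT = 4000  # assumed context size for this sketch

def memory_usage_line(messages: list[str], model: str = "gpt-3.5-turbo") -> str:
    encoding = tiktoken.encoding_for_model(model)
    used = sum(len(encoding.encode(m)) for m in messages)
    return f"memory usage: ({used}/{CONTEXT_LIMIT} tokens)"

# memory_usage_line(["pinned long-term memory...", "recent dialogue..."])
# -> e.g. "memory usage: (2555/4000 tokens)"
```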

Torantulino added the enhancement label on Mar 29, 2023
@claysauruswrecks

From what I was reading, you can take the context window and compress chunks at the rear into summaries.
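One way to read this: once the window fills up, replace the oldest chunks with an LLM-written summary. A rough sketch under that reading, assuming the openai>=1.0 Python client; the function name, prompt wording, and chunking are illustrative:

```python
# Sketch: compress the oldest chunks of context into a short summary
# so recent chunks stay verbatim. Assumes the openai>=1.0 client.
from openai import OpenAI

client = OpenAI()

def compress_rear(chunks: list[str], keep_recent: int = 4) -> list[str]:
    if len(chunks) <= keep_recent:
        return chunks
    old, recent = chunks[:-keep_recent], chunks[-keep_recent:]
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarize the following conversation history "
                              "in a few sentences:\n\n" + "\n".join(old)}],
    )
    summary = resp.choices[0].message.content
    return [f"[summary of earlier context] {summary}"] + recent
```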

@Torantulino
Member Author

Interesting idea! This would expand short-term memory.

Currently Auto-GPT manages its own "Long-Term Memory", which is "pinned" to the start of the context.

@tedspare

Another approach could be to run history through an embeddings API, save the embeddings to a Vector DB, then do a lookup for relevant memories on each step.
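In outline, the loop described here might look like the following self-contained sketch, with a plain in-memory list standing in for the vector DB; the embeddings endpoint is OpenAI's real API (openai>=1.0 client), everything else is illustrative:

```python
# Sketch of embed -> store -> retrieve, with a list standing in for a
# vector DB. Function names and the storage layout are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()
memory: list[tuple[np.ndarray, str]] = []  # (embedding, text) pairs

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def remember(text: str) -> None:
    memory.append((embed(text), text))

def recall(query: str, k: int = 5) -> list[str]:
    q = embed(query)
    # ada-002 embeddings are unit-normalized, so the dot product is
    # already the cosine similarity
    scored = sorted(memory, key=lambda pair: -float(pair[0] @ q))
    return [text for _, text in scored[:k]]
```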

@Torantulino
Member Author

I've been meaning to look into this.
Is it practical to regularly rebuild/add to an embedding?

Forgive my ignorance, I've never used them.

@tedspare

tedspare commented Apr 2, 2023

All good! Thanks for your reply. In my (limited) understanding, adding embeddings is no more than adding a row to a DB (but with vector data).

@jantic

jantic commented Apr 3, 2023

> Another approach could be to run history through an embeddings API, save the embeddings to a Vector DB, then do a lookup for relevant memories on each step.

I really think this is an excellent idea. In fact it might be a huge win. This would basically give you an indefinite context window in effect, in terms of "long term" memory. Of course the discarding of "irrelevant" info in any given call to the model will be imperfect, but I'd bet it'll work pretty well.

I was thinking about this myself this morning and wondered if anybody else already mentioned it. Basically I see it as an "associative memory", much like what we have in our own minds. You could perhaps have the GPT model generate a few orthogonal short summaries of what it just output and responded to (top 5?), store these in the vector db, and then get the most relevant "memories" for subsequent calls based on this same process.

So combine these "N closest" memories with most recent and I think you'll get a very effective long term memory mechanism.

Is there anyone out there who sees problems with this idea or has a way to improve upon it? It seems super awesome to me...
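A sketch of the combination step described above: pack the N closest memories together with the most recent messages into the prompt, within a token budget. The function name, priority order, and the crude 4-characters-per-token estimate are all assumptions:

```python
# Sketch: combine the N closest memories with the most recent messages
# under a rough token budget. Everything here is illustrative.
def build_context(relevant: list[str], recent: list[str],
                  budget_tokens: int = 3000) -> str:
    parts, used = [], 0
    # most recent messages take priority, then the closest memories
    for text in recent + relevant:
        cost = len(text) // 4 + 1  # crude token estimate
        if used + cost > budget_tokens:
            break
        parts.append(text)
        used += cost
    return "\n".join(parts)
```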

@dschonholtz
Contributor

@Torantulino I'm going to pick this up if it is ok with you.
Here is my laundry list:

  1. Store long-term memory in Pinecone: https://www.pinecone.io/. There are lots of options here; this one is fairly simple and is what babyagi is using: https://github.com/yoheinakajima/babyagi
  2. Pull in n closest memories. Default n to 5, but make it configurable. (Do some experimentation on what seems most useful.)
  3. Make this memory object a class that is optional. Map the delete and add operations on the current memory dict object to Pinecone operations. I'll try to keep this fairly extensible so we could easily make classes with the same interface for different vector DBs (a sketch of one possible shape for that interface follows at the end of this comment).
  4. Add a Pinecone API key in .env.template
  5. Update the readme to tell people to use it.
  6. If no API key is specified, tell the user they are using local memory (the current implementation). Also, support an explicit local memory option.

Let me know if there is anything here you'd like me to change. I should have a working version of this by EOD tomorrow EST.

I would hope to then extend this to processing files in large repos too, and eventually I want this to feed into the self-improvement pipeline, so the agent remembers where the local files relevant to large tasks are.
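One possible shape for the interface from item 3, with the local implementation as the no-API-key fallback; the class and method names are assumptions, not the shape the eventual PR actually took. A Pinecone-backed class would implement the same two methods via index.upsert and index.query:

```python
# Sketch of a pluggable memory interface: one abstract class, a local
# fallback, and (not shown) a Pinecone implementation with the same
# interface. Names are illustrative.
from abc import ABC, abstractmethod

class MemoryBackend(ABC):
    @abstractmethod
    def add(self, text: str) -> None: ...

    @abstractmethod
    def get_relevant(self, query: str, k: int = 5) -> list[str]: ...

class LocalMemory(MemoryBackend):
    """Fallback used when no Pinecone API key is configured."""
    def __init__(self):
        self._items: list[str] = []

    def add(self, text: str) -> None:
        self._items.append(text)

    def get_relevant(self, query: str, k: int = 5) -> list[str]:
        # naive keyword overlap instead of vector similarity
        words = set(query.lower().split())
        scored = sorted(self._items,
                        key=lambda t: -len(words & set(t.lower().split())))
        return scored[:k]
```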

@alreadydone

alreadydone commented Apr 4, 2023

I believe it's possible to simply use a key-value store as memory and make it available to Auto-GPT as a tool, letting the model itself decide when and what to read from and write to the memory. Auto-GPT already has code execution implemented, so it has all Python functions available as tools, and this is just one more tool. To make the model aware of the memory tool and good at utilizing it, we would have to finetune it (e.g. using the Toolformer approach; there are two open-source implementations, and this is more popular than the official one), and we would need to collect some usage data (there isn't any paper or implementation that uses a memory tool yet, AFAIK). Finetuning is available for ChatGPT-3.5 but not GPT-4, but I think we'll need to finetune anyway if we want Auto-GPT to create new tools and self-improve. We could also use an open model (many of them have LoRA finetuning implementations), which are less powerful, but we could expose the GPT-4 API to it and train it to use the API as a tool, so the whole system would not be less powerful.
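The memory tool described here could be as small as a dict exposed through two functions the agent can invoke like any other command; a purely illustrative sketch (Auto-GPT's actual command plumbing differs):

```python
# Sketch: a key-value store exposed as a tool the model can call.
# Names and return strings are illustrative.
long_term_memory: dict[str, str] = {}

def memory_remember(key: str, value: str) -> str:
    long_term_memory[key] = value
    return f"Stored '{key}'."

def memory_recall(key: str) -> str:
    return long_term_memory.get(key, f"No memory stored under '{key}'.")
```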

@alreadydone

alreadydone commented Apr 4, 2023

Actually, maybe we can make GPT models aware of the memory tool using the system message, without the need for finetuning, since it's just a single simple tool. Something like

> You are a language model with limited memory (or context length), so you'll forget what was said 8,000 tokens (3,000 words?) earlier. However, you now have access to a key-value database that serves as your long-term memory. If you are about to forget something important, you may say <remember "k" "v"> to store it in the database, which you can later recall by saying <recall "k">.

I'm not experienced in prompt engineering, so there's definitely room for improvement. Notice that

> In general, gpt-3.5-turbo-0301 does not pay strong attention to the system message, and therefore important instructions are often better placed in a user message.

so this should work better with GPT-4 than 3.5. If you have access, please try!
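If the model does emit tags in that format, the harness has to intercept them; a regex-based sketch of that step (the tag grammar follows the system message above, everything else is an assumption):

```python
# Sketch: intercept <remember "k" "v"> and <recall "k"> tags in model
# output, per the system message proposed above. Illustrative only.
import re

store: dict[str, str] = {}

def handle_memory_tags(output: str) -> str:
    # store every <remember "k" "v"> pair found in the output
    for key, value in re.findall(r'<remember "([^"]+)" "([^"]+)">', output):
        store[key] = value
    # replace each <recall "k"> tag with the stored value
    return re.sub(r'<recall "([^"]+)">',
                  lambda m: store.get(m.group(1), "(nothing stored)"),
                  output)
```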

@dschonholtz
Contributor

This works. It's hard to test this kind of thing concretely, but anecdotally it seems much smarter now.
I'm implementing a thing to actually track memory usage, i.e. the number of memory keys taken up or the number of vectors in the DB, to output between thoughts.
Then I'm gonna do another pass with the debugger, and assuming it appears to be doing what I think it's doing, I'll put it up for review.
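What that between-thoughts readout might look like: for a local dict it's just len(); for Pinecone, the client's index.describe_index_stats() reports total vector count. A sketch, not the code from the eventual PR:

```python
# Sketch of a between-thoughts memory readout. For Pinecone, the
# client exposes index.describe_index_stats(); the attribute access
# below assumes the stats object exposes total_vector_count.
def memory_status(local_store: dict | None = None, pinecone_index=None) -> str:
    if pinecone_index is not None:
        stats = pinecone_index.describe_index_stats()
        return f"memory: {stats.total_vector_count} vectors in Pinecone"
    return f"memory: {len(local_store or {})} local keys"
```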

@dschonholtz
Contributor

See pull: #122

@Pwuts
Member

Pwuts commented Apr 18, 2023

Is this resolved with the output of --debug?

@Boostrix
Contributor

Boostrix commented May 4, 2023

> I'm implementing a thing to actually track memory usage, number of memory keys taken up or number of vectors in DB to output between thoughts.

> Auto-GPT should be aware of its short- and long-term memory usage so that it knows when something is going to be deleted from its memory due to context limits.

This would ideally be a part of a "quota"-like system so that sub-agents could be managed by agents higher up in the chain whenever there is a quota/constraint violation (soft/hard), as per #3466

github-actions bot added the Stale label on Sep 6, 2023
@github-actions

This issue was closed automatically because it has been stale for 10 days with no activity.

github-actions bot closed this as not planned on Sep 17, 2023