Question: database size #68
Comments
MemGPT has access to data you give it in databases. When it wants to read the contents of this data, it has to page it in via function calls. These function calls are paginated (in the functions we provide in this repo), so data gets pulled into LLM context in chunks.
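As a rough illustration of that paging idea (not the repo's actual function signatures), a paginated read might look like the sketch below, assuming a SQLite database and a hypothetical `PAGE_SIZE`:

```python
import sqlite3

# Hypothetical page size; the repo's real chunking parameters may differ.
PAGE_SIZE = 100

def page_rows(db_path: str, table: str, page: int, page_size: int = PAGE_SIZE):
    """Return one page of rows from `table`, so only a small chunk of the
    database enters the LLM context per function call."""
    conn = sqlite3.connect(db_path)
    try:
        # Table name is assumed trusted here; a real implementation
        # should validate it against the schema before interpolating.
        cur = conn.execute(
            f"SELECT * FROM {table} LIMIT ? OFFSET ?",
            (page_size, page * page_size),
        )
        return cur.fetchall()
    finally:
        conn.close()
```

Because each call returns at most `page_size` rows, the LLM context only ever holds one chunk at a time, regardless of the total database size.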
I found this answer and the documentation a bit unclear. Say my DB dump file is 1 TB: is that too large / ineffective at that size? 200 GB? 500 GB? Does it depend on how much system RAM is available? GPU VRAM?
Nice question @tytung2020. Giving my two cents on this topic: the function that loads the database gets all the data from all the tables it finds and puts it into memory, so you'll likely need roughly as much RAM as the data you want to load, plus some overhead that I can't estimate. See Line 275 in 849782d and Lines 280 to 282 in 849782d.
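To make the memory implication concrete, here is a sketch of the load-everything pattern described above. This is a hypothetical example using `sqlite3` and `pandas`, not the exact code at the referenced lines:

```python
import sqlite3
import pandas as pd

def load_all_tables(db_path: str) -> dict[str, pd.DataFrame]:
    """Load every table in the database fully into memory.

    RAM usage therefore scales with the total size of the data,
    plus pandas' per-DataFrame overhead."""
    conn = sqlite3.connect(db_path)
    try:
        # Discover all table names in the database.
        tables = [
            row[0]
            for row in conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"
            )
        ]
        # Materialize each table as a full in-memory DataFrame.
        return {t: pd.read_sql_query(f"SELECT * FROM {t}", conn) for t in tables}
    finally:
        conn.close()
```

In this pattern every table is materialized up front, so memory use tracks the total data size, which is why a multi-hundred-GB dump would be impractical to load whole.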
Just a question: does the long-context technique here include the size of the outside database that it connects to (via the method shown in the example)?
Not an issue, just a question.