Vector memory revamp (part 1: refactoring) #4208
Remaining todos, to be addressed in a follow-up PR:
Please open an issue for the prompting-related nits that were closed :)
* Add settings for a custom base URL and embedding dimension: make the OpenAI base URL and embedding dimension configurable. These are useful for integrating AutoGPT with other models, like LLaMA
* Update `milvus.py` to load the configuration in the `init_collection` function as well
* Update `redismem.py` to get rid of `Config()` loading
* Update `local.py` to get rid of `Config()` loading
* Correct code format (Python black)
* Revert the `DEFAULT_EMBED_DIM` name to `EMBED_DIM` to keep tests valid
* Better description for the `EMBED_DIM` setting
* Set `MockConfig` to the type `Config` in the Milvus test
* Fix formatting
* Update the Milvus test to use `Config()` instead of building a mock config
* Use the latest Milvus test code from main
* Remove `embed_dim`; no longer needed after #4208
* Add an example for `OPENAI_BASE_URL`

Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
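The changelog above makes the OpenAI base URL and embedding dimension configurable so AutoGPT can target OpenAI-compatible servers. A minimal sketch of how such settings might be read, assuming the `OPENAI_BASE_URL` and `EMBED_DIM` variable names from the changelog (the helper function itself is hypothetical, not AutoGPT's actual config code):

```python
import os

# Default dimension of OpenAI's text-embedding-ada-002 embeddings.
DEFAULT_EMBED_DIM = 1536


def load_embedding_settings() -> tuple[str, int]:
    """Illustrative sketch: read the API base URL and embedding dimension
    from the environment, falling back to OpenAI's defaults."""
    base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
    embed_dim = int(os.getenv("EMBED_DIM", DEFAULT_EMBED_DIM))
    return base_url, embed_dim
```

Pointing `OPENAI_BASE_URL` at a local inference server would then let the same embedding code talk to a non-OpenAI model, provided `EMBED_DIM` matches that model's output size.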
Work in progress on the memory system.
🔭 Primary todos
- Robust and reliable memorization routines for basic content (WIP)
- Good memory search/retrieval based on relevance (WIP)

For a given query (e.g. a prompt or question), we need to be able to find the most relevant memories. This must be implemented separately for each memory backend provider:
- Redis
- Milvus

(The other currently implemented providers are not in this list because they may be moved to plugins.)
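The relevance-based retrieval described above typically works by embedding the query and ranking stored memories by vector similarity. A minimal self-contained sketch using cosine similarity over plain Python lists (the function names and the `(text, embedding)` memory representation are illustrative, not the PR's actual interface):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors; 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def most_relevant(
    query_embedding: list[float],
    memories: list[tuple[str, list[float]]],
    k: int = 2,
) -> list[str]:
    """Return the texts of the k memories most similar to the query embedding."""
    ranked = sorted(
        memories,
        key=lambda m: cosine_similarity(query_embedding, m[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]
```

A backend like Redis (with RediSearch's vector fields) or Milvus performs this ranking server-side over an index instead of scanning in Python, which is why each provider needs its own implementation.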
🛠️ Secondary todos
- `autogpt.memory` module structure
- `MemoryItem` entity as a uniform interface
- Update Redis memory backend
- Update Milvus memory backend
- `autogpt.processing` (WIP)

🔧 Other changes
- `llm_utils.create_text_completions` (for `text-davinci-003` and classic text completions)
- `@metered` decorator in `llm_utils` for functions that make API calls to OpenAI
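A decorator like `@metered` can track call counts and latency for API-calling functions without touching their bodies. The sketch below is an illustrative guess at the idea, not the actual `llm_utils` implementation; `fake_api_call` is a hypothetical stand-in for a real OpenAI call:

```python
import functools
import time


def metered(func):
    """Sketch of a metering decorator: counts calls and accumulates wall-clock time."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            # Record metrics even if the wrapped call raises.
            wrapper.call_count += 1
            wrapper.total_seconds += time.monotonic() - start

    wrapper.call_count = 0
    wrapper.total_seconds = 0.0
    return wrapper


@metered
def fake_api_call(prompt: str) -> str:
    # Stand-in for a real OpenAI request.
    return f"response to {prompt!r}"
```

Because the metrics live as attributes on the wrapper, callers can inspect `fake_api_call.call_count` after a run without any global registry.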