Add a conversation memory that combines an (optionally persistent) vectorstore history with a token buffer #22155
Conversation
libs/langchain/langchain/memory/token_buffer_vectorstore_memory.py
BTW, I'm using this routinely with persistence in my personal chatbots and am constantly amazed at how frequently the chatbot recalls something relevant from a previous conversation. For small (7B) local models, this works much better than the conversation summary chain. I think a maintainer needs to approve the required workflows?
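The cross-session recall described in the comment above comes from persisting the evicted history between runs. A minimal, self-contained sketch of that idea, using plain JSON on disk and word-overlap scoring as hypothetical stand-ins for a real persistent vectorstore and embedding search:

```python
import json
import os
import tempfile

# --- Session 1: turns evicted from the token buffer are persisted to disk.
history = [("Human", "My cat is named Miso"), ("AI", "Cute!")]
path = os.path.join(tempfile.mkdtemp(), "memory.json")
with open(path, "w") as f:
    json.dump(history, f)

# --- Session 2 (a fresh process in reality): reload the history and
# retrieve the turn most relevant to the new prompt.
with open(path) as f:
    recalled = json.load(f)  # JSON round-trip yields lists, not tuples

query_words = set("what is my cat called".split())
best = max(recalled, key=lambda turn: len(query_words & set(turn[1].lower().split())))
print(best)  # the "My cat is named Miso" turn scores highest
```

A real vectorstore replaces the overlap score with embedding similarity, but the session-to-session flow is the same: evict, persist, reload, retrieve.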
Force-pushed from f1e65aa to ba23101.
@isahers1 All tests are passing and the documentation looks good. At what point does this get merged into the main branch, or is there more to be done?

Just merged, sorry for the delay!
…torstore history with a token buffer (#22155)

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
langchain: ConversationVectorStoreTokenBufferMemory
- Description: This PR adds ConversationVectorStoreTokenBufferMemory. It is similar in concept to ConversationSummaryBufferMemory: it maintains an in-memory buffer of messages up to a preset token limit. After the limit is hit, timestamped messages are written into a vectorstore retriever rather than into a summary. The user's prompt is then used to retrieve relevant fragments of the previous conversation. By persisting the vectorstore, one can maintain memory from session to session.
- Issue: n/a
- Dependencies: none
- Twitter handle: Please no!!!
- [X] Add tests and docs: I looked at how the unit tests were written for the other ConversationMemory modules, but couldn't find anything other than a test for successful import. I need to know whether you are using pytest.mock or another fixture to simulate the LLM and vectorstore. In addition, I would like guidance on where to place the documentation: should it be a notebook file in docs/docs?
- [X] Lint and test: I am seeing some linting errors from a couple of modules unrelated to this PR.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
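The buffer-then-evict mechanism the description outlines can be sketched in a few lines. This is an illustrative toy model, not the PR's actual implementation: word-overlap scoring stands in for embedding similarity, and a whitespace split stands in for the LLM tokenizer; the class and method names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class VectorStoreTokenBufferSketch:
    """Toy model: a token-bounded buffer of recent turns whose overflow
    is evicted into a searchable store instead of a summary."""
    max_token_limit: int = 50
    buffer: list = field(default_factory=list)  # recent (role, text) turns
    store: list = field(default_factory=list)   # evicted turns ("vectorstore")

    def _tokens(self, text: str) -> int:
        return len(text.split())  # crude stand-in for a real tokenizer

    def save_context(self, user: str, ai: str) -> None:
        self.buffer += [("Human", user), ("AI", ai)]
        # Evict the oldest turns into the store once over the token limit.
        while sum(self._tokens(t) for _, t in self.buffer) > self.max_token_limit:
            self.store.append(self.buffer.pop(0))

    def retrieve(self, prompt: str, k: int = 2) -> list:
        # Score evicted turns by word overlap with the new prompt.
        words = set(prompt.lower().split())
        scored = sorted(
            self.store,
            key=lambda turn: len(words & set(turn[1].lower().split())),
            reverse=True,
        )
        return scored[:k]


mem = VectorStoreTokenBufferSketch(max_token_limit=10)
mem.save_context("My dog is named Rex", "Nice name!")
mem.save_context("What is the weather", "Sunny today")
# The oldest turn no longer fits the buffer and has moved to the store,
# yet it is still recoverable from a related prompt:
print(mem.retrieve("what is my dog called", k=1))
```

Prompting retrieves the evicted "My dog is named Rex" turn even though it has left the recency buffer, which is the behavior that distinguishes this memory from a pure token buffer.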