Fix memory update issue in GPTIndexChatMemory #14
Issue Description
The issue reported in #12 is caused by the conversation history not being updated correctly in the `GPTIndexChatMemory` class. As a result, the agent cannot recall previous information, such as the user's name.
Solution
I have made the following changes to address the issue:

In `llama_index/langchain_helpers/memory_wrapper.py`:
- Updated the `load_memory_variables` method to correctly load the conversation history.
- Updated the `save_context` method to correctly save the conversation history.

In `examples/langchain_demo/LangchainDemo.ipynb`:
- Updated the usage of `LlamaIndex` and made sure it's done correctly.

In `llama_index/agent/openai_agent.py`:
- Updated the `chat` method and made sure the agent correctly interacts with the memory module.
Testing
I have tested the changes by running the provided code snippet in the issue description. After applying the changes, the agent was able to correctly recall the user's name in the conversation.
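To make the load/save contract concrete, here is a minimal standalone sketch of a LangChain-style chat memory. This is illustrative only: the class name and internals are hypothetical stand-ins, not the actual `GPTIndexChatMemory` code from this PR.

```python
# Illustrative toy memory, NOT the real GPTIndexChatMemory implementation.
# It mirrors LangChain's memory interface: save_context stores a turn,
# load_memory_variables returns the history under a configured key.
class ToyChatMemory:
    def __init__(self, memory_key: str = "history"):
        self.memory_key = memory_key
        self.turns: list[str] = []  # stands in for the LlamaIndex-backed store

    def save_context(self, inputs: dict, outputs: dict) -> None:
        # Persist both sides of the exchange so later turns can recall them.
        self.turns.append(f"Human: {next(iter(inputs.values()))}")
        self.turns.append(f"AI: {next(iter(outputs.values()))}")

    def load_memory_variables(self, inputs: dict) -> dict:
        # Return the accumulated history under the configured key.
        return {self.memory_key: "\n".join(self.turns)}


memory = ToyChatMemory()
memory.save_context(
    {"input": "Hi, my name is Alice."},
    {"output": "Nice to meet you, Alice!"},
)
print(memory.load_memory_variables({})[memory.memory_key])
```

The bug described in #12 corresponds to `save_context` or `load_memory_variables` not round-tripping the history: after the fix, a later `load_memory_variables` call returns earlier turns, so the agent can answer "what is my name?" correctly.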
Additional Notes
I have also added debug print statements in the relevant code sections to help with troubleshooting in case any issues arise in the future.
Please review and merge this PR. Thank you!
Fixes #12.
To check out this PR branch, run the following command in your terminal:
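The original command was not captured here. Assuming the GitHub CLI (`gh`) is installed and the repository is cloned locally, one way to fetch the branch for this PR (#14) is:

```shell
# Check out the head branch of pull request #14 (requires the GitHub CLI).
gh pr checkout 14
```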