This repository has been archived by the owner on Oct 22, 2023. It is now read-only.
Hi, great work! But what's the difference with langchain?

Hi @FieldRen, the current memory implementations in LangChain are heuristic-based: they either use the entire conversation history or only the last k messages. Such approaches are not adaptive. If the user returns to a topic from earlier in the conversation, a heuristic approach may miss it, either because the full history overflows the token limit or because the topic falls outside the last k messages. In our case, we make the history component adaptive: from the entire history, it retrieves only the previous k messages that are relevant to the current message. This adds more relevant context to the prompt while never running out of token length. Below is an example:

If you want to delve deeper into the current memory implementations of LangChain, here is a great article: https://python.langchain.com/en/latest/modules/memory/getting_started.html
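The adaptive retrieval described in the answer can be sketched in a few lines. This is a hypothetical illustration, not the repository's actual implementation: a real system would score relevance with vector embeddings and a vector store, whereas this sketch approximates relevance with simple word-overlap (Jaccard) similarity. The function `relevant_history` and the sample messages are invented for the example.

```python
import re

def words(s: str) -> set[str]:
    """Lowercase, punctuation-free word set for a message."""
    return set(re.findall(r"[a-z0-9']+", s.lower()))

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two messages (0.0 to 1.0).
    A stand-in for embedding cosine similarity in a real system."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def relevant_history(history: list[str], current: str, k: int = 2) -> list[str]:
    """Return the k past messages most relevant to the current message,
    preserving their original chronological order, instead of naively
    taking the whole history or the last k messages."""
    by_score = sorted(range(len(history)),
                      key=lambda i: jaccard(history[i], current),
                      reverse=True)
    keep = sorted(by_score[:k])  # restore chronological order
    return [history[i] for i in keep]

# The user changes topic (pasta) and later returns to the flight:
# a last-k window would send the pasta messages, while relevance-based
# retrieval recovers the two flight-related messages.
history = [
    "My flight to Tokyo leaves on Friday.",
    "Can you recommend a good pasta recipe?",
    "I prefer a window seat on the flight.",
    "Which wine pairs well with pasta?",
]
print(relevant_history(history, "What time is my flight to Tokyo?", k=2))
# → ['My flight to Tokyo leaves on Friday.', 'I prefer a window seat on the flight.']
```

Because only the k most relevant messages are injected into the prompt, the context stays bounded no matter how long the conversation grows.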