feat: Add Zep Cloud Memory component #9657
Conversation
Hey @paul-paliychuk, thanks for the PR. While we are not generally accepting new node PRs from the community, this looks like it doesn't need to be a new node. Would you be able to update the existing credential to have an option for "Cloud" that defaults to open source (so we don't have a breaking change), then update the existing node to use either the URL from the credential or the cloud URL? This would remove the need for a new node and make the change a lot smaller. It also looks like the new node is trying to get the apiUrl from the new credential, which doesn't exist as a field, and there is no authenticate or test function in the credential... but this wouldn't be an issue if the existing node and credential were modified. Let me know if you have any questions.
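The suggested non-breaking change can be sketched roughly as follows. This is a hypothetical illustration only: the field names (`cloud`, `apiUrl`, `apiKey`), the cloud endpoint, and the fallback URL are assumptions, not the actual n8n or Zep implementation.

```typescript
// Hypothetical sketch: extend the existing Zep credential with a "Cloud"
// toggle that defaults to false (self-hosted) so existing credentials keep
// working, then resolve the base URL from the credential in the node.

interface ZepCredential {
  apiKey: string;
  cloud: boolean; // false => self-hosted (the non-breaking default)
  apiUrl?: string; // only meaningful for self-hosted installs
}

const ZEP_CLOUD_URL = 'https://api.getzep.com'; // assumed cloud endpoint

function resolveBaseUrl(cred: ZepCredential): string {
  // Cloud mode ignores any user-supplied URL; self-hosted falls back
  // to a local default when no URL is configured.
  if (cred.cloud) return ZEP_CLOUD_URL;
  return cred.apiUrl ?? 'http://localhost:8000';
}
```

With this shape, the existing node needs no fork: it reads the credential and picks the right base URL in one place.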
@Joffcom Hey, makes sense, thanks. I will update the PR. We don't require an API URL for the cloud SDK, so I will make it optional. Noted about node contributions; we've had quite a lot of community requests for n8n support recently :)
@Joffcom Removed the previously added Zep Cloud memory node and Zep Cloud API credential, and added cloud support to the original node. I made apiUrl optional as well, since it's built into the SDK. However, I'm facing some issues with credential testing. Is there a nice way to bypass those tests when cloud is used (we have different API routes on cloud)?
Hey @paul-paliychuk, if you were to have a toggle or dropdown in the node to select between self-hosted and cloud, you could use that in the credential test to set the test URL to the cloud endpoint and hide the URL field.
@Joffcom Makes sense, just updated the credential. I also realized that the cloud control inside the memory node is now somewhat redundant, as we can get that info from the credential object. Let me know if that's ok. CleanShot.2024-06-07.at.11.30.01.mp4
Hey @paul-paliychuk, that looks pretty good, although I would probably put the toggle under the API key, so the order would be API Key, Cloud, URL. The only bit I am not sure about is the extra packages, but I will pass this to @OlegIvaniv, our main man for the AI nodes 😄
@Joffcom That makes sense to me; I've rearranged the order of the credential fields. Thanks!
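The agreed field order (API Key, Cloud, URL) and the hide-the-URL-field-when-cloud behavior could be sketched with n8n-style declarative properties roughly like this. The names, defaults, and the exact `displayOptions` shape are illustrative assumptions, not the merged code.

```typescript
// Illustrative sketch of the credential fields in the agreed order,
// with the URL field hidden when Cloud is enabled.

interface CredentialField {
  displayName: string;
  name: string;
  type: string;
  default: string | boolean;
  displayOptions?: { show: Record<string, boolean[]> };
}

const zepCredentialFields: CredentialField[] = [
  { displayName: 'API Key', name: 'apiKey', type: 'string', default: '' },
  { displayName: 'Cloud', name: 'cloud', type: 'boolean', default: false },
  {
    displayName: 'API URL',
    name: 'apiUrl',
    type: 'string',
    default: 'http://localhost:8000',
    // Only show the URL field for self-hosted installs (cloud === false).
    displayOptions: { show: { cloud: [false] } },
  },
];
```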
@Joffcom @OlegIvaniv Hey guys, any update on this PR? Thanks!
@paul-paliychuk Appreciate the effort! I've run into some issues when testing the cloud version, though. It seems the messages are merged together, so it would always return just a single HumanMessage. This is problematic because when we pre-load messages we need to populate both AI and human messages.
@OlegIvaniv Hi, thanks for reviewing this. The message merging is intentional. The reason the memory (facts, summary) along with the messages comes through as a HumanMessage is that we've faced limitations with some model providers that are unable to parse more than one system message and are restrictive about message order too. In particular, we've had issues with newer Anthropic models and with Gemini. We also wanted to make the implementation flexible so that users could provide their own system message if needed. Could you elaborate on how this breaks the preloading of messages?
I see; it would be nice for this behavior to be configurable. Currently it breaks pre-loading for both the debug and public chat, and also the "Get Many" and "Delete Messages" operations in the Chat Memory Manager node, which expect Langchain's standard single-message format. Returning everything as one message results in a message like this when we load messages from history. Here's a simple workflow to demonstrate:
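To make the incompatibility concrete: downstream consumers expect one message object per turn, while the merged output collapses the whole history into a single HumanMessage. A toy parser like the one below shows what splitting back into standard role-tagged messages would mean; the `human:`/`ai:` line prefixes are an assumption for illustration, not Zep's actual transcript format.

```typescript
// Toy illustration only: split a merged transcript string back into
// role-tagged messages, the shape Langchain consumers such as the
// Chat Memory Manager node expect (one object per turn).

type ChatMessage = { role: 'human' | 'ai'; content: string };

function splitMergedTranscript(merged: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  for (const line of merged.split('\n')) {
    // Assumed prefix convention: "human: ..." or "ai: ..." per line.
    const match = /^(human|ai):\s*(.*)$/i.exec(line.trim());
    if (match) {
      messages.push({
        role: match[1].toLowerCase() as 'human' | 'ai',
        content: match[2],
      });
    }
  }
  return messages;
}
```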
@OlegIvaniv Got it, makes sense. I will add a configurable option to our
@OlegIvaniv Just updated the langchain community version to include a recently added change. I also had to bump the langchain-core version to 0.2.9, otherwise the build was failing.
There's still something funky going on when pre-loading the messages:
Video for better demonstration: CleanShot.2024-06-28.at.12.04.23.mp4
@OlegIvaniv Hi, thanks for reviewing the PR one more time. It turned out that both issues (preloading of messages and duplication) were fixed by removing the assignment of… While testing this I also found another issue with the Anthropic model: when a new session is started, it would output an internal error because an… Also confirmed that it works well for me on the flow with preloaded messages that you attached previously (with the Wikipedia agent tool), with both Anthropic and OpenAI. CleanShot.2024-06-28.at.16.25.04.mp4 CleanShot.2024-06-28.at.16.26.28.mp4
Also, answering your earlier question: we are aware of the Anthropic limitation (we've encountered the same one with Gemini too), which is why we opted to merge messages. I was able to confirm that it doesn't break the pre-loading of messages.
@paul-paliychuk Thank you so much for fixing these two issues! Seems to be working as expected now :-)
Got released with |