
core[patch]: Bugfix: cache missed because callback _parentRunId passed in to cache key function (_getSerializedCacheKeyParametersForCall) #5205

Merged: 3 commits merged into langchain-ai:main on Apr 26, 2024

Conversation

davidfant (Contributor)

This PR is a patch I've applied locally to fix the problem described below. Someone who understands this code should probably change this fix to something more appropriate. Also, the TS types don't reflect a `config` key being present in `callOptions`, so I don't know where it comes from.

Problem

Here's an example output from `_getSerializedCacheKeyParametersForCall`, which is used to generate `BaseLanguageModel` cache keys:

_model:"base_chat_model",_type:"openai",apiKey:"...",config:{"tags":[],"metadata":{},"callbacks":{"handlers":[{"lc":1,"type":"not_implemented","id":["langchain_core","callbacks","langchain_tracer","LangChainTracer"]}],"inheritableHandlers":[{"lc":1,"type":"not_implemented","id":["langchain_core","callbacks","langchain_tracer","LangChainTracer"]}],"tags":[],"inheritableTags":[],"metadata":{},"inheritableMetadata":{},"name":"callback_manager","_parentRunId":"1f2f3a62-f04c-4e82-9bd0-11c8747c734b"},"recursionLimit":99,"outputKeys":"__end__","configurable":{}},dangerouslyAllowBrowser:true,frequency_penalty:0,max_tokens:1000,model:"gpt-4-1106-preview",model_name:"gpt-4-1106-preview",n:1,outputKeys:"__end__",presence_penalty:0,stream:false,temperature:0,tool_choice:{"type":"function","function":{"name":"respond"}},tools:[{"type":"function","function":{"name":"respond","parameters":{"type":"object","properties":{"reasoning":{"type":"string"},"response":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","const":"sendMessage"},"message":{"type":"string"}},"required":["type","message"],"additionalProperties":false},{"type":"object","properties":{"type":{"type":"string","const":"submitWorkflowGoal"},"name":{"type":"string"},"goal":{"type":"string"}},"required":["type","name","goal"],"additionalProperties":false}]}},"required":["reasoning","response"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}}}],top_p:1

What's problematic about this is `"_parentRunId":"1f2f3a62-f04c-4e82-9bd0-11c8747c734b"`, which makes the cache key different for every run, so the cache always misses. My fix makes sure that `config` isn't passed in when generating this key.
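
A minimal TypeScript sketch of the idea behind the fix. The function name mirrors the method on `BaseLanguageModel`, but the body is illustrative only and does not match the actual `@langchain/core` internals:

```typescript
// Illustrative sketch, not the actual @langchain/core implementation.
type CallOptions = Record<string, unknown>;

function getSerializedCacheKeyParametersForCall(
  invocationParams: Record<string, unknown>,
  callOptions: CallOptions
): string {
  // Drop runtime-only fields such as the runnable `config` (which carries
  // callbacks and `_parentRunId`) so identical calls produce identical keys.
  const { config, callbacks, ...cacheableOptions } = callOptions;
  const params: Record<string, unknown> = {
    ...invocationParams,
    ...cacheableOptions,
  };
  return Object.keys(params)
    .sort()
    .map((key) => `${key}:${JSON.stringify(params[key])}`)
    .join(",");
}
```

The key point is simply that per-run callback metadata is excluded before serialization, so only the model parameters that actually affect the output contribute to the cache key.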

@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Apr 24, 2024

@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Apr 24, 2024
@davidfant davidfant changed the title bugfix: cache missed because callback info (incl _parentRunId) was in… Bugfix: cache missed because callback _parentRunId passed in to cache key function (_getSerializedCacheKeyParametersForCall) Apr 24, 2024
@jacoblee93 jacoblee93 changed the title Bugfix: cache missed because callback _parentRunId passed in to cache key function (_getSerializedCacheKeyParametersForCall) core[patch]: Bugfix: cache missed because callback _parentRunId passed in to cache key function (_getSerializedCacheKeyParametersForCall) Apr 26, 2024
jacoblee93 (Collaborator)

Hmm interesting! Will investigate, thank you!

jacoblee93 (Collaborator) commented Apr 26, 2024

Oh, I think you might be using a RunnableLambda in LangGraph?

jacoblee93 (Collaborator)

This has to do with a backwards compatibility shim around RunnableLambda typing. Fixing it properly would be a breaking change, so this is fine for now.

@jacoblee93 jacoblee93 added the lgtm PRs that are ready to be merged as-is label Apr 26, 2024
@jacoblee93 jacoblee93 merged commit dd7f528 into langchain-ai:main Apr 26, 2024
23 checks passed
jacoblee93 (Collaborator)

Thank you!
