Leverage Anthropic's cache with generateObject #3921
Replies: 1 comment
The issue stems from how prompt caching works. According to Anthropic's documentation: "Prompt caching references the entire prompt - tools, system, and messages (in that order) up to and including the block designated with cache_control." Because generateObject injects the JSON schema into the system prompt, any schema change invalidates the cache under the caching order above. I created a workaround.
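The effect can be sketched without the SDK. This is a minimal model, not SDK internals: the buildPrompt helper and the hash-based cache key below are illustrative assumptions standing in for how the tools → system → messages prefix is matched, and they show why a schema change misses the cache:

```typescript
import { createHash } from 'node:crypto';

// The three request sections Anthropic hashes for caching, in cache
// order: tools, then system, then messages.
interface PromptParts {
  tools: unknown[];
  system: string;
  messages: Array<{ role: string; content: string }>;
}

// Hypothetical stand-in for how generateObject assembles the request:
// the JSON schema ends up inside the system prompt.
function buildPrompt(context: string, schema: object): PromptParts {
  return {
    tools: [],
    system: `Respond with JSON matching this schema: ${JSON.stringify(schema)}`,
    messages: [{ role: 'user', content: context }],
  };
}

// The cacheable prefix runs up to and including the block marked with
// cache_control; with the breakpoint on the context message, that prefix
// covers tools, system, and the context message.
function cachePrefixKey(p: PromptParts): string {
  const prefix = JSON.stringify([p.tools, p.system, p.messages]);
  return createHash('sha256').update(prefix).digest('hex');
}

const context = '...huge shared context...';
const keyField1 = cachePrefixKey(buildPrompt(context, { field1: { type: 'string' } }));
const keyField2 = cachePrefixKey(buildPrompt(context, { field2: { type: 'string' } }));
const keyField1Again = cachePrefixKey(buildPrompt(context, { field1: { type: 'string' } }));

console.log(keyField1 === keyField2);      // false: a schema change shifts the whole prefix
console.log(keyField1 === keyField1Again); // true: identical schema, cache hit
```

Because the schema sits in the system section, it lands before the cache breakpoint, so even an unchanged context message cannot rescue the prefix match.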
Hey! I noticed that when using the generateObject function, it seems impossible to leverage Anthropic's cache when the schema changes, e.g. in the following example:
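Something like this sketch (the model id, the schemas, and the cacheControl placement are illustrative assumptions; recent AI SDK versions set cache control via providerOptions, older ones via experimental_providerMetadata):

```typescript
import { generateObject } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';
import { z } from 'zod';

const context = '...huge shared context...'; // the large prompt we want cached

// Shared message list: the big context block carries the cache breakpoint.
const messages = [
  {
    role: 'user' as const,
    content: [
      {
        type: 'text' as const,
        text: context,
        // Illustrative: marks the end of the cacheable prefix.
        providerOptions: { anthropic: { cacheControl: { type: 'ephemeral' } } },
      },
    ],
  },
];

// First call: schema with field1 — writes the cache.
await generateObject({
  model: anthropic('claude-3-5-sonnet-latest'),
  schema: z.object({ field1: z.string() }),
  messages,
});

// Second call: only the schema changed (field2 instead of field1),
// yet this writes a new cache instead of reading the first one.
await generateObject({
  model: anthropic('claude-3-5-sonnet-latest'),
  schema: z.object({ field2: z.string() }),
  messages,
});
```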
The second request writes another cache instead of reusing the one written by the first, even though the only thing that changed is the JSON schema (it works correctly if I ask again for field1).

My use case: the context is huge, and I need different LLM calls to address different needs (some calls, for example, need previous results to work), so I'd love to cache it once and append follow-up messages with the different instructions.

I understand Anthropic prefixes its caches with tools, system, and messages (in that order), but I'm wondering whether there is something I can do to properly cache only the prompt and reuse it across different JSON outputs (or even between generateObject and generateText requests). Thanks for any kind of help! 🙏