fix(core): Fix and add missing cache attributes in Vercel AI #17982
Conversation
```ts
/**
 * The number of cached input tokens that were used
 * @see https://ai-sdk.dev/docs/ai-sdk-core/telemetry#basic-llm-span-information
 */
export const AI_USAGE_CACHED_INPUT_TOKENS_ATTRIBUTE = 'ai.usage.cachedInputTokens';
```
l: I couldn't find `ai.usage.cachedInputTokens` in the linked docs. Not sure if that is intended.
With Relay now handling cache token attributes (instead of scrubbing them), some Anthropic-related token attributes were still missing. This PR adds the missing cache attributes and corrects the types in the Anthropic provider metadata used for extracting token data.
Fixes: #17890
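For context, here is a minimal sketch of what extracting cache token counts from Anthropic provider metadata might look like. The metadata shape and the `extractCacheAttributes` helper are illustrative assumptions, not the actual SDK code; only the attribute constant corresponds to the one added in this PR.

```ts
// Assumed shape of the Vercel AI SDK's Anthropic provider metadata;
// these field names are assumptions for illustration, not confirmed SDK types.
interface AnthropicProviderMetadata {
  anthropic?: {
    cacheCreationInputTokens?: number;
    cacheReadInputTokens?: number;
  };
}

// Attribute key added in this PR.
const AI_USAGE_CACHED_INPUT_TOKENS_ATTRIBUTE = 'ai.usage.cachedInputTokens';

// Hypothetical helper: copy the cached-input token count onto span
// attributes, skipping fields the provider did not report.
function extractCacheAttributes(
  metadata: AnthropicProviderMetadata,
): Record<string, number> {
  const attributes: Record<string, number> = {};
  const cached = metadata.anthropic?.cacheReadInputTokens;
  if (typeof cached === 'number') {
    attributes[AI_USAGE_CACHED_INPUT_TOKENS_ATTRIBUTE] = cached;
  }
  return attributes;
}
```

Guarding on `typeof cached === 'number'` keeps absent or malformed provider fields from producing `undefined` span attributes, which mirrors the kind of type correction this PR describes.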