Conversation
Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit 53ff629.
Local parseInput duplicates shared parseToolCallArgs utility
Low Severity
The private parseInput function in llamaindex.ts duplicates the logic of the exported parseToolCallArgs in utils.ts. Both parse a JSON string with a { __raw: ... } fallback, but they differ subtly for empty-string inputs: parseToolCallArgs coalesces "" to "{}" yielding {}, while parseInput would let JSON.parse("") throw and return { __raw: "" }. This hidden inconsistency means empty tool-call arguments hash differently depending on which adapter produced them. The LlamaIndex adapter could delegate to parseToolCallArgs for string inputs and only add the pass-through for non-string inputs.
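The divergence is easiest to see side by side. A minimal sketch, with both function bodies reconstructed from the comment's description (the actual code in utils.ts and llamaindex.ts may differ):

```typescript
// Shared utility (utils.ts, per the comment): coalesces "" to "{}" before parsing.
function parseToolCallArgs(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw || "{}"); // "" becomes "{}", so empty args parse to {}
  } catch {
    return { __raw: raw }; // malformed JSON falls back to a raw wrapper
  }
}

// Local duplicate (llamaindex.ts, per the comment): no coalescing, so
// JSON.parse("") throws and empty args become { __raw: "" } instead of {}.
function parseInput(input: unknown): unknown {
  if (typeof input !== "string") return input; // pass-through for non-string inputs
  try {
    return JSON.parse(input);
  } catch {
    return { __raw: input };
  }
}
```

Because `{}` and `{ __raw: "" }` hash differently, the same empty tool call produces different cache keys depending on the adapter. The suggested fix keeps only the non-string pass-through in `parseInput` and delegates string parsing to `parseToolCallArgs`.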
Additional Locations (1)
storeMultipart missing cache.model span attribute
Low Severity
The storeMultipart method doesn't set span.setAttribute('cache.model', params.model) like the existing store method does (which sets it at line 169). This omission causes the OpenTelemetry span for multipart stores to lack the model attribute, making traces harder to filter and analyze. The fallback value for the TTL attribute also differs: store uses ttl ?? -1 while storeMultipart uses ttl ?? 0, creating an inconsistent sentinel for "no TTL."
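One way to prevent both drifts is to route the shared attributes through a single helper that both `store` and `storeMultipart` call. A runnable sketch, where the `Span` shape mirrors the OpenTelemetry `setAttribute` surface and `StoreParams`, `setCommonStoreAttributes`, and the `-1` sentinel are assumptions based on the comment, not the package's actual code:

```typescript
// Minimal Span interface matching the setAttribute call used in the review comment.
interface Span {
  setAttribute(key: string, value: string | number): void;
}

// Hypothetical params shape shared by store() and storeMultipart().
interface StoreParams {
  model: string;
  ttl?: number;
}

// Single sentinel for "no TTL": store() already uses -1, so storeMultipart
// should use it too rather than 0 (which reads as a zero-second TTL).
const NO_TTL = -1;

// Setting both attributes in one place keeps the two store paths consistent.
function setCommonStoreAttributes(span: Span, params: StoreParams): void {
  span.setAttribute("cache.model", params.model);
  span.setAttribute("cache.ttl", params.ttl ?? NO_TTL);
}
```

With this helper, `storeMultipart` cannot drop `cache.model` or diverge on the TTL sentinel without `store` diverging the same way.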


Summary
Adds multi-modal support to @betterdb/agent-cache via three new SDK adapters (OpenAI Chat, Anthropic, LlamaIndex), a new OpenAI Responses adapter, a pluggable binary normalizer for content-addressed image/audio/document hashing,
and a storeMultipart() method on the LLM cache tier. All four adapters normalize to a shared intermediate representation, so the same cached response can be served regardless of which SDK the caller uses.
Changes
- … a stable cache key ref.
- … ReasoningBlock.
- … ToolResultBlock, ReasoningBlock, BlockHints). Backward compatible: text-only callers produce identical hashes to v0.2.0.
Checklist
roborev review --branch or /roborev-review-branch in Claude Code (internal)
Note
Medium Risk
Moderate risk because it expands the public API and changes the LLM cache entry shape/hashing logic (new optional fields and contentBlocks), which could affect cache hit rates and interoperability across SDKs.
Overview
Adds multi-modal LLM caching to @betterdb/agent-cache by introducing a shared content-block IR (text/binary/tool_call/reasoning), a pluggable binary normalizer (composeNormalizer, hashBase64/hashUrl, etc.), and a new llm.storeMultipart() path that stores both flattened text and structured contentBlocks (with check() now returning them on hits).
Introduces new provider adapters and exports for OpenAI Chat, OpenAI Responses, Anthropic, and LlamaIndex (prepareParams), plus cross-provider fixtures/tests to ensure these adapters normalize to the same params+hash. Updates package exports/peers, bumps the version to 0.3.0, adds runnable examples for the new adapters, and extends the release workflow's build verification to include the new adapter artifacts.
Reviewed by Cursor Bugbot for commit 1989dc4.
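The core idea behind the binary normalizer is content-addressed hashing: a block's cache ref depends only on the decoded bytes, so the same image reaches the same cache entry from any adapter. A sketch of that idea, assuming the block shape and a `hashBase64` helper modeled on the exports named above (the package's real signatures may differ):

```typescript
import { createHash } from "node:crypto";

// Hypothetical binary block in the shared IR: the media payload is replaced
// by a stable, content-derived ref so the cache key never embeds raw bytes.
type BinaryBlock = { type: "binary"; mediaType: string; ref: string };

// Decode the base64 payload and hash the underlying bytes, so two SDKs that
// encode the same image differently still produce the same ref.
function hashBase64(data: string, mediaType: string): BinaryBlock {
  const digest = createHash("sha256")
    .update(Buffer.from(data, "base64"))
    .digest("hex");
  return { type: "binary", mediaType, ref: `sha256:${digest}` };
}
```

Hashing the decoded bytes (rather than the base64 string) is what makes the refs adapter-independent: OpenAI, Anthropic, and LlamaIndex can all wrap the same image bytes in different envelopes and still hit the same cached response.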