
Add @betterdb/agent-cache package — multi-tier LLM/tool/session cache with framework adapters#105

Merged
KIvanow merged 42 commits into master from agent-cache on Apr 15, 2026
Conversation


@KIvanow KIvanow commented Apr 14, 2026

Summary

  • Introduces packages/agent-cache, a standalone multi-tier exact-match cache for AI agent workloads backed by Valkey. Three cache tiers behind one connection: LLM responses, tool results, and session state, with built-in OpenTelemetry tracing and Prometheus metrics.
  • Adds framework adapters for LangChain (BetterDBLlmCache), Vercel AI SDK (createAgentCacheMiddleware), and LangGraph (BetterDBSaver checkpoint saver). Works on vanilla Valkey 7+ with no modules — unlike langgraph-checkpoint-redis, does not require Redis 8+, RedisJSON, or RediSearch.
  • Includes minor fixes to the existing API: .gitignore for root data/ directory, truncated license key in usage telemetry, and 402 status handling in migration e2e tests.

Key features

  • LLM cache: exact-match on model/messages/temperature/tools with cost tracking via configurable costTable
  • Tool cache: per-tool TTL policies, toolEffectiveness() rankings with increase/decrease/optimal recommendations
  • Session store: sliding-window TTL, scanFieldsByPrefix() for targeted reads without refreshing unrelated TTLs
  • LangGraph saver: full checkpoint protocol including pendingWrites reconstruction, interrupt/resume, human-in-the-loop
  • AI SDK middleware: skips caching non-text responses (tool calls), marks hits with providerMetadata.agentCache.hit
  • Self-healing: corrupt cache entries are deleted on first detection instead of failing on every read until TTL expiry
  • Stats: per-tier hit/miss counts, per-tool hit rates, cost savings in microdollars
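The exact-match behavior in the LLM cache bullet can be sketched as a deterministic hash over the request shape. This is an illustrative sketch only; `llmCacheKey` and its field names are assumptions, not the package's actual key derivation:

```typescript
import { createHash } from "node:crypto";

// Illustrative exact-match LLM cache key: identical
// model/messages/temperature/tools always hash to the same key, so any
// change in the request shape is a cache miss.
interface LlmRequest {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
  tools?: { name: string }[];
}

function llmCacheKey(req: LlmRequest): string {
  // Serializing a fixed field order keeps the hash deterministic.
  const canonical = JSON.stringify({
    model: req.model,
    messages: req.messages,
    temperature: req.temperature ?? null,
    tools: (req.tools ?? []).map((t) => t.name).sort(),
  });
  return createHash("sha256").update(canonical).digest("hex");
}
```

Because the match is exact, even a temperature change of 0 → 1 produces a different key and therefore a miss.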

Robustness hardening (from multiple Roborev rounds)

  • Validate thread_id/checkpoint_id in putWrites()/put() before use
  • Sequential checkpoint-then-latest writes in put() to prevent dangling pointers on partial failure
  • resetTracker() uses gauge.set(0) instead of dec(count) to avoid negative drift
  • LLM miss stats use pipeline for consistency with tool cache pattern
  • SessionTracker LRU eviction exported and directly unit-tested
  • Glob metacharacters escaped in all SCAN patterns; colons rejected in tool names
  • specificationVersion: 'v3' coupling documented; ai peer dep tightened to ^6.0.135
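The SCAN-pattern escaping mentioned above can be sketched as follows; `escapeGlobPattern` and `keyPattern` here are hypothetical helpers, not necessarily the package's implementation:

```typescript
// Redis/Valkey SCAN MATCH treats *, ?, [, ], and \ as glob syntax.
// Escaping them ensures a cache named "my*cache" matches only itself
// instead of every key starting with "my".
function escapeGlobPattern(input: string): string {
  return input.replace(/[\\*?[\]]/g, (ch) => `\\${ch}`);
}

// Building a MATCH pattern for all LLM keys under an untrusted cache name:
function keyPattern(cacheName: string): string {
  return `${escapeGlobPattern(cacheName)}:llm:*`;
}
```

The same escaping is what makes rejecting glob metacharacters in tool names (which are user input) a defense-in-depth measure rather than the only line of defense.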

Test plan

  • Run cd packages/agent-cache && npx vitest run — all 108 unit tests pass
  • Run integration tests with a local Valkey instance (VALKEY_URL=redis://localhost:6380)
  • Verify apps/api e2e migration tests still pass with 402 status tolerance

Checklist

  • Unit / integration tests added
  • Docs added / updated
  • Roborev review passed — run roborev review --branch or /roborev-review-branch in Claude Code (internal)
  • Competitive analysis done / discussed (internal)
  • Blog post about it discussed (internal)

Note

Medium Risk
Medium risk because it introduces a new publishable package (runtime cache logic, telemetry/analytics hooks, and Valkey key scanning/deletion) plus a new automated npm/GitHub release workflow; failures could affect publishing or runtime caching behavior.

Overview
Introduces a new publishable package, @betterdb/agent-cache, providing a Valkey/Redis-backed multi-tier cache (LLM, tool, session) with built-in OpenTelemetry spans, Prometheus metrics, stats aggregation, and optional framework adapters.

Adds extensive package docs, changelog, and runnable adapter examples, plus unit/integration tests. Also adds a GitHub Actions workflow to build/version/publish the package on agent-cache-v* tags (and create a GitHub release), updates .gitignore to ignore root data/, and truncates the API’s usage-telemetry license key to a safe ...last4 suffix.

Reviewed by Cursor Bugbot for commit c238b5d.

KIvanow and others added 25 commits April 9, 2026 09:48
Co-authored-by: Kristiyan Ivanov <k.ivanow@gmail.com>
…nd list()

putWrites() stored pending writes to session keys but getTuple() never read them back, silently breaking interrupt/resume and human-in-the-loop workflows. Added extractPendingWrites() to reconstruct CheckpointPendingWrite tuples from session fields in both getTuple() and list().
…ause

  - Reset SessionTracker and gauge in flush() to stay in sync with Valkey
  - Measure valueJson instead of response for accurate storedBytes metric
  - Batch stats hincrby calls into pipeline to reduce round-trips
  - Use native ES2022 Error cause option in ValkeyCommandError
  - Add resetPolicies() to ToolCache, call from flush() to clear stale policies
  - Escape glob metacharacters in getAll/touch SCAN patterns
  - Reject glob chars in destroyThread to prevent accidental mass deletion
  - Pass token usage from AI SDK generate result to store() for cost tracking
…, document TTL side effect

  - Reject glob metacharacters in invalidateByTool() to prevent unintended key matches
  - URL-encode checkpointId in putWrites() to handle | delimiter safely
  - Update extractPendingWrites() to match encoded checkpointId prefix
  - Document that list() refreshes TTL on all checkpoints via getAll() side effect
…used newVersions

  - Reject colons in tool names at check/store/setPolicy to fix stats parsing
  - Escape cache name glob chars in flush() SCAN pattern
  - Remove newVersions from stored checkpoint data (no consumer)
  - Add tests for glob/colon rejection and list() before filter
…idateByModel

  - Short-circuit list() to read checkpoint:latest directly when limit=1 with no before filter
  - Parse timestamps to Date objects for correct chronological ordering
  - Remove console.warn from loadPolicies - libraries should not log to console
  - Escape cache name in invalidateByModel SCAN pattern to match flush behavior
…etry

  - Apply escapeGlobPattern(this.name) consistently in ToolCache.invalidateByTool,
    SessionStore.getAll, destroyThread, and touch SCAN patterns
  - Truncate license key to last 4 chars in PostHog events to avoid exposing full key
  - Fix misleading comment in list() fast path about getAll still being called
…recommendations

  - Move escapeGlobPattern and validateNoGlobChars to utils.ts, remove duplicates
  - Use Promise.all in put() and putWrites() for atomic/parallel writes
  - Default started=true in list() when before checkpoint not found
  - Resolve effective TTL through full hierarchy in toolEffectiveness()
  - Correct put() comment: Promise.all provides concurrency, not atomicity. Add note about crash consistency trade-off vs MULTI/EXEC.
  - Fix list() before filter: return empty when before checkpoint not found per LangGraph protocol (before means "older than this checkpoint")
  - Fix extractPendingWrites: parse idx from storage key and sort by it to preserve write ordering within a checkpoint
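The write-ordering fix above can be sketched like this. The field layout follows the adapter's documented storage scheme (`writes:{checkpoint_id}|{task_id}|{channel}|{idx}`); the reconstruction itself is illustrative, not the package's exact code:

```typescript
type PendingWrite = [taskId: string, channel: string, value: unknown];

// Reconstruct pending writes for one checkpoint from session hash fields.
// Sorting by the parsed idx preserves the order the writes were issued in,
// which plain field iteration does not guarantee.
function extractPendingWrites(
  fields: Record<string, string>,
  checkpointId: string,
): PendingWrite[] {
  const prefix = `writes:${encodeURIComponent(checkpointId)}|`;
  return Object.entries(fields)
    .filter(([field]) => field.startsWith(prefix))
    .map(([field, value]) => {
      const parts = field.slice(prefix.length).split("|");
      if (parts.length !== 3) return null; // unknown layout: skip safely
      const [taskId, channel, idx] = parts;
      return {
        idx: Number(idx),
        write: [taskId, channel, JSON.parse(value)] as PendingWrite,
      };
    })
    .filter((e): e is { idx: number; write: PendingWrite } => e !== null)
    .sort((a, b) => a.idx - b.idx)
    .map((e) => e.write);
}
```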
Add scanFieldsByPrefix() to SessionStore for targeted SCAN without TTL refresh side effects. Use it in BetterDBSaver's getTuple() and list() fast path instead of getAll(), which was silently extending TTL on all session fields for the thread.

Document intentionally omitted fields in the AI middleware cache hit return (rawCall, rawResponse, etc.) and add a test locking in list() behavior when the before checkpoint ID doesn't exist.
…before use

Casting undefined configurable fields with `as string` caused encodeURIComponent(undefined) to silently produce literal "undefined" keys, corrupting write storage. Now throws AgentCacheUsageError early.
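A minimal reproduction of that failure mode; the guard shown is a sketch (the package throws AgentCacheUsageError), and the key shape is illustrative:

```typescript
// encodeURIComponent stringifies its argument, so an undefined value cast
// with `as string` silently becomes the literal key segment "undefined":
const threadId = undefined as unknown as string;
const corrupt = `writes:${encodeURIComponent(threadId)}|task-1`;
// corrupt === "writes:undefined|task-1"

// Validating before use turns silent key corruption into an early, typed error:
function requireConfigurable(value: string | undefined, name: string): string {
  if (!value) {
    throw new Error(`putWrites() requires config.configurable.${name}`);
  }
  return value;
}
```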
…LicenseKey

- AI middleware now skips caching when response contains tool-call parts, preventing silent data loss on cache hit for tool-calling workflows
- Add SessionStore.scanFieldsByPrefix() unit tests (prefix matching, empty results, no TTL refresh, multi-page SCAN cursor)
- Add dedicated test for list() limit=1 fast path verifying it uses session.get + scanFieldsByPrefix instead of getAll
- Mark getLicenseKey() private since only consumer already uses a cast
- Delete unparseable LLM/tool cache entries on first detection instead of re-fetching them on every check() until TTL expires
- Add providerMetadata.agentCache.hit to AI middleware cache-hit responses so consumers can distinguish them from real zero-token calls
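The two middleware behaviors above can be sketched with a wrapGenerate-style helper. The types below are simplified stand-ins for the AI SDK's shapes, and `CacheLike` is an assumed interface, not the package's API:

```typescript
// Simplified stand-ins for the AI SDK shapes involved.
interface GenerateResult {
  text?: string;
  toolCalls?: unknown[];
  providerMetadata?: Record<string, Record<string, unknown>>;
}

interface CacheLike {
  get(key: string): Promise<GenerateResult | undefined>;
  set(key: string, value: GenerateResult): Promise<void>;
}

// On a hit, mark the response via providerMetadata.agentCache.hit so callers
// can tell it apart from a genuine zero-token model call. On a miss, cache
// only pure-text responses: tool-call parts are skipped so a later hit does
// not silently drop the tool-call leg of the round trip.
async function cachedGenerate(
  cache: CacheLike,
  key: string,
  doGenerate: () => Promise<GenerateResult>,
): Promise<GenerateResult> {
  const hit = await cache.get(key);
  if (hit) {
    return {
      ...hit,
      providerMetadata: { ...hit.providerMetadata, agentCache: { hit: true } },
    };
  }
  const result = await doGenerate();
  if (!result.toolCalls?.length) {
    await cache.set(key, result);
  }
  return result;
}
```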
…y, SessionTracker coverage, and ai peer dep

- Add unit tests for corrupt cache entry self-healing (del on invalid JSON) in LlmCache and ToolCache
- Switch LLM miss stats from standalone hincrby to pipeline, matching the tool cache pattern for consistency
- Export SessionTracker and add direct unit tests for LRU eviction logic and gauge tracking via set()
- Tighten ai peer dependency from ">=6.0.0" to "^6.0.135" and document the specificationVersion coupling with a code comment
…orev

- Write checkpoint before latest pointer sequentially in put() to prevent dangling references on partial failure
- Use set(0) instead of dec(count) in resetTracker() to avoid driving the active_sessions gauge negative after drift
- Pre-filter writes map in list() general path so extractPendingWrites only iterates relevant fields instead of the full session
- Await del() in corrupt cache entry handlers for guaranteed cleanup
- Document why parts.length !== 3 skip is safe in extractPendingWrites
@KIvanow KIvanow requested a review from jamby77 April 14, 2026 19:20
@github-actions

Thank you for your contribution! Before we can merge this PR, you need to sign our Contributor License Agreement.

To sign, please comment below with:

I have read the CLA Document and I hereby sign the CLA


1 out of 2 committers have signed the CLA.
✅ [KIvanow](https://github.com/KIvanow)
❌ @cursoragent
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.

Comment thread packages/agent-cache/src/adapters/langgraph.ts Outdated
Comment thread packages/agent-cache/src/AgentCache.ts

KIvanow commented Apr 14, 2026

> Thank you for your contribution! Before we can merge this PR, you need to sign our Contributor License Agreement.
> 1 out of 2 committers have signed the CLA. ✅ [KIvanow](https://github.com/KIvanow) ❌ @cursoragent

I should be able to sign it properly during a squash merge, so ignore this check for now

cursoragent and others added 4 commits April 14, 2026 19:53
@cursor cursor Bot left a comment
Cursor Bugbot has reviewed your changes and found 2 potential issues.

Autofix Details

Bugbot Autofix prepared fixes for both issues found in the latest run.

  • ✅ Fixed: Missing checkpoint ID validation in put() method
    • Added an explicit checkpoint.id guard in put() that throws AgentCacheUsageError before writing any session keys when the ID is missing or empty.
  • ✅ Fixed: Inconsistent tool name validation allows unclearable entries
    • Updated validateToolName to also reject glob metacharacters so check, store, and setPolicy enforce the same tool-name constraints as invalidateByTool.

Or push these changes by commenting:

@cursor push fcf3aafa0f
Preview (fcf3aafa0f)
diff --git a/packages/agent-cache/src/__tests__/ToolCache.test.ts b/packages/agent-cache/src/__tests__/ToolCache.test.ts
--- a/packages/agent-cache/src/__tests__/ToolCache.test.ts
+++ b/packages/agent-cache/src/__tests__/ToolCache.test.ts
@@ -300,6 +300,12 @@
       await expect(cache.setPolicy('my:tool', { ttl: 300 })).rejects.toThrow(AgentCacheUsageError);
     });
 
+    it('rejects glob metacharacters in check(), store(), and setPolicy()', async () => {
+      await expect(cache.check('tool[1]', {})).rejects.toThrow(AgentCacheUsageError);
+      await expect(cache.store('tool*', {}, 'result')).rejects.toThrow(AgentCacheUsageError);
+      await expect(cache.setPolicy('tool?name', { ttl: 300 })).rejects.toThrow(AgentCacheUsageError);
+    });
+
     it('rejects glob metacharacters in invalidateByTool()', async () => {
       await expect(cache.invalidateByTool('tool*')).rejects.toThrow(AgentCacheUsageError);
       await expect(cache.invalidateByTool('tool?name')).rejects.toThrow(AgentCacheUsageError);

diff --git a/packages/agent-cache/src/__tests__/adapters.test.ts b/packages/agent-cache/src/__tests__/adapters.test.ts
--- a/packages/agent-cache/src/__tests__/adapters.test.ts
+++ b/packages/agent-cache/src/__tests__/adapters.test.ts
@@ -640,6 +640,20 @@
     ).rejects.toThrow('put() requires config.configurable.thread_id');
   });
 
+  it('put() throws AgentCacheUsageError when checkpoint.id is missing', async () => {
+    const mockCache = createMockAgentCache();
+    const saver = new BetterDBSaver({ cache: mockCache });
+
+    await expect(
+      saver.put(
+        { configurable: { thread_id: 'thread-1' } },
+        { id: '', ts: '2024-01-01T00:00:00Z' } as any,
+        {},
+        {},
+      ),
+    ).rejects.toThrow('put() requires checkpoint.id');
+  });
+
   it('deleteThread() calls session.destroyThread', async () => {
     const mockCache = createMockAgentCache();
     (mockCache.session.destroyThread as ReturnType<typeof vi.fn>).mockResolvedValue(5);

diff --git a/packages/agent-cache/src/adapters/langgraph.ts b/packages/agent-cache/src/adapters/langgraph.ts
--- a/packages/agent-cache/src/adapters/langgraph.ts
+++ b/packages/agent-cache/src/adapters/langgraph.ts
@@ -88,6 +88,9 @@
       throw new AgentCacheUsageError('put() requires config.configurable.thread_id');
     }
     const checkpointId = checkpoint.id;
+    if (!checkpointId) {
+      throw new AgentCacheUsageError('put() requires checkpoint.id');
+    }
 
     // Note: newVersions is not stored - no current consumer needs it.
     // If version-based conflict detection is needed, add it back with a concrete use case.

diff --git a/packages/agent-cache/src/tiers/ToolCache.ts b/packages/agent-cache/src/tiers/ToolCache.ts
--- a/packages/agent-cache/src/tiers/ToolCache.ts
+++ b/packages/agent-cache/src/tiers/ToolCache.ts
@@ -4,10 +4,10 @@
 import { toolCacheHash, escapeGlobPattern, validateNoGlobChars } from '../utils';
 
 /**
- * Validate that tool name doesn't contain colons, which are used as key delimiters.
- * Tool names with colons would break stats parsing in AgentCache.stats().
+ * Validate tool names used in key construction and stats fields.
  */
 function validateToolName(toolName: string): void {
+  validateNoGlobChars(toolName, 'toolName');
   if (toolName.includes(':')) {
     throw new AgentCacheUsageError(
       `Tool name "${toolName}" contains colon (:). ` +


Comment thread packages/agent-cache/src/adapters/langgraph.ts
Comment thread packages/agent-cache/src/tiers/ToolCache.ts

cursor Bot commented Apr 14, 2026

Bugbot Autofix prepared fixes for both issues found in the latest run.

  • ✅ Fixed: Checkpoint ID "latest" collides with pointer field name
    • Changed the latest pointer to a reserved field (__checkpoint_latest) with legacy fallback handling so checkpoints with id latest remain listable and retrievable.
  • ✅ Fixed: Variable shadowing obscures which toolStats is returned
    • Renamed the inner per-tool loop variable to perToolEntry to remove shadowing and make the returned aggregate toolStats unambiguous.


Or push these changes by commenting:

@cursor push 524703fc0a
Preview (524703fc0a)
diff --git a/packages/agent-cache/src/AgentCache.ts b/packages/agent-cache/src/AgentCache.ts
--- a/packages/agent-cache/src/AgentCache.ts
+++ b/packages/agent-cache/src/AgentCache.ts
@@ -154,8 +154,8 @@
     }
 
     // Compute hit rates for per-tool stats
-    for (const toolStats of Object.values(perTool)) {
-      toolStats.hitRate = computeHitRate(toolStats.hits, toolStats.misses);
+    for (const perToolEntry of Object.values(perTool)) {
+      perToolEntry.hitRate = computeHitRate(perToolEntry.hits, perToolEntry.misses);
     }
 
     return {

diff --git a/packages/agent-cache/src/__tests__/adapters.test.ts b/packages/agent-cache/src/__tests__/adapters.test.ts
--- a/packages/agent-cache/src/__tests__/adapters.test.ts
+++ b/packages/agent-cache/src/__tests__/adapters.test.ts
@@ -330,12 +330,12 @@
     );
     expect(mockCache.session.set).toHaveBeenCalledWith(
       'thread-1',
-      'checkpoint:latest',
+      '__checkpoint_latest',
       expect.any(String),
     );
   });
 
-  it('list() limit=1 fast path reads checkpoint:latest and uses scanFieldsByPrefix (not getAll)', async () => {
+  it('list() limit=1 fast path reads latest pointer and uses scanFieldsByPrefix (not getAll)', async () => {
     const mockCache = createMockAgentCache();
     const latestTuple = {
       config: { configurable: { thread_id: 'thread-1', checkpoint_id: 'cp-3' } },
@@ -358,7 +358,7 @@
     expect(results[0].checkpoint.id).toBe('cp-3');
     expect(results[0].pendingWrites).toEqual([['task-1', 'output', 'fast-result']]);
 
-    expect(mockCache.session.get).toHaveBeenCalledWith('thread-1', 'checkpoint:latest');
+    expect(mockCache.session.get).toHaveBeenCalledWith('thread-1', '__checkpoint_latest');
     expect(mockCache.session.scanFieldsByPrefix).toHaveBeenCalledWith('thread-1', 'writes:cp-3|');
     expect(mockCache.session.getAll).not.toHaveBeenCalled();
   });
@@ -399,6 +399,34 @@
     expect(results[1].pendingWrites).toBeUndefined();
   });
 
+  it('list() includes checkpoint with id "latest"', async () => {
+    const mockCache = createMockAgentCache();
+    const checkpoints = {
+      'checkpoint:latest': JSON.stringify({
+        config: { configurable: { thread_id: 'thread-1', checkpoint_id: 'latest' } },
+        checkpoint: { id: 'latest', ts: '2024-01-03T00:00:00Z' },
+        metadata: {},
+      }),
+      'checkpoint:cp-1': JSON.stringify({
+        config: { configurable: { thread_id: 'thread-1', checkpoint_id: 'cp-1' } },
+        checkpoint: { id: 'cp-1', ts: '2024-01-01T00:00:00Z' },
+        metadata: {},
+      }),
+    };
+    (mockCache.session.getAll as ReturnType<typeof vi.fn>).mockResolvedValue(checkpoints);
+
+    const saver = new BetterDBSaver({ cache: mockCache });
+    const results: any[] = [];
+
+    for await (const tuple of saver.list({ configurable: { thread_id: 'thread-1' } })) {
+      results.push(tuple);
+    }
+
+    expect(results).toHaveLength(2);
+    expect(results[0].checkpoint.id).toBe('latest');
+    expect(results[1].checkpoint.id).toBe('cp-1');
+  });
+
   it('list() respects limit option', async () => {
     const mockCache = createMockAgentCache();
     const checkpoints = {

diff --git a/packages/agent-cache/src/adapters/langgraph.ts b/packages/agent-cache/src/adapters/langgraph.ts
--- a/packages/agent-cache/src/adapters/langgraph.ts
+++ b/packages/agent-cache/src/adapters/langgraph.ts
@@ -10,6 +10,9 @@
 import type { AgentCache } from '../AgentCache';
 import { AgentCacheUsageError } from '../errors';
 
+const LATEST_CHECKPOINT_POINTER_FIELD = '__checkpoint_latest';
+const LEGACY_LATEST_CHECKPOINT_POINTER_FIELD = 'checkpoint:latest';
+
 export interface BetterDBSaverOptions {
   /** A pre-configured AgentCache instance. */
   cache: AgentCache;
@@ -23,7 +26,7 @@
  *
  * Storage layout in session tier:
  *   {name}:session:{thread_id}:checkpoint:{checkpoint_id} = JSON(CheckpointTuple)
- *   {name}:session:{thread_id}:checkpoint:latest = JSON(CheckpointTuple)
+ *   {name}:session:{thread_id}:__checkpoint_latest = JSON(CheckpointTuple)
  *   {name}:session:{thread_id}:writes:{checkpoint_id}|{task_id}|{channel}|{idx} = JSON(value)
  *
  * Known limitations:
@@ -49,9 +52,17 @@
     if (!threadId) return undefined;
 
     const checkpointId = config.configurable?.checkpoint_id as string | undefined;
-    const field = checkpointId ? `checkpoint:${checkpointId}` : 'checkpoint:latest';
-
-    const data = await this.cache.session.get(threadId, field);
+    const field = checkpointId
+      ? `checkpoint:${checkpointId}`
+      : LATEST_CHECKPOINT_POINTER_FIELD;
+    let data = await this.cache.session.get(threadId, field);
+    // Backward compatibility for pointers written before __checkpoint_latest.
+    if (!data && !checkpointId) {
+      data = await this.cache.session.get(
+        threadId,
+        LEGACY_LATEST_CHECKPOINT_POINTER_FIELD,
+      );
+    }
     if (!data) return undefined;
 
     let tuple: CheckpointTuple;
@@ -105,7 +116,7 @@
     // would reference a non-existent checkpoint. Sequential order ensures latest
     // only points to a checkpoint that already exists.
     await this.cache.session.set(threadId, `checkpoint:${checkpointId}`, serialized);
-    await this.cache.session.set(threadId, 'checkpoint:latest', serialized);
+    await this.cache.session.set(threadId, LATEST_CHECKPOINT_POINTER_FIELD, serialized);
 
     return {
       ...config,
@@ -149,11 +160,21 @@
     if (!threadId) return;
 
     // Fast path: limit=1 with no before filter is the common case (fetch latest).
-    // Short-circuit by reading checkpoint:latest directly to avoid parsing and sorting
+    // Short-circuit by reading the latest pointer directly to avoid parsing and sorting
     // all checkpoints. Uses scanFieldsByPrefix() for writes to avoid refreshing TTL on
     // unrelated session fields (getAll() has a sliding window side effect).
     if (options?.limit === 1 && !options?.before) {
-      const latestData = await this.cache.session.get(threadId, 'checkpoint:latest');
+      let latestData = await this.cache.session.get(
+        threadId,
+        LATEST_CHECKPOINT_POINTER_FIELD,
+      );
+      // Backward compatibility for pointers written before __checkpoint_latest.
+      if (!latestData) {
+        latestData = await this.cache.session.get(
+          threadId,
+          LEGACY_LATEST_CHECKPOINT_POINTER_FIELD,
+        );
+      }
       if (latestData) {
         try {
           const tuple: CheckpointTuple = JSON.parse(latestData);
@@ -184,10 +205,15 @@
     for (const [field, value] of Object.entries(all)) {
       if (field.startsWith('writes:')) {
         writeFields[field] = value;
-      } else if (field.startsWith('checkpoint:') && field !== 'checkpoint:latest') {
+      } else if (field.startsWith('checkpoint:')) {
         try {
           const tuple: CheckpointTuple = JSON.parse(value);
-          checkpoints.push(tuple);
+          const isLegacyLatestPointer =
+            field === LEGACY_LATEST_CHECKPOINT_POINTER_FIELD &&
+            tuple.checkpoint?.id !== 'latest';
+          if (!isLegacyLatestPointer) {
+            checkpoints.push(tuple);
+          }
         } catch {
           /* skip corrupt entries */
         }


Comment thread packages/agent-cache/src/adapters/langgraph.ts
Comment thread packages/agent-cache/scripts/inject-telemetry-defaults.mjs Outdated
Comment thread packages/agent-cache/src/adapters/langchain.ts
Comment thread packages/agent-cache/src/adapters/langgraph.ts
Comment thread packages/agent-cache/src/adapters/ai.ts
KIvanow added 2 commits April 15, 2026 11:12
  - Wrap flush() SCAN/DEL loop in try/finally so in-memory state resets on partial failure
  - Use replaceAll in inject-telemetry-defaults.mjs to cover all placeholder occurrences
  - Handle mixed valid/NaN case in langgraph checkpoint timestamp sort
for (const [placeholder, value] of Object.entries(replacements)) {
  if (value && source.includes(placeholder)) {
    source = source.replace(placeholder, value);
    replaced++;
cursor Bot left a comment

String replace mishandles dollar-sign patterns in values

Low Severity

String.prototype.replace() interprets special replacement patterns ($&, $`, $', $$) in the second argument even when the first argument is a plain string. If a POSTHOG_API_KEY or POSTHOG_HOST environment variable contains a $ followed by one of these special characters, the injected value in analytics.js will be corrupted, potentially breaking analytics initialization at runtime.
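The hazard is easy to reproduce; the placeholder name below is illustrative, not the script's actual code:

```typescript
// String.prototype.replace interprets $-patterns in a *string* replacement
// value, even when the search argument is a plain string:
const value = "key$&suffix";
const corrupted = "HOST=__POSTHOG_HOST__".replace("__POSTHOG_HOST__", value);
// corrupted === "HOST=key__POSTHOG_HOST__suffix" ($& re-inserts the match)

// Passing a replacement *function* disables pattern interpretation, so the
// value is injected verbatim:
const safe = "HOST=__POSTHOG_HOST__".replace("__POSTHOG_HOST__", () => value);
// safe === "HOST=key$&suffix"
```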


Reviewed by Cursor Bugbot for commit 4361e4c.

KIvanow (Member Author) replied:

That's fine as a potential issue for us, doesn't impact users

Comment thread packages/agent-cache/src/tiers/SessionStore.ts Outdated
Co-authored-by: Kristiyan Ivanov <k.ivanow@gmail.com>
@cursor cursor Bot left a comment

Cursor Bugbot has reviewed your changes and found 2 potential issues.

There are 3 total unresolved issues (including 1 from previous review).


Reviewed by Cursor Bugbot for commit fc9ab97.

Comment thread packages/agent-cache/src/utils.ts Outdated
Comment thread packages/agent-cache/src/AgentCache.ts
cursoragent and others added 2 commits April 15, 2026 13:26
Co-authored-by: Kristiyan Ivanov <k.ivanow@gmail.com>
Co-authored-by: Kristiyan Ivanov <k.ivanow@gmail.com>
@jamby77 jamby77 (Collaborator) left a comment

Great initial version with potential

@KIvanow KIvanow merged commit 73ca569 into master Apr 15, 2026
2 of 3 checks passed
@KIvanow KIvanow deleted the agent-cache branch April 15, 2026 19:51
@github-actions github-actions Bot locked and limited conversation to collaborators Apr 15, 2026