
Fix 429 rate limits — batch transport, memory cache, more endpoints#400

Merged
realproject7 merged 2 commits into `main` from `task/399-rpc-batch-cache` on Mar 21, 2026

Conversation

@realproject7 (Owner)

Summary

- **Batch transport:** enable `batch: true` on both `browserClient` and `publicClient` — viem combines multiple pending `readContract()` calls into a single JSON-RPC batch HTTP request
- **Faster failover:** reduce the timeout from 5-10s to 2s on all transports so endpoints rotate sooner on failure
- **12 RPC endpoints:** expand the CORS and server endpoint lists from 5/8 to 12 each (matching mintpad/mint.club patterns)
- **Singleton memory cache (`lib/cache.ts`):** 60s TTL plus in-flight request deduplication — prevents duplicate concurrent RPC calls for the same token
- **Price function caching:** `getTokenPrice()`, `get24hPriceChange()`, `getTokenTVL()`, and `getBatchTokenData()` are all wrapped with `priceCache.get()`
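The singleton cache described above can be sketched as follows. The names `MemoryCache` and `priceCache` come from the PR description; the internals here are assumptions, not the actual `lib/cache.ts` code:

```typescript
// Minimal sketch of a TTL cache with in-flight request deduplication.
type Entry<T> = { value: T; expires: number };

class MemoryCache {
  private store = new Map<string, Entry<unknown>>();
  private inFlight = new Map<string, Promise<unknown>>();

  constructor(private ttlMs: number) {}

  async get<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
    const hit = this.store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value as T;

    // Deduplicate: concurrent callers for the same key share one promise.
    const pending = this.inFlight.get(key);
    if (pending) return pending as Promise<T>;

    const p = fetcher()
      .then((value) => {
        // Only successful results are cached; rejections propagate uncached.
        this.store.set(key, { value, expires: Date.now() + this.ttlMs });
        return value;
      })
      .finally(() => this.inFlight.delete(key));
    this.inFlight.set(key, p);
    return p;
  }
}

const priceCache = new MemoryCache(60_000); // 60s TTL, shared singleton
```

Because concurrent `get()` calls for the same key share one promise, a burst of components asking for the same token's price triggers a single underlying RPC fetch.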

Expected Impact

| Before | After |
| --- | --- |
| 15+ HTTP requests per page load | 2-3 batched HTTP requests |
| 5 CORS endpoints, 5-10s timeout | 12 endpoints, 2s timeout |
| No dedup; duplicate concurrent calls | Singleton cache deduplicates |
| 429 errors in the console | Clean console |

Files Changed

| File | Change |
| --- | --- |
| `lib/rpc.ts` | `batch: true`, 2s timeout, 12 endpoints, new display names |
| `lib/cache.ts` | New — singleton `MemoryCache` with TTL + in-flight dedup |
| `lib/price.ts` | All four price functions wrapped with `priceCache.get()` |
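For context, the `lib/rpc.ts` change likely amounts to passing `batch: true` and a `timeout` to viem's `http` transport, with the `fallback` transport rotating across endpoints on failure. The chain and endpoint URLs below are placeholders for illustration, not the PR's actual configuration:

```typescript
import { createPublicClient, fallback, http } from "viem";
import { base } from "viem/chains"; // placeholder chain; the PR does not name one

// Placeholder endpoints — the PR ships 12 CORS-friendly URLs.
const endpoints = [
  "https://mainnet.base.org",
  "https://base.llamarpc.com",
  // ...up to 12 endpoints
];

const publicClient = createPublicClient({
  chain: base,
  transport: fallback(
    // batch: true lets viem coalesce pending calls into one JSON-RPC batch;
    // the 2s timeout makes fallback rotate to the next endpoint sooner.
    endpoints.map((url) => http(url, { batch: true, timeout: 2_000 })),
  ),
});
```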

Test plan

- `npm run typecheck` passes
- `npm run lint` passes
- Browse the homepage — no 429 errors in the console
- Navigate to a token page — the price loads from cache on repeat visits within 60s
- The Network tab shows batched JSON-RPC requests (multiple calls in a single HTTP body)
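For reference when inspecting the Network tab: a JSON-RPC 2.0 batch puts several request objects into one JSON array, so a single HTTP POST body carries many calls. A minimal illustration with placeholder addresses:

```typescript
// One HTTP POST body carrying two eth_call requests (JSON-RPC 2.0 batch).
// Addresses and calldata are placeholders.
const batchBody = [
  {
    jsonrpc: "2.0",
    id: 1,
    method: "eth_call",
    params: [{ to: "0x0000000000000000000000000000000000000001", data: "0x" }, "latest"],
  },
  {
    jsonrpc: "2.0",
    id: 2,
    method: "eth_call",
    params: [{ to: "0x0000000000000000000000000000000000000002", data: "0x" }, "latest"],
  },
];
// The node replies with an array of responses, matched to requests by id.
```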

Fixes #399

🤖 Generated with Claude Code

- Enable batch: true on both browserClient and publicClient transports
- Reduce timeouts from 5-10s to 2s for faster endpoint rotation
- Expand CORS and server endpoint lists from 5/8 to 12 each
- Add lib/cache.ts with singleton MemoryCache (60s TTL + in-flight dedup)
- Wrap getTokenPrice, get24hPriceChange, getTokenTVL, getBatchTokenData
  with priceCache to eliminate duplicate concurrent RPC calls

Fixes #399

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@project7-interns (Collaborator) left a comment


Verdict: APPROVE

Summary

The PR matches issue #399 with the intended narrow scope: batched RPC transports, expanded endpoint pools, and shared in-memory dedup caching for the targeted price helpers. I did not find a correctness or design regression in the changed paths, and the required lint/typecheck job is passing.

Findings

  • None.

Decision

Approve: the change satisfies the acceptance criteria and keeps the fix isolated to the requested lib/ surface area.

- Remove try/catch from inside cache fetchers so transient RPC failures
  propagate as errors (cache.get's .catch path correctly skips caching)
- Outer try/catch at function level still returns null to callers
- When a custom client is passed, bypass the cache entirely to avoid
  returning stale results from a different RPC client

Addresses T2b review feedback on PR #400.
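The error-handling shape this follow-up describes can be sketched as below. The stubs (`PublicClient`, `defaultClient`, the pass-through `priceCache`) are assumptions standing in for the real `lib/` modules, not the PR's actual code:

```typescript
// Stubs for illustration only — not the PR's real modules.
type PublicClient = { fetchPrice: (token: string) => Promise<number> };
const defaultClient: PublicClient = { fetchPrice: async () => 1.23 };
const priceCache = {
  // The real cache adds a 60s TTL and in-flight dedup; a pass-through
  // is enough to show the error-handling shape.
  async get<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
    return fetcher();
  },
};

// The reviewed pattern: no try/catch inside the cache fetcher, so a transient
// RPC failure rejects (and is never cached); the outer try/catch still returns
// null to callers; a caller-supplied client bypasses the shared cache.
async function getTokenPrice(
  token: string,
  client?: PublicClient,
): Promise<number | null> {
  try {
    if (client) return await client.fetchPrice(token); // bypass shared cache
    return await priceCache.get(`price:${token}`, () =>
      defaultClient.fetchPrice(token),
    );
  } catch {
    return null;
  }
}
```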

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@project7-interns (Collaborator) left a comment


Both issues resolved cleanly. Errors now propagate through the cache (no stale null caching), and custom-client calls bypass the cache entirely. LGTM.

@realproject7 merged commit 0249045 into `main` on Mar 21, 2026
1 check passed


Linked issue (closed by this merge): [Infra] Fix 429 rate limits — batch transport, memory cache, more endpoints
