
perf(redis-lua): stream ZRANGEBYSCORE from the score index#570

Merged
bootjp merged 10 commits into main from perf/redis-lua-zrange-streaming
Apr 21, 2026

Conversation

@bootjp
Owner

@bootjp bootjp commented Apr 21, 2026

Summary

Phase B of the ZSet fast-path plan. Production pprof after PRs #565 / #567 / #568 showed cmdZRangeByScore + cmdZRangeByScoreAsc holding 40 active goroutines — the top residual Lua-path signal — driven by BullMQ delay-queue polls. The legacy path loaded ALL zset members via zsetState → ensureZSetLoaded before filtering by score bounds: O(N) GetAts per ZRANGEBYSCORE, so an N=1000 delay queue cost several seconds per call.

Replace load-then-filter with a bounded scan over the pre-sorted score index (!zs|scr|<user>|<sortable-score>|<member>).

Changes

adapter/redis_compat_commands.go — new zsetRangeByScoreFast helper plus small utilities:

  • Legacy-blob zsets, string-priority corruption, and empty-result-but-zset-missing cases return hit=false so the slow path's WRONGTYPE / disambiguation semantics remain intact.
  • zsetRangeEmptyFastResult checks ZSetMetaKey to separate "empty range on a live zset" from "no zset here".
  • Split into zsetFastPathEligible / zsetFastScanLimit / zsetScoreScan / decodeZSetScoreRange / finalizeZSetFastRange to stay under the cyclomatic-complexity cap.

adapter/redis_lua_context.go — cmdZRangeByScore is gated on luaZSetAlreadyLoaded + cachedType (mirroring the PR #567 / #568 safety guards):

  • In-script ZADD / ZREM / DEL / SET / type-change still go through the legacy full-load path (cmdZRangeByScoreSlow).
  • zsetScoreScanBounds maps zScoreBound (-Inf / value-inc / value-exc / +Inf) to a byte range; exclusive edges are handled post-scan by scoreInRange.
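A sketch of how such a post-scan bound check can work — the zScoreBound field names here are assumptions; only the kind names (-Inf / value-inc / value-exc / +Inf) and scoreInRange come from the PR text:

```go
package main

import "fmt"

// Hypothetical mirror of the zScoreBound shape named above; the real
// field names in adapter/redis_lua_context.go may differ.
type zBoundKind int

const (
	zBoundNegInf zBoundKind = iota
	zBoundValue
	zBoundPosInf
)

type zScoreBound struct {
	kind      zBoundKind
	score     float64
	exclusive bool
}

func aboveMin(score float64, b zScoreBound) bool {
	switch b.kind {
	case zBoundNegInf:
		return true
	case zBoundPosInf:
		return false
	}
	if b.exclusive {
		return score > b.score
	}
	return score >= b.score
}

func belowMax(score float64, b zScoreBound) bool {
	switch b.kind {
	case zBoundPosInf:
		return true
	case zBoundNegInf:
		return false
	}
	if b.exclusive {
		return score < b.score
	}
	return score <= b.score
}

// scoreInRange is the post-scan filter: the byte-range scan keys off
// score prefixes, so exclusive edges are re-checked on decoded scores.
func scoreInRange(score float64, min, max zScoreBound) bool {
	return aboveMin(score, min) && belowMax(score, max)
}

func main() {
	min := zScoreBound{kind: zBoundValue, score: 2, exclusive: true} // "(2"
	max := zScoreBound{kind: zBoundPosInf}                           // "+inf"
	fmt.Println(scoreInRange(2, min, max)) // excluded by the exclusive edge
	fmt.Println(scoreInRange(3, min, max)) // included
}
```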

Per-call effect on BullMQ-style delay-queue polls

Scenario — Before → After (fast-path hit):
  • ZRANGEBYSCORE key 0 now LIMIT 0 10 over an N=1000 zset: ~1000 member GetAts → 1 legacy-blob ExistsAt + 1 bounded ScanAt (≤10 rows) + 1 priority-guard ExistsAt + 1 TTL probe
  • Empty range: ~1000 GetAts → 1 legacy-blob ExistsAt + 1 ZSetMetaKey check + 1 TTL probe
  • In-script ZADD then ZRANGEBYSCORE: unchanged (slow path via luaZSetAlreadyLoaded)

Test plan

  • go test -race -short ./adapter/... passes
  • 10 new cases in adapter/redis_lua_collection_fastpath_test.go:
    • fast-path hit (ascending)
    • WITHSCORES
    • LIMIT + offset
    • exclusive bounds ((min and (max)
    • empty range on live zset
    • missing key
    • WRONGTYPE (key holds a string)
    • in-script ZADD visible to subsequent ZRANGEBYSCORE
    • ZREVRANGEBYSCORE (reverse iteration)
    • TTL-expired zset
  • Production: deploy and verify that active cmdZRangeByScore goroutines drop from ~40 (20 per variant) back toward the keyTypeAt-style residual. Expect p50 redis.call latency inside Lua to fall from the current ~3 s toward ms-scale as delay-queue polls stop pulling the whole zset.

Follow-ups (out of scope)

  • ZRANGE (by rank) streaming — needs the same pattern with positional iteration.
  • ZPOPMIN — writes; needs the streaming scan plus the delete-member transaction plumbing.
  • HGETALL / HMGET — separate wide-column prefix scans.

Summary by CodeRabbit

  • New Features

    • Optimized sorted-set range-by-score fast path for faster, paginated queries with correct inclusive/exclusive bound handling.
  • Bug Fixes

    • Safer fallback to full-path when results may be incomplete; fixed empty-range, truncation, type-mismatch and TTL/expiration behaviors; negative LIMIT offsets now rejected and negative limits treated as “no limit.”
  • Tests

    • Added extensive Lua-scripted tests covering range reads, WITHSCORES, LIMIT semantics, exclusive bounds, in-script mutations, TTL, and regressions.

Production pprof after #565 / #567 / #568 showed
`cmdZRangeByScore` + `cmdZRangeByScoreAsc` holding 40 active
goroutines (top Lua-path signal), driven by BullMQ delay-queue
polls. The legacy path loaded ALL zset members via
zsetState -> ensureZSetLoaded before filtering by score bounds --
O(N) GetAts per ZRANGEBYSCORE, so an N=1000 delay queue cost
several seconds per call and was the dominant residual after the
previous keyTypeAt fast-paths landed.

Replace the load-then-filter flow with a bounded scan over the
pre-sorted score index:

- adapter/redis_compat_commands.go adds zsetRangeByScoreFast which
  translates [startKey, endKey) byte bounds + offset / limit /
  scoreFilter into a ScanAt / ReverseScanAt on
  ZSetScoreRangeScanPrefix. Legacy-blob zsets, string-priority
  corruption, and empty-result-but-zset-is-missing cases bail out
  with hit=false so the slow path's WRONGTYPE / disambiguation
  semantics are preserved. zsetRangeEmptyFastResult checks
  ZSetMetaKey to distinguish "empty range on a live zset" from "no
  zset here". Split into zsetFastPathEligible, zsetFastScanLimit,
  zsetScoreScan, decodeZSetScoreRange, finalizeZSetFastRange
  helpers so the main entrypoint stays under the cyclomatic-
  complexity cap.
- adapter/redis_lua_context.go cmdZRangeByScore consults
  luaZSetAlreadyLoaded + cachedType (mirrors PR #567 / #568 safety
  guards) before entering the fast path. The legacy full-load path
  lives in cmdZRangeByScoreSlow and still runs for in-script
  mutations / legacy-blob / type-change cases. zsetScoreScanBounds
  maps zScoreBound (-Inf / value-inc / value-exc / +Inf) to the
  byte range; exclusive edges are handled post-scan by
  scoreInRange.

Per-call effect on BullMQ-style delay-queue polls with LIMIT:
  Before: O(N) member GetAts + in-process filter
  After  (fast-path hit): 1 legacy-blob ExistsAt + a single bounded
           ScanAt (N_range <= offset + limit, clamped by
           maxWideScanLimit) + 1 priority-guard ExistsAt + 1 TTL
           probe.

Tests: 10 new cases in redis_lua_collection_fastpath_test.go --
fast-path hit / WITHSCORES / LIMIT offset / exclusive bounds /
empty range / missing key / WRONGTYPE / in-script ZADD visibility
/ ZREVRANGEBYSCORE / TTL-expired. All pass under -race along with
the existing adapter suite.
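The hit/fallback contract described above — any hit=false answer discards the fast-path attempt and the legacy full-load path produces the result — can be sketched with stand-in types (fastRange and slowRange are hypothetical stubs, not the repo's functions):

```go
package main

import "fmt"

type zEntry struct {
	member string
	score  float64
}

// fastRange is a stand-in for zsetRangeByScoreFast: hit=false means the
// fast path cannot answer authoritatively and the caller must fall back.
func fastRange(ok bool) ([]zEntry, bool, error) {
	if !ok {
		return nil, false, nil
	}
	return []zEntry{{"a", 1}}, true, nil
}

// slowRange stands in for the legacy full-load path.
func slowRange() []zEntry { return []zEntry{{"a", 1}, {"b", 2}} }

// rangeByScore mirrors the dispatch: errors propagate, hit=false falls
// back, hit=true is returned as-is.
func rangeByScore(fastOK bool) ([]zEntry, error) {
	entries, hit, err := fastRange(fastOK)
	if err != nil {
		return nil, err
	}
	if !hit {
		return slowRange(), nil
	}
	return entries, nil
}

func main() {
	fmt.Println(rangeByScore(true))
	fmt.Println(rangeByScore(false))
}
```

The key design property is that hit=false is always safe: WRONGTYPE, legacy-blob, and disambiguation semantics stay with the slow path, which remains authoritative.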
@coderabbitai

coderabbitai Bot commented Apr 21, 2026

Warning

Rate limit exceeded

@bootjp has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 10 minutes and 28 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: aaff1cf9-b511-46f8-98e3-857ddd946804

📥 Commits

Reviewing files that changed from the base of the PR and between b718dc9 and 0d1921a.

📒 Files selected for processing (1)
  • adapter/redis_lua_context.go
📝 Walkthrough


Adds a wide-column fast read path for ZSET score-range queries (ZRANGEBYSCORE/ZREVRANGEBYSCORE): indexed byte-range scanning over score keys, candidate decoding with post-scan score filtering and pagination, TTL/priority checks, and safe fallbacks to the legacy slow path when correctness cannot be guaranteed.

Changes

Cohort / File(s) — Summary

  • ZSET Fast-Path Core (adapter/redis_compat_commands.go): new fast-path APIs and helpers — zsetRangeByScoreFast, finalizeZSetFastRange, zsetFastPathEligible, zsetScoreScan, zsetRangeEmptyFastResult, zsetFastScanLimit, decodeZSetScoreRange, zsetFastPathTruncated, zsetDecodeAllocSize. Implements scan quota/clamping, directional score-index scans, decode + filter + pagination, TTL/priority checks, and safe hit=false conditions. Adds bytes usage for range comparisons.
  • Lua Context & Command Flow (adapter/redis_lua_context.go): threads the request context.Context into luaScriptContext; refactors cmdZRangeByScore to parse options (validating LIMIT semantics), adds the zrangeByScoreFastPath invocation using zsetScoreScanBounds, and extracts the legacy behavior into cmdZRangeByScoreSlow. Adjusts reply pre-allocation.
  • Fast-Path Tests (adapter/redis_lua_collection_fastpath_test.go): adds extensive Lua integration tests (plus math and strconv imports) covering forward/reverse fast-path behavior, WITHSCORES, LIMIT/offset semantics (including negative/large cases), exclusive bounds, empty/missing-key cases, WRONGTYPE errors, in-script ZADD visibility, TTL expiry, truncation fallback, and exact ±inf boundary cases.

Sequence Diagram

sequenceDiagram
    participant Lua as "Lua Eval"
    participant LuaCtx as "Lua Context"
    participant FastCheck as "Fast-Path Eligibility"
    participant Scanner as "Score Scanner"
    participant Decoder as "Decoder"
    participant Finalizer as "Finalizer"
    participant Store as "Storage"

    Lua->>LuaCtx: ZRANGEBYSCORE key min max [opts]
    LuaCtx->>FastCheck: zsetFastPathEligible(key, ctx)
    FastCheck->>Store: probe metadata / legacy-blob
    Store-->>FastCheck: metadata (type, blob presence, TTL)
    alt eligible
        LuaCtx->>LuaCtx: compute [startKey,endKey) via zsetScoreScanBounds
        LuaCtx->>Scanner: zsetScoreScan(startKey,endKey,scanLimit,reverse,ctx)
        Scanner->>Store: scan index keys
        Store-->>Scanner: []*KVPair (candidates)
        Scanner-->>Decoder: decodeZSetScoreRange(candidates, offset, limit, scoreFilter)
        Decoder-->>LuaCtx: decoded entries
        LuaCtx->>Finalizer: finalizeZSetFastRange(entries, offset, limit, scanLimit, scoreFilter)
        Finalizer->>Store: check TTL/priority, resolve ambiguous cases
        Store-->>Finalizer: TTL/status
        Finalizer-->>Lua: result (hit=true or hit=false)
    else ineligible or hit=false
        FastCheck-->>LuaCtx: ineligible / require slow path
        LuaCtx->>Store: load full zset (slow path)
        Store-->>Lua: result (slow path)
    end

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through bytes and bounded scores today,
Scanning rows where members dance and play,
I filter, page, and guard TTL with care,
If doubt appears, the slow path waits right there,
Hooray — fast ranges found, or safely spared.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 51.85%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check — ✅ Passed (check skipped: CodeRabbit's high-level summary is enabled).
  • Title check — ✅ Passed: the title 'perf(redis-lua): stream ZRANGEBYSCORE from the score index' clearly and specifically describes the main change — streaming ZRANGEBYSCORE results directly from the score index rather than loading the full zset.
  • Linked Issues check — ✅ Passed (check skipped: no linked issues found for this pull request).
  • Out of Scope Changes check — ✅ Passed (check skipped: no linked issues found for this pull request).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements a streaming fast path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands in Redis Lua scripts, enabling efficient score-index scanning for wide-column sets without loading the entire set into memory. The changes include eligibility logic, scanning helpers, and extensive unit tests. A review comment suggests replacing context.Background() with a request-scoped context to ensure proper propagation of deadlines and cancellations.

Comment thread adapter/redis_lua_context.go Outdated
return scoreInRange(score, options.minBound, options.maxBound)
}
return c.server.zsetRangeByScoreFast(
context.Background(), key, startKey, endKey, reverse,

medium

Using context.Background() in a request handler or command execution path is discouraged as it loses the request-scoped context, which is necessary for managing deadlines and cancellation. Please pass the existing context from the caller if available. Note that if this propagation requires wide-reaching API changes, it should be deferred to a dedicated follow-up pull request to manage complexity and avoid scope creep.

References
  1. When designing interfaces, use context.Context for managing deadlines and cancellation to prevent ambiguity.
  2. During large refactorings, defer wide-reaching API changes like context propagation to dedicated follow-up pull requests to manage complexity.

Addresses the gemini note on PR #570 about context.Background() in
the fast-path call site. Full propagation across the 15+ legacy
Background() call sites in redis_lua_context.go is deferred to a
dedicated follow-up PR (as gemini suggested) to keep this change
focused on the streaming scan.

Minimal bridging change so the new code does not entrench the
pattern:

- luaScriptContext gains a ctx field captured from
  newLuaScriptContext's parameter, so the request-scoped context
  is reachable from every cmd* handler. The field is documented as
  "new handlers should propagate this into store calls" with the
  caveat that migrating the existing Background() sites is
  tracked separately.
- scriptCtx() helper returns c.ctx with a context.Background()
  fallback for unit tests that construct luaScriptContext
  directly.
- zrangeByScoreFastPath now passes scriptCtx() into
  zsetRangeByScoreFast, so the Eval's deadline / cancellation
  reach store.ScanAt / ReverseScanAt / ExistsAt on the hot
  BullMQ delay-queue poll path. The legacy full-load fallback
  path (cmdZRangeByScoreSlow -> zsetState) still uses the old
  Background() pattern, same as every other cmd in this file,
  pending the follow-up migration.
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Pushed the fixes for gemini's comment (5e124b97).

  • Added a ctx field to luaScriptContext; newLuaScriptContext now stores the ctx it receives
  • The new fast path (zrangeByScoreFastPath) now uses c.ctx via the scriptCtx() helper, so the Eval's deadline / cancellation reaches the store layer on the BullMQ hot path
  • The 15+ existing context.Background() call sites will be migrated together in a separate PR, as gemini suggested, to keep this stack's scope contained. With the correct pattern now in the new code, subsequent cmd* handlers can thread ctx through via scriptCtx()


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@adapter/redis_compat_commands.go`:
- Around line 1928-1936: The fast-path in zsetRangeEmptyFastResult incorrectly
uses r.store.ExistsAt(store.ZSetMetaKey(key)) which misses delta-only wide ZSETs
and skips the string-priority guard; replace that simple ExistsAt check with the
full ZSET metadata resolution routine (i.e., call the codepath that fully
resolves ZSET meta including delta-only state — e.g., use the existing
load/resolve ZSET meta helper used elsewhere instead of ExistsAt), then run the
string-type guard (redisStrKey) and hasExpired check against the resolved meta;
if no resolved ZSET meta exists return the slow-path (not an empty result) or
surface WRONGTYPE when a string key is present.

In `@adapter/redis_lua_context.go`:
- Around line 2507-2524: zsetScoreScanBounds currently sets startKey to
ZSetScoreRangeScanPrefix(key, minBound.score) for a value minBound, but does not
exclude rows equal to that score at the byte-level; move exclusive-min handling
into the start key by using the prefix end so the scan begins strictly after the
score. Concretely, in zsetScoreScanBounds when minBound.kind == zBoundValue and
minBound.exclusive is true, set startKey =
store.PrefixScanEnd(store.ZSetScoreRangeScanPrefix(key, minBound.score))
(otherwise keep the existing ZSetScoreRangeScanPrefix), so exclusive lower
bounds are enforced by the scan range rather than post-filtering; keep the
existing logic for zBoundNegInf and for endKey computation unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a149fec3-2bf0-43cb-9b83-3ca65895b0e4

📥 Commits

Reviewing files that changed from the base of the PR and between 6597539 and 5e124b9.

📒 Files selected for processing (3)
  • adapter/redis_compat_commands.go
  • adapter/redis_lua_collection_fastpath_test.go
  • adapter/redis_lua_context.go

Comment thread adapter/redis_compat_commands.go Outdated
Comment thread adapter/redis_lua_context.go Outdated
@bootjp
Owner Author

bootjp commented Apr 21, 2026

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements a streaming fast-path for ZRANGEBYSCORE and ZREVRANGEBYSCORE within Lua scripts, optimizing performance by scanning the score index directly rather than loading the entire sorted set. It also introduces request-scoped context propagation to ensure storage operations respect deadlines and cancellations. Review feedback highlights several critical improvements: ensuring the fast path falls back to the slow path if scan limits are reached to avoid incomplete results, validating offset and limit parameters, preventing potential integer overflows in scan limit calculations, and reducing O(N) allocations during entry decoding.

Comment thread adapter/redis_compat_commands.go Outdated
if err != nil {
return nil, false, err
}
return r.finalizeZSetFastRange(ctx, key, kvs, offset, limit, scoreFilter, readTS)

high

The scanLimit should be passed to finalizeZSetFastRange to allow checking if the result set is complete or if it was truncated by the maxWideScanLimit cap.

Suggested change
return r.finalizeZSetFastRange(ctx, key, kvs, offset, limit, scoreFilter, readTS)
return r.finalizeZSetFastRange(ctx, key, kvs, offset, limit, scanLimit, scoreFilter, readTS)

Comment on lines +1827 to +1830
func (r *RedisServer) finalizeZSetFastRange(
ctx context.Context, key []byte, kvs []*store.KVPair,
offset, limit int, scoreFilter func(float64) bool, readTS uint64,
) ([]redisZSetEntry, bool, error) {

high

Update the function signature to accept scanLimit for truncation detection.

Suggested change
func (r *RedisServer) finalizeZSetFastRange(
ctx context.Context, key []byte, kvs []*store.KVPair,
offset, limit int, scoreFilter func(float64) bool, readTS uint64,
) ([]redisZSetEntry, bool, error) {
func (r *RedisServer) finalizeZSetFastRange(
ctx context.Context, key []byte, kvs []*store.KVPair,
offset, limit, scanLimit int, scoreFilter func(float64) bool, readTS uint64,
) ([]redisZSetEntry, bool, error) {

if expired {
return nil, true, nil
}
return entries, true, nil

high

The fast path must return hit=false if the scan limit was reached but the request was not fully satisfied. This ensures the system does not silently drop entries or return incomplete results, maintaining state consistency between the fast and slow paths.

isComplete := len(kvs) < scanLimit || (limit >= 0 && len(entries) >= limit)
return entries, isComplete, nil
References
  1. Avoid silently dropping entries during serialization or normalization. If placeholder or incomplete entries are valid in some contexts, provide a specific, more lenient handling path for them to ensure they are preserved on disk and avoid state inconsistencies.
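The completeness rule this review asks for can be sketched in isolation — fastPathTruncated here is a hypothetical stand-alone version of the check, not the repo's zsetFastPathTruncated helper:

```go
package main

import "fmt"

// fastPathTruncated reports whether the fast-path result may be
// incomplete: the raw scan filled its quota AND the caller's request is
// still unsatisfied (unbounded limit, or fewer entries than requested).
func fastPathTruncated(scanned, collected, scanLimit, limit int) bool {
	if scanned < scanLimit {
		return false // scanner ran out of rows: result is complete
	}
	if limit >= 0 && collected >= limit {
		return false // request satisfied despite hitting the quota
	}
	return true // quota hit with an unsatisfied request: fall back
}

func main() {
	fmt.Println(fastPathTruncated(100, 10, 100, 10)) // quota hit, limit met
	fmt.Println(fastPathTruncated(100, 3, 100, 10))  // quota hit, limit unmet
	fmt.Println(fastPathTruncated(40, 3, 100, -1))   // unbounded, scan exhausted
}
```

Returning hit=false on truncation keeps the fast path conservative: the slow path produces the authoritative answer rather than a silently shortened one.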

Comment on lines +2407 to +2410
options, err := parseZRangeByScoreOptions(args, reverse)
if err != nil {
return luaReply{}, err
}

high

The offset and limit values should be validated to be non-negative (except for the default -1 limit). This prevents unexpected behavior and ensures external requests are bounded to prevent OOM or panics.

Suggested change
options, err := parseZRangeByScoreOptions(args, reverse)
if err != nil {
return luaReply{}, err
}
options, err := parseZRangeByScoreOptions(args, reverse)
if err != nil {
return luaReply{}, err
}
if options.offset < 0 || (options.limit < 0 && options.limit != -1) {
return luaReply{}, errors.New("ERR syntax error")
}
References
  1. To prevent unbounded memory growth and potential OOM issues, apply a fixed bound to collections that can grow from external requests, such as pending configuration changes. Reject new requests when the bound is reached.

Comment thread adapter/redis_compat_commands.go Outdated
if limit < 0 {
return maxWideScanLimit
}
scanLimit := offset + limit

medium

The addition offset + limit can overflow if the inputs are very large. Capping offset before the addition ensures the result remains within safe bounds for the subsequent comparison, preventing potential memory issues from oversized scan requests.

Suggested change
scanLimit := offset + limit
if offset > maxWideScanLimit {
return maxWideScanLimit
}
scanLimit := offset + limit
References
  1. To prevent unbounded memory growth and potential OOM issues, apply a fixed bound to collections that can grow from external requests, such as pending configuration changes. Reject new requests when the bound is reached.

entries := make([]redisZSetEntry, 0, len(kvs))
skipped := 0
for _, kv := range kvs {
score, member, ok := store.ExtractZSetScoreAndMember(kv.Key, key)

medium

store.ExtractZSetScoreAndMember re-allocates the scan prefix on every iteration of the loop. Since the prefix is constant for the entire scan, it should be calculated once outside the loop to avoid O(N) allocations.

…nly zset

Addresses three independent correctness findings from the CodeRabbit
and gemini reviews on PR #570.

1. Exclusive-min edge lost members under LIMIT (CodeRabbit, Major).
   `ZRANGEBYSCORE key (2 +inf LIMIT 0 1` used to hit members at
   score=2 in the scan (using ZSetScoreRangeScanPrefix as startKey),
   let scoreInRange drop them post-scan, consume the offset+limit
   budget on those drops, and return empty even when a score=3
   match existed. Fix: move the exclusive-min edge into the byte
   start key via PrefixScanEnd(scorePrefix) so the scan skips the
   whole score=min equivalence class before spending budget.

2. Scan truncation could silently drop entries (gemini, HIGH).
   finalizeZSetFastRange now takes scanLimit and refuses to return
   a possibly-incomplete result. When the raw scanner returned
   exactly scanLimit rows AND the caller's request is still
   unsatisfied (unbounded limit, or collected < limit), return
   hit=false so the slow full-load path produces the authoritative
   answer. Factored into a zsetFastPathTruncated helper.

3. Empty-result tail missed delta-only zsets and the string-
   priority guard (CodeRabbit, Major). zsetRangeEmptyFastResult
   now uses resolveZSetMeta (which recognises both the base meta
   key and delta keys) instead of a plain ExistsAt on ZSetMetaKey;
   a fresh zset whose base meta has not been persisted yet is no
   longer mistaken for "no zset here". It also now runs the
   string-priority guard so a corrupted redisStrKey + zset meta
   surfaces WRONGTYPE via the slow path rather than an empty array.

4. Defensive bounds on offset / limit (gemini, HIGH). cmdZRangeByScore
   rejects negative offset and non-sentinel negative limit with
   `ERR syntax error` before dispatch, keeping the fast path's
   arithmetic well-defined and maxWideScanLimit meaningful as a
   memory bound against hostile script input.

Tests: 4 new regressions in redis_lua_collection_fastpath_test.go
pin each finding. All existing ZRANGEBYSCORE tests still pass.
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Addressed the CodeRabbit (Major ×2) and gemini (High ×2) findings (a38b8172).

Fixes:

  1. Exclusive-min bound (CodeRabbit Major): with ZRANGEBYSCORE key (2 +inf LIMIT 0 1 and many members at score=2, the scan burned the offset+limit budget on score=2 rows and missed matches at score ≥ 3. zsetScoreScanBounds now moves the exclusive min to PrefixScanEnd(scorePrefix), so every score=min row is skipped at scan time.

  2. Scan-truncation detection (gemini High): finalizeZSetFastRange now receives scanLimit; when the raw scan hits the cap and the request is still unsatisfied, it returns hit=false and falls back to the slow path, preventing silent truncation. Consolidated into the zsetFastPathTruncated helper.

  3. Empty-result tail missed delta-only zsets (CodeRabbit Major): use resolveZSetMeta (which recognizes both the base meta key and delta keys) instead of ExistsAt(ZSetMetaKey). The priority guard now also runs on the empty tail, so a corrupted redisStrKey + zset meta surfaces WRONGTYPE instead of an empty array.

  4. Defensive offset/limit checks (gemini High): cmdZRangeByScore rejects a negative offset and any non-sentinel negative limit early with ERR syntax error, preventing OOM / panics from hostile script input.

Added 4 regression tests (exclusive-min with many members at the bound / truncation / fresh-zset empty range / negative offset). All existing tests still pass.

@bootjp
Owner Author

bootjp commented Apr 21, 2026

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a streaming fast-path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands within Lua scripts. The implementation adds zsetRangeByScoreFast and associated helpers to perform bounded score-index scans for wide-column zsets, significantly improving performance for patterns like delay-queue polling. It also updates the Lua script context to propagate request-scoped deadlines and includes extensive regression tests. Review feedback suggests simplifying a limit check condition and removing an unused parameter in the scan bounds calculation.

Comment thread adapter/redis_lua_context.go Outdated
// guards the fast path's offset/limit arithmetic (zsetFastScanLimit
// + decodeZSetScoreRange) against panics on hostile script input
// and keeps maxWideScanLimit meaningful as a memory bound.
if options.offset < 0 || (options.limit < 0 && options.limit != -1) {

medium

The condition options.limit < 0 && options.limit != -1 is redundant because options.limit is an integer and the only negative value allowed is -1. This can be simplified to options.limit < -1.

Suggested change
if options.offset < 0 || (options.limit < 0 && options.limit != -1) {
if options.offset < 0 || options.limit < -1 {

Comment thread adapter/redis_lua_context.go Outdated
func zsetScoreScanBounds(key []byte, minBound, maxBound zScoreBound, reverse bool) (startKey, endKey []byte) {
full := store.ZSetScoreScanPrefix(key)
fullEnd := store.PrefixScanEnd(full)
_ = reverse

medium

The variable reverse is unused in zsetScoreScanBounds. If it is not intended to be used, it should be removed from the function signature to avoid confusion. Removing unused code paths improves code clarity and maintainability.

func zsetScoreScanBounds(key []byte, minBound, maxBound zScoreBound) (startKey, endKey []byte) {
References
  1. Remove dead or unreachable code paths to improve code clarity and maintainability.

Addresses two Medium gemini notes on PR #570.

- cmdZRangeByScore's bound check is simplified: limit is an int and
  the only negative value the parser emits is -1 ("no limit"), so
  `options.limit < 0 && options.limit != -1` collapses to
  `options.limit < -1`.
- zsetScoreScanBounds never branched on reverse (forward and reverse
  scans share the same [start, end) endpoints; direction only
  affects the iterator). Drop the unused parameter and update the
  single call site.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@adapter/redis_compat_commands.go`:
- Around line 1900-1908: zsetFastScanLimit currently computes scanLimit :=
offset + limit which can overflow; change the logic to first handle limit < 0 as
before, then if offset >= maxWideScanLimit return maxWideScanLimit, and
otherwise check if limit > maxWideScanLimit - offset (or limit would make
offset+limit exceed maxWideScanLimit) and return maxWideScanLimit in that case;
only when those guards pass compute and return offset + limit. This uses the
existing function zsetFastScanLimit and constant maxWideScanLimit to avoid
signed integer overflow before the clamp.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 117685bc-0242-446a-9bb4-0bbf5b5a83f6

📥 Commits

Reviewing files that changed from the base of the PR and between 5e124b9 and ca3ecf6.

📒 Files selected for processing (3)
  • adapter/redis_compat_commands.go
  • adapter/redis_lua_collection_fastpath_test.go
  • adapter/redis_lua_context.go
✅ Files skipped from review due to trivial changes (1)
  • adapter/redis_lua_context.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • adapter/redis_lua_collection_fastpath_test.go

Comment thread adapter/redis_compat_commands.go Outdated
@bootjp
Owner Author

bootjp commented Apr 21, 2026

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements a streaming fast path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands, allowing score-index scans for wide-column sorted sets without loading the entire collection into memory. Review feedback identifies a potential integer overflow in the scan limit calculation and a behavior mismatch with Redis regarding negative LIMIT counts. Additionally, suggestions were made to optimize memory allocation for result entries and to implement an early exit when the limit is zero for improved efficiency.

Comment thread adapter/redis_compat_commands.go Outdated
Comment on lines +1904 to +1907

```go
scanLimit := offset + limit
if scanLimit > maxWideScanLimit {
	return maxWideScanLimit
}
```
Contributor


high

The addition offset + limit can overflow if both values are large (e.g., provided via a Lua script), resulting in a negative scanLimit. This would cause zsetRangeByScoreFast to incorrectly return an empty result via zsetRangeEmptyFastResult instead of falling back to the slow path for authoritative error handling. Applying a bound here prevents logic errors and aligns with repository rules regarding bounded collections.

Suggested change

```diff
-scanLimit := offset + limit
-if scanLimit > maxWideScanLimit {
-	return maxWideScanLimit
-}
+// Use uint64 to prevent overflow during addition.
+if uint64(offset)+uint64(limit) > uint64(maxWideScanLimit) {
+	return maxWideScanLimit
+}
+scanLimit := offset + limit
```
References
  1. To prevent unbounded memory growth and potential OOM issues, apply a fixed bound to collections that can grow from external requests.

```go
	return nil, false, err
}
scanLimit := zsetFastScanLimit(offset, limit)
if scanLimit <= 0 || bytes.Compare(startKey, endKey) >= 0 {
```
Contributor


medium

When limit is 0, the fast path can avoid the score index scan entirely and jump straight to the existence check and TTL probe. This is a minor efficiency gain for queries that only care about existence or type validation.

Suggested change

```diff
-if scanLimit <= 0 || bytes.Compare(startKey, endKey) >= 0 {
+if limit == 0 || scanLimit <= 0 || bytes.Compare(startKey, endKey) >= 0 {
```

```go
entries := make([]redisZSetEntry, 0, len(kvs))
skipped := 0
for _, kv := range kvs {
	score, member, ok := store.ExtractZSetScoreAndMember(kv.Key, key)
```
Contributor


medium

Allocating entries with len(kvs) capacity can be wasteful if offset is large and limit is small, as len(kvs) can be up to 100,001. Calculating a tighter capacity based on the requested range and the available data reduces memory pressure while maintaining the efficiency of pre-allocation.

```go
if offset >= len(kvs) {
	return nil
}
allocSize := len(kvs) - offset
if limit >= 0 && limit < allocSize {
	allocSize = limit
}
entries := make([]redisZSetEntry, 0, allocSize)
```
References
  1. Pre-allocating a buffer is acceptable if a strictly enforced maximum size limit is checked before allocation and the entire data must be materialized in memory.

Comment thread adapter/redis_lua_context.go Outdated
Comment on lines +2417 to +2419
```go
if options.offset < 0 || options.limit < -1 {
	return luaReply{}, errors.New("ERR syntax error")
}
```
Contributor


medium

Redis ZRANGEBYSCORE returns an empty array for a negative count in the LIMIT clause, rather than a syntax error. Additionally, the use of -1 as a sentinel for "no limit" in the underlying parser conflicts with an explicit user input of LIMIT offset -1. While the sentinel conflict is deeper in the parser, we should at least avoid returning an incorrect error for negative limits.

```go
if options.offset < 0 {
	return luaReply{}, errors.New("ERR syntax error")
}
// Redis returns empty array for negative LIMIT count, not an error.
if options.limit < 0 && options.limit != -1 {
	options.limit = 0
}
```

Apply three correctness fixes raised on PR #570 reviews:

- zsetFastScanLimit: guard against signed-integer overflow on hostile
  offset+limit input. A wrap would produce a negative scanLimit and
  misroute live zsets into the empty-result tail. Compare bounds
  before adding instead.
- decodeZSetScoreRange: tight pre-allocation via zsetDecodeAllocSize,
  (len(kvs) - offset) capped by limit. Avoids up to maxWideScanLimit
  entries of wasted slice capacity for small windows at large offsets.
- cmdZRangeByScore: Redis treats any negative LIMIT count as "no
  limit". Coerce options.limit < 0 -> -1 instead of rejecting with
  ERR syntax error, so Lua scripts using LIMIT 0 -1 behave exactly
  like the real server.

Add TestLua_ZRANGEBYSCORE_NegativeLimitReturnsAll pinning the
negative-limit semantics.
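The overflow guard described above can be sketched as a standalone function. This is a minimal sketch, not the repository's code: the value of `maxWideScanLimit` and the exact branch ordering here are assumptions.

```go
package main

import "fmt"

// Illustrative stand-in for the repository's scan cap; the actual
// value is an assumption.
const maxWideScanLimit = 100000

// zsetFastScanLimit sketches the overflow-safe clamp: compare limit
// against the remaining headroom (maxWideScanLimit - offset) before
// adding, so hostile offset+limit input can never wrap to a negative
// scanLimit and misroute a live zset into the empty-result tail.
func zsetFastScanLimit(offset, limit int) int {
	if limit < 0 {
		// A negative LIMIT count means "no limit" in Redis semantics.
		return maxWideScanLimit
	}
	if offset >= maxWideScanLimit || limit > maxWideScanLimit-offset {
		return maxWideScanLimit
	}
	return offset + limit
}

func main() {
	fmt.Println(zsetFastScanLimit(10, 20))       // 30
	fmt.Println(zsetFastScanLimit(1<<62, 1<<62)) // clamped, no wraparound
	fmt.Println(zsetFastScanLimit(0, -1))        // "no limit" -> cap
}
```

The key point is that `limit > maxWideScanLimit-offset` is always safe to evaluate once `offset < maxWideScanLimit` is known, whereas `offset+limit > maxWideScanLimit` is not.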
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Addressed the latest review round (97bc74f):

  • gemini HIGH / CodeRabbit Minor — zsetFastScanLimit signed-integer overflow: compare limit > maxWideScanLimit - offset before adding, so math.MaxInt + math.MaxInt no longer wraps to a negative scanLimit and misroutes live zsets into the empty-result tail.
  • gemini Medium — decodeZSetScoreRange over-allocation: extracted zsetDecodeAllocSize(kvLen, offset, limit) that returns (kvLen - offset) capped by limit, saving up to maxWideScanLimit wasted slice capacity on small windows at large offsets.
  • gemini Medium — negative LIMIT parity with upstream Redis: cmdZRangeByScore now coerces options.limit < 0 → -1 instead of returning ERR syntax error, so ZRANGEBYSCORE k -inf +inf LIMIT 0 -1 in Lua behaves exactly like the real server.

New regression test TestLua_ZRANGEBYSCORE_NegativeLimitReturnsAll pins the negative-limit semantics.

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements a streaming fast path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands, allowing range queries on wide-column sorted sets to be performed via index scans rather than loading the entire set. The implementation includes truncation detection to maintain correctness and propagates request-scoped contexts to the storage layer. Reviewers suggested optimizations for handling zero-limit cases and improving the decoding loop's efficiency when large offsets are provided.

```go
// `scanLimit <= 0` branch to misroute a live zset into the
// empty-result tail.
func zsetFastScanLimit(offset, limit int) int {
	if limit < 0 {
```
Contributor


medium

When limit is 0, the caller expects an empty result regardless of the offset. In this case, we can avoid scanning the score index entirely by returning a scanLimit of 0. This will trigger the short-circuit in zsetRangeByScoreFast and call zsetRangeEmptyFastResult, which correctly handles existence and type checks without the overhead of a prefix scan.

Suggested change

```diff
-if limit < 0 {
+if limit == 0 {
+	return 0
+}
+if limit < 0 {
```

Comment thread adapter/redis_compat_commands.go Outdated
Comment on lines +1969 to +1971
```go
if limit >= 0 && len(entries) >= limit {
	break
}
```
Contributor


medium

The check for limit saturation is currently performed after skipping offset entries. If limit is 0, the loop will still iterate offset times before breaking. While the optimization in zsetFastScanLimit handles this for the common case, moving this check to the top of the loop (or checking it before the skip logic) would be more robust and efficient for cases where limit is small relative to offset.

```go
if limit >= 0 && len(entries) >= limit {
	break
}
if skipped < offset {
	skipped++
	continue
}
```

Two minor efficiency fixes from PR #570 review:

- zsetFastScanLimit: when limit==0 return 0 unconditionally. The
  caller's scanLimit <= 0 branch already routes to
  zsetRangeEmptyFastResult, which still runs resolveZSetMeta for
  proper WRONGTYPE / existence disambiguation. Avoids an otherwise
  useless full-quota score-index scan.
- decodeZSetScoreRange: check limit saturation before the offset
  skip. A small limit combined with a large offset now exits the
  decode loop immediately instead of burning offset iterations on
  the skip branch. Correctness is unchanged: once len(entries) >=
  limit we are done regardless of remaining skip budget.
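The loop-ordering fix can be illustrated with a self-contained sketch; `decodeWindow` and the `entry` type are hypothetical names standing in for the repo's decode helpers.

```go
package main

import "fmt"

type entry struct {
	member string
	score  float64
}

// decodeWindow sketches the decode loop with the limit-saturation
// check hoisted above the offset skip: once the output reaches limit,
// the loop exits immediately instead of spending iterations on the
// skip branch. limit < 0 means "no limit".
func decodeWindow(raw []entry, offset, limit int) []entry {
	out := make([]entry, 0)
	skipped := 0
	for _, e := range raw {
		if limit >= 0 && len(out) >= limit {
			break // saturated: done regardless of remaining skip budget
		}
		if skipped < offset {
			skipped++
			continue
		}
		out = append(out, e)
	}
	return out
}

func main() {
	raw := []entry{{"a", 1}, {"b", 2}, {"c", 3}, {"d", 4}}
	fmt.Println(decodeWindow(raw, 1, 2)) // [{b 2} {c 3}]
	fmt.Println(decodeWindow(raw, 3, 0)) // []: limit 0 exits on iteration one
}
```

With the checks in the opposite order, `decodeWindow(raw, 3, 0)` would burn three iterations in the skip branch before producing the same empty result.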
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Addressed the latest review round (d883818):

  • gemini Medium — zsetFastScanLimit limit==0 short-circuit: return 0 unconditionally when limit == 0; the caller's scanLimit <= 0 branch already routes to zsetRangeEmptyFastResult, which still runs resolveZSetMeta for correct WRONGTYPE / existence disambiguation. No wasted score-index scan for LIMIT 0 0.
  • gemini Medium — decodeZSetScoreRange loop ordering: hoisted the len(entries) >= limit check above the offset-skip branch, so a small limit combined with a large offset exits the decode loop immediately instead of burning offset iterations on the skip path.

Correctness unchanged in both cases.

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements a streaming fast path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands, enabling the system to stream entries directly from the score index instead of loading the entire sorted set into memory. The implementation includes logic for index scanning, pagination, and truncation detection, alongside the propagation of request-scoped contexts to the storage layer. Feedback indicates a correctness issue in the mapping of infinity sentinels within zsetScoreScanBounds, where specific ranges such as -inf to -inf result in empty scans instead of returning members with those scores.

Comment on lines +2532 to +2562
```go
func zsetScoreScanBounds(key []byte, minBound, maxBound zScoreBound) (startKey, endKey []byte) {
	full := store.ZSetScoreScanPrefix(key)
	fullEnd := store.PrefixScanEnd(full)
	switch minBound.kind {
	case zBoundNegInf:
		startKey = full
	case zBoundValue:
		scorePrefix := store.ZSetScoreRangeScanPrefix(key, minBound.score)
		if minBound.inclusive {
			startKey = scorePrefix
		} else {
			startKey = store.PrefixScanEnd(scorePrefix)
		}
	case zBoundPosInf:
		startKey = fullEnd
	}
	switch maxBound.kind {
	case zBoundNegInf:
		endKey = full
	case zBoundValue:
		scorePrefix := store.ZSetScoreRangeScanPrefix(key, maxBound.score)
		if maxBound.inclusive {
			endKey = store.PrefixScanEnd(scorePrefix)
		} else {
			endKey = scorePrefix
		}
	case zBoundPosInf:
		endKey = fullEnd
	}
	return startKey, endKey
}
```
Contributor


medium

The current mapping of zBoundNegInf and zBoundPosInf sentinels in zsetScoreScanBounds creates a correctness discrepancy with Redis for ranges where infinity is used as a specific score value rather than an open-ended range marker.

Specifically, ZRANGEBYSCORE key -inf -inf will return an empty result because startKey and endKey both resolve to full, resulting in an empty scan range [full, full). However, in Redis, this command should return members with a score of -inf. Similarly, ZRANGEBYSCORE key +inf +inf will be empty because both bounds resolve to fullEnd.

While these are edge cases, they can be addressed by mapping the sentinels to the actual prefix of the encoded infinity value when they appear on the "wrong" side of the range (e.g., -inf as a maxBound or +inf as a minBound). This ensures that valid entries are not silently dropped during the normalization of scan bounds.

References
  1. Avoid silently dropping entries during serialization or normalization. Ensure that placeholder or specific sentinel values are handled to preserve data and avoid state inconsistencies.

PR #570 review caught a correctness gap in zsetScoreScanBounds:
when -Inf appeared as maxBound or +Inf as minBound, both sentinels
were mapped to the empty-range endpoints (full / fullEnd), so
ZRANGEBYSCORE key -inf -inf and ZRANGEBYSCORE key +inf +inf returned
empty instead of the members Redis would return.

- maxBound = -Inf now maps to PrefixScanEnd(ScoreRangeScanPrefix(-Inf))
- minBound = +Inf now maps to ScoreRangeScanPrefix(+Inf)

Members with score = +/-Inf are legitimate Redis values (ZADD key
+inf m is valid), so silently dropping them was a data-loss bug in
the fast path.

Add regression tests TestLua_ZRANGEBYSCORE_NegInfExactMatch and
TestLua_ZRANGEBYSCORE_PosInfExactMatch pinning the behaviour.
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Addressed the latest review finding (bb6ba4f):

  • gemini Medium — zsetScoreScanBounds silently drops +/-Inf exact matches: -Inf as maxBound and +Inf as minBound were both mapping to the empty-range endpoints (full / fullEnd), so ZRANGEBYSCORE key -inf -inf / +inf +inf returned empty instead of the legitimate members Redis would return (ZADD key +inf m is valid).
    • maxBound = -Inf → PrefixScanEnd(ZSetScoreRangeScanPrefix(key, -Inf))
    • minBound = +Inf → ZSetScoreRangeScanPrefix(key, +Inf)

Added regression tests TestLua_ZRANGEBYSCORE_NegInfExactMatch and TestLua_ZRANGEBYSCORE_PosInfExactMatch pinning the corrected behaviour.

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a streaming fast path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands within Lua scripts, allowing the system to stream score-index entries instead of loading entire sorted sets into memory. The changes include comprehensive logic for offset/limit pagination, score filtering, and TTL checks, along with a suite of regression tests. Additionally, the luaScriptContext now captures and propagates request-scoped contexts to storage operations to respect deadlines and cancellations. Feedback was provided regarding the efficiency of zsetFastScanLimit when handling very large offsets, suggesting a potential short-circuit to avoid wasted resource consumption.

Comment on lines +1906 to +1925
```go
func zsetFastScanLimit(offset, limit int) int {
	// limit == 0: the caller wants zero entries regardless of offset.
	// Return 0 so the caller's `scanLimit <= 0` branch routes to the
	// empty-result tail (which still runs resolveZSetMeta for proper
	// WRONGTYPE / existence disambiguation) instead of a pointless
	// full-quota scan.
	if limit == 0 {
		return 0
	}
	if limit < 0 {
		return maxWideScanLimit
	}
	if offset >= maxWideScanLimit {
		return maxWideScanLimit
	}
	if limit > maxWideScanLimit-offset {
		return maxWideScanLimit
	}
	return offset + limit
}
```
Contributor


medium

The zsetFastScanLimit function correctly clamps the scan window to maxWideScanLimit to prevent OOM, which aligns with the rule to apply fixed bounds to collections that can grow from external requests. However, when offset is very large, the scan will still fetch up to maxWideScanLimit rows only to have them all skipped. Consider if we should short-circuit and return hit=false immediately if offset exceeds a certain threshold to avoid wasted resource consumption, or if we should rely on the positional iteration follow-up mentioned in the PR description to manage this complexity.

References
  1. To prevent unbounded memory growth and potential OOM issues, apply a fixed bound to collections that can grow from external requests, such as pending configuration changes. Reject new requests when the bound is reached.
  2. During large refactorings, limit the scope of a single pull request. Defer related but wide-reaching API changes to dedicated follow-up pull requests to manage complexity and avoid scope creep.

Address PR #570 review: when offset >= maxWideScanLimit the fast
path would scan maxWideScanLimit rows and skip all of them, then
trigger the truncation guard and fall back anyway -- pure waste.
Short-circuit hit=false immediately in zsetRangeByScoreFast so the
slow path handles it directly.

Add TestLua_ZRANGEBYSCORE_LargeOffsetShortCircuit pinning the
large-offset path (offset=200000 on a 3-member zset returns empty).
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Addressed the latest review finding (9ef65d0):

  • gemini Medium — large-offset wasted scan: when offset >= maxWideScanLimit, the fast path was scanning maxWideScanLimit rows and skipping all of them, then hitting the truncation guard and falling back anyway. Added an explicit hit=false short-circuit at the top of zsetRangeByScoreFast so the slow path answers directly without the redundant score-index scan.

Added TestLua_ZRANGEBYSCORE_LargeOffsetShortCircuit pinning the behaviour (LIMIT 200000 10 on a 3-member zset returns empty cleanly).

/gemini review

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements a fast-path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands within Lua scripts, enabling entry streaming from the score index to avoid full sorted set loads. Key additions include logic for bounded index scanning, eligibility validation, and comprehensive test coverage for edge cases like exclusive bounds and TTL. Review feedback suggests optimizing slice capacity allocation when scores are included and simplifying redundant conditional logic in the empty-result handling path.

```go
func decodeZSetScoreRange(
	key []byte, kvs []*store.KVPair, offset, limit int, scoreFilter func(float64) bool,
) []redisZSetEntry {
	entries := make([]redisZSetEntry, 0, zsetDecodeAllocSize(len(kvs), offset, limit))
```
Contributor


medium

The capacity allocation for the entries slice can be optimized when withScores is true. Currently, zsetDecodeAllocSize returns the number of members, but if scores are included, the final luaReply array will be twice that size. Pre-allocating the correct capacity here or in zsetRangeReply would reduce slice growth overhead. Ensure a strictly enforced maximum size limit is checked before allocation to prevent OOM issues.

References
  1. When deserializing data with a size header, pre-allocating a buffer is acceptable if a strictly enforced maximum size limit is checked before allocation and the entire data must be materialized in memory.

Comment thread adapter/redis_compat_commands.go Outdated
Comment on lines +2027 to +2030
```go
if expired {
	return nil, true, nil
}
return nil, true, nil
```
Contributor


medium

The expired check here is redundant as both branches return nil, true, nil. This unreachable logic should be simplified to improve code clarity and maintainability.

Suggested change

```diff
-if expired {
-	return nil, true, nil
-}
-return nil, true, nil
+if _, expErr := r.hasExpired(ctx, key, readTS, true); expErr != nil {
+	return nil, false, cockerrors.WithStack(expErr)
+}
+return nil, true, nil
```
References
  1. Remove dead or unreachable code paths to improve code clarity and maintainability.

Two minor cleanups from PR #570 review:

- zsetRangeReply: WITHSCORES doubles the reply width (member + score
  per entry), so allocating for len(entries) guaranteed an internal
  grow on the final slice. Pre-allocate 2x when withScores is true.
  Bounded by maxWideScanLimit at the fast-path layer, so the
  allocation remains capped.
- zsetRangeEmptyFastResult: the expired/non-expired branches both
  returned (nil, true, nil), so the switch was dead. Collapse to a
  single tail return; keep the hasExpired call for its error-
  surfacing side effect so TTL-resolution errors still propagate.
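The WITHSCORES sizing fix amounts to doubling the reply capacity up front. A minimal sketch with hypothetical names (`zEntry`, `zsetRangeReplySketch`) and strings standing in for the repo's luaReply values:

```go
package main

import (
	"fmt"
	"strconv"
)

type zEntry struct {
	Member string
	Score  float64
}

// zsetRangeReplySketch pre-sizes the reply slice: with WITHSCORES each
// entry contributes two slots (member, then score), so allocating for
// len(entries) alone would guarantee one internal grow on the final
// append pass.
func zsetRangeReplySketch(entries []zEntry, withScores bool) []string {
	size := len(entries)
	if withScores {
		size *= 2 // member + score per entry
	}
	out := make([]string, 0, size)
	for _, e := range entries {
		out = append(out, e.Member)
		if withScores {
			out = append(out, strconv.FormatFloat(e.Score, 'f', -1, 64))
		}
	}
	return out
}

func main() {
	es := []zEntry{{"a", 1}, {"b", 2.5}}
	fmt.Println(zsetRangeReplySketch(es, true)) // [a 1 b 2.5]
}
```

The allocation stays bounded because the entry count itself is already capped by maxWideScanLimit at the fast-path layer.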
@bootjp
Owner Author

bootjp commented Apr 21, 2026

Addressed the latest review findings (b718dc9):

  • gemini Medium — WITHSCORES reply under-allocation: zsetRangeReply sized out to len(entries) but each entry emits 2 replies when withScores=true. Pre-allocate 2 * len(entries) in that case. Bounded by maxWideScanLimit at the fast-path layer so allocation remains capped.
  • gemini Medium — dead expired branch: zsetRangeEmptyFastResult had if expired { return nil,true,nil } followed by return nil,true,nil — identical returns. Collapsed to a single tail return; kept the hasExpired call for its error-surfacing side effect.

/gemini review


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
adapter/redis_lua_collection_fastpath_test.go (1)

636-1094: LGTM — coverage matches the regression history in the PR description.

Tests pin the key fixes: exclusive-min with many-at-bound (line 867), resolveZSetMeta empty-tail (line 948), negative offset rejection (969), negative-limit Redis semantics (988), ±Inf exact-match (1016, 1072), and large-offset short-circuit (1045). WITHSCORES, in-script ZADD visibility, TTL expiry, WRONGTYPE, and reverse iteration are all covered.

One nit (optional): the many similar createNode + redis.NewClient preambles per test would consolidate well into a table-driven shape with a shared setup helper, which the coding guidelines prefer for *_test.go. Deferring is fine given each test asserts a distinct scenario.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@adapter/redis_lua_collection_fastpath_test.go` around lines 636 - 1094, Many
tests repeat the same setup boilerplate (createNode + redis.NewClient + defer
shutdown/Close) — factor this into a shared test helper (e.g.,
newTestRedisClient or setupLuaTest) and call it from each test
(TestLua_ZRANGEBYSCORE_FastPathHit, TestLua_ZRANGEBYSCORE_FastPathHitWithScores,
TestLua_ZRANGEBYSCORE_LimitOffset, etc.); the helper should encapsulate
createNode(...), returning the client, ctx, and a teardown function that
performs shutdown(nodes) and rdb.Close(), and update each test to call the
helper and defer its teardown.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 78a64b74-ea0c-48eb-b9d7-0d65c1bbae0f

📥 Commits

Reviewing files that changed from the base of the PR and between ca3ecf6 and b718dc9.

📒 Files selected for processing (3)
  • adapter/redis_compat_commands.go
  • adapter/redis_lua_collection_fastpath_test.go
  • adapter/redis_lua_context.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • adapter/redis_lua_context.go

@bootjp
Owner Author

bootjp commented Apr 21, 2026

CodeRabbit review acknowledged — thanks for the LGTM.

The test-helper consolidation nit is a fair style point but I'm deferring it intentionally: each test body is narrow and the preamble being visible inline keeps the regression-pinning story readable next to the assertion. Happy to revisit in a follow-up refactor PR if the file grows further.

No other actionable findings. Marking the review cycle complete on my side.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a streaming fast path for ZRANGEBYSCORE and ZREVRANGEBYSCORE commands, enabling direct score-index queries instead of loading entire sorted sets. The implementation includes comprehensive logic for pagination, score filtering, and fallback mechanisms for script-local mutations or legacy encodings. Additionally, the luaScriptContext was updated to propagate request-scoped contexts to storage operations. Review feedback suggests refining the error message for negative offsets to improve compatibility with standard Redis error responses.

Comment thread adapter/redis_lua_context.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@bootjp bootjp enabled auto-merge April 21, 2026 12:10
@bootjp bootjp merged commit f1dcc14 into main Apr 21, 2026
8 checks passed
@bootjp bootjp deleted the perf/redis-lua-zrange-streaming branch April 21, 2026 12:12
