feat: implement 20 missing Redis commands (COPY, bit ops, SORT, GEO*, CONFIG REWRITE, CLIENT PAUSE, MEMORY) #68
Conversation
Add COPY source destination [REPLACE] with full Redis 6.2 semantics:
- Returns 1 on success, 0 if dest exists without REPLACE
- Clones entry including TTL metadata
- DB option parsed but returns explicit unsupported error
- 6 unit tests, entries in test-commands.sh and test-consistency.sh
Full Redis-compatible bitmap commands:
- GETBIT/SETBIT: per-bit read/write with big-endian ordering
- BITCOUNT: popcount with byte/bit range modes (Redis 7.0 BIT option)
- BITOP: AND/OR/XOR/NOT with unequal-length zero-padding
- BITPOS: first 0/1 search with byte/bit range modes
- Read-only dispatch variants for GETBIT, BITCOUNT, BITPOS
- 21 unit tests, entries in test-commands.sh and test-consistency.sh
Full Redis-compatible SORT for lists, sets, and sorted sets:
- Numeric (default) and ALPHA lexicographic sorting
- ASC/DESC ordering
- LIMIT offset count pagination
- BY pattern for external key-based sorting
- GET pattern for external key retrieval (# = element itself)
- STORE destination to persist results as list
- BY nosort to skip sorting
- 7 unit tests, entries in test-commands.sh and test-consistency.sh
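For reference, a hedged sketch of how SORT's BY/GET patterns are usually expanded (first `*` replaced by the element, `#` meaning the element itself). The helper name and signature below are illustrative; the crate's actual apply_pattern in key_extra.rs may differ.

```rust
// Illustrative only: typical SORT BY/GET pattern expansion.
fn expand_pattern(pattern: &[u8], element: &[u8]) -> Vec<u8> {
    if pattern == b"#" {
        return element.to_vec(); // GET # returns the element itself
    }
    match pattern.iter().position(|&b| b == b'*') {
        Some(i) => {
            // Replace the first '*' with the element: weight_* + "a" -> weight_a
            let mut out = Vec::with_capacity(pattern.len() + element.len());
            out.extend_from_slice(&pattern[..i]);
            out.extend_from_slice(element);
            out.extend_from_slice(&pattern[i + 1..]);
            out
        }
        // A BY pattern without '*' is the classic "nosort" case; returned as-is here.
        None => pattern.to_vec(),
    }
}
```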
…, GEOSEARCH, GEOSEARCHSTORE)
Full Redis-compatible GEO commands backed by sorted sets with geohash scores:
- GEOADD with NX/XX/CH flags and coordinate validation
- GEOPOS returns (lon, lat) pairs for members
- GEODIST with M/KM/FT/MI unit support
- GEOHASH returns 11-char base32 geohash strings
- GEOSEARCH with FROMLONLAT/FROMMEMBER, BYRADIUS/BYBOX, ASC/DESC, COUNT, WITHCOORD/WITHDIST/WITHHASH options
- GEOSEARCHSTORE for persisting search results
- Haversine distance formula (6372797m earth radius)
- 52-bit integer geohash encoding (Redis-compatible)
- 8 unit tests, entries in test-commands.sh and test-consistency.sh
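For reference, a minimal sketch of the haversine formula with the 6372797 m radius mentioned above; the function name is illustrative and the in-tree implementation may order the operations differently.

```rust
// Illustrative haversine distance in meters; EARTH_RADIUS_M matches the
// 6372797 m value cited in the commit message.
const EARTH_RADIUS_M: f64 = 6_372_797.0;

fn haversine_m(lon1: f64, lat1: f64, lon2: f64, lat2: f64) -> f64 {
    let (phi1, phi2) = (lat1.to_radians(), lat2.to_radians());
    let dphi = (lat2 - lat1).to_radians();
    let dlambda = (lon2 - lon1).to_radians();
    let a = (dphi / 2.0).sin().powi(2)
        + phi1.cos() * phi2.cos() * (dlambda / 2.0).sin().powi(2);
    2.0 * EARTH_RADIUS_M * a.sqrt().asin()
}
```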
BGSAVE shares the (6, b'b') bucket with BITPOS — the prefilter is intentionally coarse (documented behavior), so remove BGSAVE from the expected-false list.
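To make the bucket collision concrete, here is a hedged sketch of a coarse (length, first byte) prefilter key; the dispatcher's real prefilter may be built differently, but any scheme this coarse puts BGSAVE and BITPOS in the same bucket.

```rust
// Illustrative coarse prefilter: commands bucketed by length and lowercased
// first byte, so false positives are expected by design.
fn bucket(cmd: &[u8]) -> (usize, u8) {
    (cmd.len(), cmd.first().map_or(0, |b| b.to_ascii_lowercase()))
}

fn main() {
    // Both 6 bytes long and both start with 'b' -> same (6, b'b') bucket.
    assert_eq!(bucket(b"BGSAVE"), (6, b'b'));
    assert_eq!(bucket(b"BITPOS"), (6, b'b'));
}
```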
CONFIG REWRITE serializes runtime + server config to moon.conf:
- Atomic write (tmpfile + rename) to <dir>/moon.conf
- Redis-style config format (key value per line)
- Persists all CONFIG SET-able parameters
- CONFIG RESETSTAT stub for compatibility
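A minimal sketch of the tmpfile + rename pattern described above, assuming standard-library I/O; the actual config_rewrite code may name the temp file and handle errors differently.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Illustrative atomic write: data lands in a temp file first, then a rename
// swaps it into place so readers never observe a partially written moon.conf.
fn write_config_atomically(dir: &Path, contents: &str) -> std::io::Result<()> {
    let tmp = dir.join("moon.conf.tmp");
    let dst = dir.join("moon.conf");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(contents.as_bytes())?;
    f.sync_all()?; // flush to disk before the rename
    fs::rename(tmp, dst) // atomic replacement on the same filesystem
}
```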
CONFIG REWRITE:
- Atomic write (tmpfile + rename) to <dir>/moon.conf
- Redis-style key-value format, all CONFIG SET-able params

CLIENT PAUSE/UNPAUSE:
- CLIENT PAUSE timeout [WRITE|ALL] — delays command processing
- CLIENT UNPAUSE — clears pause immediately
- Lock-free: extracts deadline before sleeping, no lock held across await
- CLIENT INFO, CLIENT LIST (stub), CLIENT NO-EVICT/NO-TOUCH (accepted)

MEMORY USAGE/DOCTOR/HELP:
- MEMORY USAGE key — returns estimated bytes via estimate_memory()
- MEMORY DOCTOR — health report stub
- MEMORY HELP — subcommand listing

Lazyfree configuration:
- CONFIG SET lazyfree-threshold N (default 64)
- CONFIG GET lazyfree-threshold
- Stored in RuntimeConfig for future use by unlink/eviction
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough
Adds many Redis-parity commands and helpers: bit/bitmap ops and bitfield, geospatial subsystem, COPY/SORT/MEMORY key ops, CONFIG REWRITE/RESETSTAT, CLIENT PAUSE/UNPAUSE, expiry/time/touch/randomkey/flushdb, dispatch/metadata updates, runtime-config fields, and expanded tests.
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Handler as "Conn Handler"
participant Runtime as "RuntimeConfig"
participant DB as "Database"
participant FS as "Filesystem"
Client->>Handler: SEND COMMAND (e.g., CLIENT PAUSE / CONFIG REWRITE / GEOSEARCH)
Handler->>Runtime: read client_pause_deadline_ms & client_pause_write_only
alt Pause active for this command
Handler->>Handler: compute remaining wait
Handler->>Handler: sleep/poll loop (50ms chunks)
Handler-->>Client: resume processing
end
alt CONFIG REWRITE
Handler->>FS: write tmp file (moon.conf.tmp)
FS-->>Handler: write OK
Handler->>FS: rename tmp -> moon.conf (atomic)
FS-->>Handler: rename OK
Handler-->>Client: "+OK"
else GEOSEARCH or DB command
Handler->>DB: execute command (e.g., GEOSEARCH filter/sort)
DB-->>Handler: results
Handler-->>Client: formatted response
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Review Summary by Qodo
Implement 20 missing Redis commands (bit ops, geospatial, COPY, SORT, CONFIG, CLIENT PAUSE)
Walkthroughs
Description
• Implements 20 new Redis commands across bit operations, geospatial, key operations, and configuration management
• **Bit operations**: GETBIT, SETBIT, BITCOUNT, BITOP, BITPOS with BYTE/BIT mode support and read-only variants
• **Geospatial commands**: GEOADD (NX/XX/CH), GEOPOS, GEODIST, GEOHASH, GEOSEARCH, GEOSEARCHSTORE with haversine distance calculations and multiple search shapes
• **Key operations**: COPY with REPLACE option, SORT with BY/GET/LIMIT/ALPHA/ASC/DESC/STORE, MEMORY USAGE/DOCTOR/HELP
• **Configuration**: CONFIG REWRITE with atomic tmpfile+rename, CONFIG RESETSTAT, lazyfree-threshold parameter support
• **Client management**: CLIENT PAUSE/UNPAUSE with WRITE-only mode, `CLIENT INFO/LIST/NO-EVICT/NO-TOUCH`
• Adds PHF metadata entries, dispatch table routing, and comprehensive unit tests (43 total)
• Includes integration and consistency test scripts validating behavioral parity with Redis
• Total: +2,755 lines, 1,981 tests passing

Diagram
flowchart LR
A["New Commands<br/>20 total"] --> B["Bit Operations<br/>GETBIT/SETBIT/BITCOUNT/BITOP/BITPOS"]
A --> C["Geospatial<br/>GEOADD/GEOPOS/GEODIST/GEOHASH/GEOSEARCH"]
A --> D["Key Operations<br/>COPY/SORT/MEMORY"]
A --> E["Configuration<br/>CONFIG REWRITE/RESETSTAT<br/>CLIENT PAUSE/UNPAUSE"]
B --> F["Dispatch Routing"]
C --> F
D --> F
E --> F
F --> G["PHF Metadata"]
F --> H["Unit Tests"]
F --> I["Integration Tests"]
File Changes
1. src/command/string/string_bit.rs
Code Review by Qodo
key.rs was 1966 lines, exceeding the 1500-line limit. Extracted COPY, SORT, MEMORY commands + apply_pattern helper + tests into key_extra.rs (530 lines). key.rs is now 1474 lines.
Actionable comments posted: 14
🧹 Nitpick comments (1)
scripts/test-consistency.sh (1)
479-528: Comprehensive consistency test coverage for new commands. The new test block provides good cross-server validation for COPY, bit operations, SORT, and GEO commands. One minor observation:
BITOP XOR is tested in test-commands.sh but missing from the consistency tests here. Consider adding it for completeness:
assert_both "BITOP XOR" BITOP XOR edge:bopdst edge:bop1 edge:bop2
assert_both "GET BITOP XOR" GET edge:bopdst
Otherwise, the coverage aligns well with the new command implementations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-consistency.sh` around lines 479 - 528, Add tests for the missing BITOP XOR case by inserting assert_both calls alongside the other BITOP checks (near the both SET edge:bop1/edge:bop2 and existing BITOP AND/OR/NOT checks): add assert_both "BITOP XOR" BITOP XOR edge:bopdst edge:bop1 edge:bop2 and assert_both "GET BITOP XOR" GET edge:bopdst so the XOR operation is validated in the consistency suite.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/command/config.rs`:
- Around line 225-250: The CONFIG rewrite omits
runtime_config.lazyfree_threshold so the value is lost; add emission for it in
the memory settings block by inserting a line like
lines.push(format!("lazyfree-threshold {}", runtime_config.lazyfree_threshold));
(or if lazyfree_threshold is Option<T> follow the existing pattern used for
save/aclfile and emit only when Some, e.g. if let Some(ref v) =
runtime_config.lazyfree_threshold { lines.push(format!("lazyfree-threshold {}",
v)); }) alongside the other memory settings before the blank String::new().
In `@src/command/geo/geo_cmd.rs`:
- Around line 387-402: The code converts query radii to meters using unit_mult
(from parse_unit/extract_bytes) but then always returns distances in meters
(affecting WITHDIST for BYRADIUS/BYBOX). Preserve the requested unit multiplier
(e.g., store it as requested_unit_mult or keep unit_mult alongside radius_m)
when you parse units in the BYRADIUS/BYBOX path, and when constructing WITHDIST
output (the code that emits dist at WITHDIST/line ~517), convert the computed
meter distance back to the requested unit by dividing by that multiplier (or
multiplying by the inverse) before formatting; update uses of unit_mult,
radius_m, BYRADIUS/BYBOX, WITHDIST, parse_unit, and extract_bytes accordingly.
- Around line 286-296: geosearch_inner is called but its second return value
(the actual matching members) is ignored, so GEOSEARCHSTORE reports a count but
never writes the matches to dest; capture the full return (let (count, members)
= geosearch_inner(...)), and when count > 0 create/replace the destination
sorted set and insert the returned members with their appropriate scores (use
the same member/score representation that other geo commands use), otherwise
remove dest as already done; place this logic in geo_cmd.rs around the existing
call to geosearch_inner so GEOSEARCHSTORE actually stores results in dest
instead of only returning the count.
- Around line 380-431: The parser currently allows both BYRADIUS and BYBOX
options together; update the parsing logic around the BYRADIUS and BYBOX
branches so they are mutually exclusive: when setting radius_m in the BYRADIUS
branch, check if box_width_m or box_height_m is already Some and return a
syntax/error Frame if so, and similarly when setting box_width_m/box_height_m in
the BYBOX branch check if radius_m is already Some and return an error;
reference the BYRADIUS and BYBOX branches and the variables radius_m,
box_width_m, and box_height_m to locate where to add these checks.
- Around line 372-459: The argument-parsing loop in the geo search handler (the
while i < args.len() loop using variables i, args and local arg) silently
ignores unknown tokens (e.g., anything not matching BYRADIUS, BYBOX, ASC, DESC,
COUNT, WITHCOORD, WITHDIST, WITHHASH), so add a terminal else branch that
returns a syntax error instead of falling through; specifically, after the
existing else-if chain replace the final falling-through behavior with: return
(0, Frame::Error(Bytes::from_static(b"ERR syntax error"))); so unrecognized
tokens immediately cause an error (ensure this is done before the `i += 1` that
advances the loop).
- Around line 135-168: The hot path clones the entire members map and uses heap
allocations (members.clone(), Vec::with_capacity/Vec::new and format!) which
violates the no-allocation rule; change the code to avoid cloning by
matching/looking up against the existing members_map by reference (use
members_map.get(member) directly), replace heap-allocated temporary Vecs with
SmallVec for the two-element coordinate array and for small temporary
collections, and replace format! with stack formatting via itoa (use an
itoa::Buffer to write lon/lat into a small stack buffer and build Bytes from
that slice) when creating Frame::BulkString; keep using geohash_decode and
extract_bytes as-is but ensure no clone() or String/Vec allocations occur in the
loop.
In `@src/command/key.rs`:
- Around line 539-562: The COPY implementation currently returns an error or 1
in cases where Redis semantics require integer 0; update the branches in the key
copy logic so that when db.exists(src) is false, when src == dst, and when
db.get(src) returns None due to lazy expiry (the race between exists() and
get()), they return Frame::Integer(0) instead of Frame::Error(...) or
Frame::Integer(1); keep the existing db.set(dst, cloned) and the existing
destination-exists check with !replace unchanged.
- Around line 739-759: The sort closure currently coerces parse failures to 0.0
via unwrap_or(0.0) (see indices.sort_by closure using sort_keys and alpha),
which silently accepts non-numeric keys; instead detect parse failures and
return an error from the enclosing operation. Change the logic that
builds/compares numeric keys so that when alpha is false and from_utf8 or
parse::<f64>() fails for a present sort key, you propagate/return an Err with a
clear message rather than using 0.0; this may require converting the enclosing
function to return Result and replacing indices.sort_by's inline
numeric-coercion with a checked parse step that triggers the error on invalid
numeric input.
- Around line 518-537: The loop parsing optional COPY options skips validating
the token after REPLACE because i is incremented inside the REPLACE branch and
again at loop end; remove the i += 1 from the REPLACE branch and ensure the
index advances exactly once per iteration (e.g., keep a single unconditional i
+= 1 at the end of the while i < args.len() loop) so that extract_key/arg
validation runs for every token; update the parsing block that uses replace,
extract_key, and the while i < args.len() loop accordingly.
In `@src/command/mod.rs`:
- Around line 208-210: The SORT command is being classified as a write solely by
command name which is wrong because it only mutates when a STORE subargument is
present; change the logic around cmd handling so that instead of immediately
returning resp(key::sort(db, args)) for cmd.eq_ignore_ascii_case(b"SORT"), you
parse the SORT arguments (args) to detect a "STORE" target (or otherwise inspect
the parsed options) and only take the write path when STORE is present,
otherwise treat it as a read; adjust the decision point so key::sort(db, args)
is invoked under the correct read/write classification after argument
inspection.
- Around line 406-408: The listed commands (geo::geopos, geo::geodist,
geo::geohash, geo::geosearch and the MEMORY subcommands) are currently routed
via the write path (dispatch) — change their routing so they go through the
read-path handler (dispatch_read) instead; locate the existing
if/eq_ignore_ascii_case checks around the shown block (and the other ranges
noted: 445-462, 560-678) and replace the direct write-path responses
(resp(...)/dispatch(...)) with the equivalent read-path invocation used
elsewhere (call dispatch_read() or the resp_read-style wrapper used for other
read-only commands) so these commands are treated as non-write/read-only and
respect replica/AOF/CLIENT PAUSE WRITE semantics.
In `@src/command/string/string_bit.rs`:
- Around line 715-722: The normalize_index function currently clamps any
out-of-range index to len-1, causing past-end start indexes (e.g.,
normalize_index(100, 1)) to point to the last byte instead of representing an
empty range; update normalize_index so that if the computed normalized index >=
len it returns len (signaling an empty/one-past-end position), if normalized < 0
it returns 0, otherwise return the normalized index; adjust the logic in fn
normalize_index(idx: i64, len: i64) to compute normalized = if idx < 0 { len +
idx } else { idx } and then return normalized if 0 <= normalized < len, 0 if
negative, or len if >= len so BITCOUNT/BITPOS callers treat past-end starts as
empty.
In `@src/server/conn/handler_sharded.rs`:
- Around line 684-710: The current single full-duration sleep (using
pause_wait_ms) blocks processing of a subsequent CLIENT UNPAUSE because it never
rechecks ctx.runtime_config; replace it with an interruptible recheck loop:
repeatedly read ctx.runtime_config (checking client_pause_deadline_ms and
client_pause_write_only via the same logic using is_write(cmd)), compute the
remaining wait, break immediately if zero, otherwise sleep for a short slice
(e.g. max 50–200ms or min(remaining, slice)) and then re-evaluate; implement the
loop for both runtime-tokio and runtime-monoio paths so sleeping tasks can
observe updates to client_pause_deadline_ms and exit early when UNPAUSE clears
the deadline.
- Around line 936-965: The CLIENT PAUSE handler in handler_sharded.rs
incorrectly treats any third argument as WRITE and ignores extra trailing args
and can overflow when computing deadline; update the branch that parses cmd_args
in the CLIENT PAUSE handling to strictly validate arity/mode (allow only no mode
or a single mode which must be case-insensitive "WRITE" or "ALL", and return an
ERR wrong number of arguments if extra args are present or ERR syntax if mode
invalid), parse the timeout as before, then compute the deadline using a
saturating/checked add (e.g., call
crate::storage::entry::current_time_ms().saturating_add(ms) or use checked_add
and cap to u64::MAX) before writing into rt.client_pause_deadline_ms and set
rt.client_pause_write_only only if mode == "WRITE"; reference cmd_args,
extract_bytes, current_time_ms(), runtime_config.write(),
client_pause_deadline_ms, and client_pause_write_only to locate and change the
logic.
---
Nitpick comments:
In `@scripts/test-consistency.sh`:
- Around line 479-528: Add tests for the missing BITOP XOR case by inserting
assert_both calls alongside the other BITOP checks (near the both SET
edge:bop1/edge:bop2 and existing BITOP AND/OR/NOT checks): add assert_both
"BITOP XOR" BITOP XOR edge:bopdst edge:bop1 edge:bop2 and assert_both "GET BITOP
XOR" GET edge:bopdst so the XOR operation is validated in the consistency suite.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cde4d96c-729e-4994-8086-838b3603e37a
📒 Files selected for processing (14)
- scripts/test-commands.sh
- scripts/test-consistency.sh
- src/command/config.rs
- src/command/geo/geo_cmd.rs
- src/command/geo/mod.rs
- src/command/key.rs
- src/command/metadata.rs
- src/command/mod.rs
- src/command/string/mod.rs
- src/command/string/string_bit.rs
- src/config.rs
- src/server/conn/handler_sharded.rs
- src/server/conn/shared.rs
- src/storage/eviction.rs
        Ok(Some((members, _))) => Some(members.clone()),
        Ok(None) => None,
        Err(e) => return e,
    };

    let mut results = Vec::with_capacity(args.len() - 1);
    for arg in &args[1..] {
        let member = match extract_bytes(arg) {
            Some(m) => m,
            None => {
                results.push(Frame::Null);
                continue;
            }
        };

        match &members_map {
            Some(m) => match m.get(member) {
                Some(&score) => {
                    let (lon, lat) = geohash_decode(score);
                    results.push(Frame::Array(
                        vec![
                            Frame::BulkString(Bytes::from(format!("{:.4}", lon))),
                            Frame::BulkString(Bytes::from(format!("{:.4}", lat))),
                        ]
                        .into(),
                    ));
                }
                None => results.push(Frame::Null),
            },
            None => results.push(Frame::Null),
        }
    }

    Frame::Array(results.into())
🛠️ Refactor suggestion | 🟠 Major
Hot-path allocations in command handling violate repo performance constraints.
This path repeatedly clones full maps and allocates via format!/Vec::new in command execution. In src/command/**, those allocations are explicitly disallowed on hot paths.
As per coding guidelines "No allocations (Box::new(), Vec::new(), String::new(), Arc::new(), clone(), format!(), to_string()) on hot paths: command dispatch..." and "Use SmallVec and itoa instead of allocating Vec and format!() on hot paths."
Also applies to: 204-208, 237-241, 471-537
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/geo/geo_cmd.rs` around lines 135 - 168, The hot path clones the
entire members map and uses heap allocations (members.clone(),
Vec::with_capacity/Vec::new and format!) which violates the no-allocation rule;
change the code to avoid cloning by matching/looking up against the existing
members_map by reference (use members_map.get(member) directly), replace
heap-allocated temporary Vecs with SmallVec for the two-element coordinate array
and for small temporary collections, and replace format! with stack formatting
via itoa (use an itoa::Buffer to write lon/lat into a small stack buffer and
build Bytes from that slice) when creating Frame::BulkString; keep using
geohash_decode and extract_bytes as-is but ensure no clone() or String/Vec
allocations occur in the loop.
    if cmd.eq_ignore_ascii_case(b"SORT") {
        return resp(key::sort(db, args));
    }
SORT cannot be classified by command name alone.
The handler chooses read vs write before parsing args, but SORT only mutates when STORE is present. Marking all SORT calls as write will AOF-log plain sorts and reject them on replicas; marking it read will make SORT ... STORE miss the write path entirely.
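One hedged way to do the split: scan the argument bytes for a STORE token before choosing the path. The check below works on raw argument slices; the actual dispatcher operates on Frame values, so treat this as a sketch rather than the PR's implementation.

```rust
// Illustrative pre-routing check: SORT only writes when a STORE token is
// present. A literal "STORE" used as a BY/GET pattern value would be a false
// positive, which errs on the safe (write) side.
fn sort_has_store(args: &[&[u8]]) -> bool {
    args.iter().any(|a| a.eq_ignore_ascii_case(b"STORE"))
}

fn main() {
    let plain: [&[u8]; 2] = [b"mylist", b"ALPHA"];
    let with_store: [&[u8]; 3] = [b"mylist", b"STORE", b"dst"];
    assert!(!sort_has_store(&plain));
    assert!(sort_has_store(&with_store));
}
```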
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/mod.rs` around lines 208 - 210, The SORT command is being
classified as a write solely by command name which is wrong because it only
mutates when a STORE subargument is present; change the logic around cmd
handling so that instead of immediately returning resp(key::sort(db, args)) for
cmd.eq_ignore_ascii_case(b"SORT"), you parse the SORT arguments (args) to detect
a "STORE" target (or otherwise inspect the parsed options) and only take the
write path when STORE is present, otherwise treat it as a read; adjust the
decision point so key::sort(db, args) is invoked under the correct read/write
classification after argument inspection.
    if cmd.eq_ignore_ascii_case(b"GEOPOS") {
        return resp(geo::geopos(db, args));
    }
These read-only commands need a dispatch_read() path.
GEOPOS, GEODIST, GEOHASH, GEOSEARCH, and the MEMORY subcommands are only routed through dispatch(). In src/server/conn/handler_sharded.rs, non-write commands go straight to dispatch_read(), so these either fall through as unknown commands or have to be mislabeled as writes, which then breaks replica/AOF/CLIENT PAUSE WRITE semantics.
Also applies to: 445-462, 560-678
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/mod.rs` around lines 406 - 408, The listed commands (geo::geopos,
geo::geodist, geo::geohash, geo::geosearch and the MEMORY subcommands) are
currently routed via the write path (dispatch) — change their routing so they go
through the read-path handler (dispatch_read) instead; locate the existing
if/eq_ignore_ascii_case checks around the shown block (and the other ranges
noted: 445-462, 560-678) and replace the direct write-path responses
(resp(...)/dispatch(...)) with the equivalent read-path invocation used
elsewhere (call dispatch_read() or the resp_read-style wrapper used for other
read-only commands) so these commands are treated as non-write/read-only and
respect replica/AOF/CLIENT PAUSE WRITE semantics.
Actionable comments posted: 5
♻️ Duplicate comments (1)
src/command/config.rs (1)
220-232: ⚠️ Potential issue | 🟠 Major
CONFIG REWRITE still drops lazyfree-threshold.
The memory block serializes the other runtime knobs but not runtime_config.lazyfree_threshold, so a rewrite followed by restart silently resets the value to the default.
💡 Suggested fix
     lines.push(format!("lfu-log-factor {}", runtime_config.lfu_log_factor));
     lines.push(format!("lfu-decay-time {}", runtime_config.lfu_decay_time));
+    lines.push(format!(
+        "lazyfree-threshold {}",
+        runtime_config.lazyfree_threshold
+    ));
     lines.push(String::new());

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/config.rs` around lines 220 - 232, The memory serialization block in src/command/config.rs omits runtime_config.lazyfree_threshold so CONFIG REWRITE loses that setting; update the block that pushes memory-related lines (the vector built where lines.push(...) is called for maxmemory, maxmemory-policy, maxmemory-samples, lfu-log-factor, lfu-decay-time) to also push a formatted "lazyfree-threshold {value}" line using runtime_config.lazyfree_threshold so the value is preserved across rewrites and restarts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/command/config.rs`:
- Around line 178-186: The match arm currently maps both
"lazyfree-lazy-eviction" and "lazyfree-threshold" into the single field
runtime_config.lazyfree_threshold, causing CONFIG GET/SET to be inconsistent;
either reject the alias or persist it separately. Easiest fix: split the match
into two branches — keep the existing parsing and assignment for
"lazyfree-threshold" using runtime_config.lazyfree_threshold, and for
"lazyfree-lazy-eviction" return a Frame::Error explaining the alias is
unsupported (so the CLI/user gets a clear message). Alternatively, if you want
full support, add a new config field (e.g.,
runtime_config.lazyfree_lazy_eviction), wire it into CONFIG GET/SET code paths
and validate/parse it separately instead of reusing lazyfree_threshold.
- Around line 394-414: The test test_config_rewrite currently writes moon.conf
into the shared system temp dir (rt.dir), which can race in parallel; modify the
test to create a unique per-test temp directory (e.g., using tempfile::tempdir()
or creating a subdir under std::env::temp_dir() with a UUID) and set rt.dir to
that path before calling config_rewrite(&rt, &sc), assert against the conf file
inside that unique dir, and then remove/let the tempdir cleanup handle deletion
to avoid cross-test interference; keep references to RuntimeConfig (rt),
config_rewrite, and the moon.conf filename when updating the assertions and
cleanup.
In `@src/command/key_extra.rs`:
- Around line 52-60: The COPY handler currently returns an error for missing
source and unconditionally returns 1 for identical src/dst; change the early
exits to match Redis: if !db.exists(src) return Frame::Integer(0) (not an
error), and if src == dst return Frame::Integer(1) only when the REPLACE option
is set, otherwise return Frame::Integer(0). Update the branch that checks
src==dst to inspect the COPY REPLACE flag (e.g., opts.replace or replace) and
use db.exists(src) for the existence check so lazy expiry semantics remain
respected.
- Around line 81-96: The MEMORY USAGE implementation (memory_usage) currently
ignores trailing args; update it to parse optional parameters in a loop like
SORT does: after extracting the key, iterate remaining args and accept only the
token "SAMPLES" followed by a valid positive integer (parseable via the same
parser used elsewhere or standard integer parse), storing/using the sample count
if needed, and on any malformed token (unknown option or non-integer count)
return err_wrong_args("MEMORY"); ensure unknown extra arguments also trigger
err_wrong_args("MEMORY") so calls like `MEMORY USAGE foo garbage` or `MEMORY
USAGE foo SAMPLES nope` fail instead of succeeding.
- Around line 196-205: The STORE branch for SORT currently counts all entries in
results but only pushes Frame::BulkString into the destination list, which drops
Frame::Null values and causes the stored list length to diverge from the
returned integer; update the storage logic in the block that handles store_dest
so it preserves nils by pushing explicit nil entries into the destination list
for Frame::Null (e.g., create list elements that represent nil instead of
skipping), ensure the count remains results.len() cast to i64, and continue to
call db.set(Bytes::copy_from_slice(dest), entry) with the updated Entry (use the
same code paths that construct entries elsewhere, e.g., Entry::new_list or
equivalent, and ensure Frame::BulkString values are stored as before while
Frame::Null produces a nil list element).
---
Duplicate comments:
In `@src/command/config.rs`:
- Around line 220-232: The memory serialization block in src/command/config.rs
omits runtime_config.lazyfree_threshold so CONFIG REWRITE loses that setting;
update the block that pushes memory-related lines (the vector built where
lines.push(...) is called for maxmemory, maxmemory-policy, maxmemory-samples,
lfu-log-factor, lfu-decay-time) to also push a formatted "lazyfree-threshold
{value}" line using runtime_config.lazyfree_threshold so the value is preserved
across rewrites and restarts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 56bf75e6-d282-483d-a300-ed14603b2948
📒 Files selected for processing (5)
- CHANGELOG.md
- src/command/config.rs
- src/command/key.rs
- src/command/key_extra.rs
- src/command/mod.rs
✅ Files skipped from review due to trivial changes (2)
- src/command/key.rs
- CHANGELOG.md
Critical bugs:
- GEOSEARCHSTORE now writes matches to destination sorted set
- CLIENT PAUSE uses 50ms polling loop so UNPAUSE takes effect immediately
- normalize_index split into normalize_start/normalize_end; past-end start no longer clamped to len-1 (produces correct empty ranges)

Correctness:
- COPY returns 0 (not ERR) for missing source and same-key (Redis compat)
- COPY REPLACE no longer double-increments i (accepts valid trailing args)
- SORT numeric mode returns ERR for non-numeric elements without ALPHA
- GEOSEARCH rejects unknown options with ERR instead of silently ignoring
- GEOSEARCH enforces BYRADIUS/BYBOX mutual exclusivity
- WITHDIST reports distance in query unit (km/mi/ft), not always meters
- CLIENT PAUSE validates mode arg (WRITE|ALL only), rejects trailing garbage
- CLIENT PAUSE deadline uses saturating_add to prevent overflow

Config:
- CONFIG REWRITE now persists lazyfree-threshold
- Removed misleading lazyfree-lazy-eviction alias from CONFIG SET

Performance:
- geopos() no longer clones entire members HashMap; collects scores via short borrow then formats outside the borrow scope
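A hedged sketch of the 50 ms polling loop described above, assuming a tokio runtime; the struct and field names below are stand-ins for the real RuntimeConfig fields (client_pause_deadline_ms, client_pause_write_only).

```rust
use std::sync::RwLock;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Stand-in for the pause-related fields on RuntimeConfig.
struct PauseState {
    deadline_ms: u64,
    write_only: bool,
}

fn now_ms() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis() as u64
}

// Sleep in short slices and re-read the shared state each time, so a
// CLIENT UNPAUSE (which clears the deadline) takes effect within ~50 ms.
async fn wait_for_pause(state: &RwLock<PauseState>, is_write_cmd: bool) {
    loop {
        let (deadline, write_only) = {
            let s = state.read().unwrap(); // short read lock, dropped before await
            (s.deadline_ms, s.write_only)
        };
        let now = now_ms();
        if deadline <= now || (write_only && !is_write_cmd) {
            return; // not paused, pause expired, or pause does not apply
        }
        let remaining = deadline - now;
        tokio::time::sleep(Duration::from_millis(remaining.min(50))).await;
    }
}
```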
…EY, TOUCH)
Closes 7 command gaps identified in Redis parity audit:
- EXPIREAT/PEXPIREAT: absolute Unix timestamp expiry (seconds/ms)
- EXPIRETIME/PEXPIRETIME: read back absolute expiry timestamp
- FLUSHDB/FLUSHALL: clear all keys in current database
- TIME: return server clock as [seconds, microseconds]
- RANDOMKEY: return a random key from the database
- TOUCH: refresh LRU/LFU access time, return count of existing keys
- SHUTDOWN: dispatch entry (returns ERR, actual shutdown via signal)
Also adds Database::random_key() helper. 11 unit tests, entries in test-commands.sh and test-consistency.sh.
Implements 10 commands, closing every actionable gap:

New commands:
- BITFIELD: GET/SET/INCRBY with type specifiers (u8/i16/u32/...), OVERFLOW WRAP/SAT/FAIL, bit-level packed integer array
- LCS: longest common subsequence with LEN option (DP algorithm)
- XSETID: set stream last-delivered ID without adding entries
- GEORADIUS: deprecated wrapper translating to GEOSEARCH
- GEORADIUSBYMEMBER: deprecated wrapper translating to GEOSEARCH
- LOLWUT: Easter egg returning "Moon v0.1.0"
- SWAPDB: dispatch entry (returns ERR in sharded mode)

Enhanced commands:
- OBJECT FREQ: return LFU access counter
- OBJECT IDLETIME: return idle seconds (16-bit wraparound-safe)
- OBJECT REFCOUNT: always returns 1 (no reference counting)

Metadata added: LCS, XSETID
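For the LCS LEN path mentioned above, the textbook O(n·m) dynamic program with a rolling row is sketched below; this is a hedged illustration, and the in-tree implementation may build the full table so it can also report match ranges.

```rust
// Illustrative LCS-length DP; `LCS key1 key2 LEN` returns exactly this number.
fn lcs_len(a: &[u8], b: &[u8]) -> usize {
    let mut prev = vec![0usize; b.len() + 1];
    let mut cur = vec![0usize; b.len() + 1];
    for &ca in a {
        for (j, &cb) in b.iter().enumerate() {
            cur[j + 1] = if ca == cb {
                prev[j] + 1
            } else {
                prev[j + 1].max(cur[j])
            };
        }
        std::mem::swap(&mut prev, &mut cur);
    }
    prev[b.len()]
}

fn main() {
    // Matches the example from the Redis docs: the LCS of these keys is "mytext".
    assert_eq!(lcs_len(b"ohmytext", b"mynewtext"), 6);
}
```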
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/command/mod.rs (1)
54-892: 🛠️ Refactor suggestion | 🟠 Major
Split this registry back under the 1500-line cap.
This file is now well past the repo limit, and adding more command buckets here will keep making routing changes harder to review safely. Please split the dispatcher again before it grows further. As per coding guidelines, "No single .rs file should exceed 1500 lines."
Also applies to: 975-1254
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/mod.rs` around lines 54 - 892, The dispatcher in dispatch_inner has grown past the 1500-line limit; split the big match into smaller, focused modules and delegate so the top-level dispatch_inner stays short. Create new helper functions (e.g., dispatch_len_3, dispatch_len_4, dispatch_len_5, etc.) or per-category modules (e.g., registry_string, registry_hash, registry_list) that accept the same parameters (db, cmd, args, selected_db, db_count) or a subset and return an Option<DispatchResult> (or DispatchResult) and move the corresponding match arms into those functions; keep the inline resp helper or expose a small wrapper for responses. Replace the huge match in dispatch_inner with calls to those new dispatch_* helpers in length order and return early on Some(result), falling back to err_unknown(cmd) if none match so behavior is unchanged. Ensure unique symbols moved include dispatch_inner, resp, and the existing function calls like string::get/set, key::ttl, hash::hset/hget, list::lpop/llen, stream::xadd/xlen, sorted_set::zadd/zrem, etc., so reviewers can locate the moved code.
♻️ Duplicate comments (4)
src/command/key_extra.rs (2)
81-96: ⚠️ Potential issue | 🟡 Minor
Validate MEMORY USAGE options instead of ignoring them.
This only checks the key, so calls like MEMORY USAGE k garbage or MEMORY USAGE k SAMPLES nope currently succeed. Since the command advertises SAMPLES count, the trailing args need to be parsed and rejected on invalid syntax.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key_extra.rs` around lines 81 - 96, The memory_usage function currently only validates the key and ignores trailing options; update memory_usage to parse and validate optional arguments after the key: accept either no trailing args, or exactly two trailing args where args[1] equals "SAMPLES" (case-insensitive) and args[2] parses as a positive integer; any other form (extra args, unknown flags, non-integer sample count, or negative/zero count) should return err_wrong_args("MEMORY"). Keep the existing extract_key usage for args[0], then validate args.len() and the contents of args[1]/args[2] before calling db.get(key); ensure no extra trailing arguments are allowed.
331-345: ⚠️ Potential issue | 🟠 Major
SORT ... STORE drops NULL GET results.
Line 333 returns results.len(), but Lines 335-338 only persist Frame::BulkString items. Any missing GET lookup increments the returned count while silently disappearing from the stored list, so the stored value diverges from the reply shape.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key_extra.rs` around lines 331 - 345, The code counts all results but only stores Frame::BulkString items, dropping NULLs; change the collection logic so the stored list preserves NULLs in the same positions as the reply. Instead of building a VecDeque<Bytes>, iterate over results and push either crate::storage::entry::RedisValue::BulkString(b) for Frame::BulkString(b) or crate::storage::entry::RedisValue::Null for other frames into a VecDeque<crate::storage::entry::RedisValue>, then call CompactValue::from_redis_value(crate::storage::entry::RedisValue::List(list)) and db.set(...) so the stored Entry (created by Entry::new_list()) matches the reply length and positions, while still returning Frame::Integer(count).
src/command/mod.rs (2)
215-217: ⚠️ Potential issue | 🟠 Major
Route SORT after inspecting STORE.
Line 215 still sends every SORT invocation down the write path. Plain SORT key is read-only, while SORT ... STORE dst mutates. Without parsing for STORE before routing, one side of that split will stay misclassified.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/mod.rs` around lines 215 - 217, The SORT command is currently always routed to the write path because of the check if cmd.eq_ignore_ascii_case(b"SORT") returning resp(key_extra::sort(db, args)); to fix, parse the args passed to key_extra::sort for the presence of the "STORE" token (case-insensitive) and the destination key: if "STORE" is present route the call down the write path (keep the existing behavior), otherwise treat plain "SORT key" as read-only and route it to the read path instead; update the routing logic around cmd.eq_ignore_ascii_case(b"SORT") to inspect args for "STORE" before deciding to call key_extra::sort on the write side.
224-225: ⚠️ Potential issue | 🟠 Major
Add read-path handlers for the new read-only commands.
dispatch_read_inner() still has no arms for TIME, GEOPOS, GEODIST, GEOHASH, GEOSEARCH, RANDOMKEY, EXPIRETIME, PEXPIRETIME, or the MEMORY subcommands. As wired now, these commands either have to stay mislabeled as writes or start returning ERR unknown command once metadata marks them read.
Also applies to: 422-424, 462-480, 596-600, 733-735, 748-750, 782-818
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/mod.rs` around lines 224 - 225, dispatch_read_inner() is missing read-path arms for several read-only commands so they are misrouted; add match arms for "TIME", "GEOPOS", "GEODIST", "GEOHASH", "GEOSEARCH", "RANDOMKEY", "EXPIRETIME", "PEXPIRETIME", and the "MEMORY" subcommands and forward them to their existing read handlers (e.g., route TIME -> key::time(), GEOPOS/GEODIST/GEOHASH/GEOSEARCH -> the geo module's read functions, RANDOMKEY -> key::randomkey(), EXPIRETIME/PEXPIRETIME -> key::expiretime()/key::pexpiretime(), and MEMORY subcommands -> memory::... handlers) within dispatch_read_inner() so they return read responses instead of being treated as writes or unknown; ensure you use the same resp(...) wrapper and the same bytewise comparison style (cmd.eq_ignore_ascii_case(...)) as the TIME example to match other occurrences noted (also update the other instances referenced).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/command/geo/geo_cmd.rs`:
- Around line 539-550: The COUNT branch currently uses parse_f64 and truncates
via as usize which accepts fractional inputs (e.g., 1.9 → 1); change the
parsing/validation in the COUNT handling (the block that uses parse_f64, i, and
sets count_limit) to only accept positive integers: either parse the argument as
an integer type (usize/usize parsing helper) or, if you keep parse_f64,
explicitly reject values whose fractional part is non-zero (v.fract() != 0.0)
and return the same ERR syntax error; ensure count_limit is set only when a
valid integer > 0 is provided.
- Around line 403-424: After parsing center_lon and center_lat in the FROMLONLAT
branch, validate they fall within Redis geo bounds (longitude between -180 and
180, latitude between -85 and 85 or the exact bounds used by your GEOADD
implementation) and return the same Frame::Error used by GEOADD on out-of-range
values instead of proceeding; update the FROMLONLAT handling in geo_cmd.rs (the
branch that sets center_lon, center_lat and found_from) to perform these range
checks and return the identical error message/byte payload as GEOADD when
invalid.
- Around line 324-329: geosearch_inner returns (matches, error) but the code
currently ignores the error by destructuring with `_` and treats empty matches
as a valid "no results" case; update the call in geo_cmd.rs (where
geosearch_inner is invoked for GEOSEARCHSTORE) to capture the second return
value (e.g., let (matches, maybe_err) = geosearch_inner(...)), check if that
error is present and immediately return it (propagate the error Frame) before
evaluating matches.is_empty(); only if there is no error should you proceed to
remove dest and return Frame::Integer(0).
In `@src/command/key.rs`:
- Around line 375-377: The flushdb function currently clears the DB regardless
of arguments; update flushdb to validate _args so only zero args or a single
ASYNC or SYNC token are allowed before calling db.clear(): if _args is empty
proceed, if it has one arg accept only the case-insensitive tokens "ASYNC" or
"SYNC", otherwise return a protocol syntax error Frame (use the same Frame error
variant used elsewhere in the codebase) and do not call db.clear(); ensure you
reference db.clear(), the flushdb function, and the ASYNC/SYNC tokens in the
validation.
- Around line 231-244: Change the EXPIREAT handling to accept any parsed integer
instead of rejecting non-positive timestamps: use parse_int(&args[1]) to parse
the value and if parsing fails return the same invalid integer error, but if the
parsed value is <= 0 treat it as an immediate expiry (compute expires_at_ms
accordingly — e.g. set to 0 or a value that your db.set_expiry interprets as
expired) and call db.set_expiry(key, expires_at_ms) as usual; do the same change
for the PEXPIREAT branch (the block referenced around lines 258-270) so both
absolute-expiry commands mark the key expired when a past timestamp is supplied
rather than returning an error (referencing parse_int, db.set_expiry, and the
EXPIREAT/PEXPIREAT handling blocks).
- Around line 345-348: TOUCH currently uses db.exists(key) which only checks
expiry and does not update access metadata; change the logic in the TOUCH
handler in src/command/key.rs to call the database touch/update-access function
(e.g., db.touch(key) or db.update_last_access(key)) for keys that are
present/valid instead of or in addition to db.exists(key), ensuring you still
perform lazy expiry checks before touching so expiry behavior is preserved and
OBJECT IDLETIME reflects the touch.
In `@src/command/metadata.rs`:
- Around line 304-305: The CommandMeta entry for "XSETID" currently uses
acl_categories: STR; update that entry to use the stream ACL category
(acl_categories: STM) so ACL/metadata consumers treat XSETID as a stream
command—modify the "XSETID" CommandMeta initializer in the dispatch table (the
line with "XSETID" => CommandMeta { ... acl_categories: ... }) to replace STR
with STM.
In `@src/command/stream/stream_write.rs`:
- Around line 849-852: The XSETID handling in the match arm where you call
db.get_stream_mut(key) must prevent moving stream.last_id backwards: before
assigning stream.last_id = id in that block (inside the match arm for
Ok(Some(stream))), compare the incoming id against stream.last_id and if id <=
stream.last_id return an error reply (same error type used elsewhere for invalid
IDs) instead of assigning; only set stream.last_id = id when id is strictly
greater than the current stream.last_id so XADD auto-ID generation cannot
regress or collide. Use the existing response/error pattern used by this module
for consistency and reference the get_stream_mut(key) match arm, stream.last_id,
and the XSETID/XADD code paths when making the change.
In `@src/command/string/string_bit.rs`:
- Around line 169-230: The BITCOUNT implementation accepts invalid arities (e.g.
"BITCOUNT key 0") and silently treats them as full-string counts; add strict
arity checks in bitcount() and bitcount_readonly(): allow only args.len() == 1
(key only), 3 (key start end), or 4 (key start end mode); if args.len() == 2 or
args.len() > 4 return an argument error (use err_wrong_args("BITCOUNT") or the
existing Frame::Error pattern), and then parse start/end/mode only when
args.len() >= 3 as currently done.
- Around line 821-828: The signed bitfield wrap-around uses 1i64 << bits in
bf_incr(), which overflows when bits == 64; update bf_incr() to avoid shifting
into i64 by using a wider intermediate (e.g., compute range as 1i128 << bits and
perform modulo/arithmetic in i128 then cast back), or alternatively reject
64-bit signed fields in parse_encoding() validation; specifically modify
bf_incr()’s range and wrap arithmetic to use i128 intermediates (adjusting all
temporaries and casts) or add a guard in parse_encoding()/wherever
parse_encoding() is used to return None for signed bits == 64 so the earlier
error path triggers. Ensure references are to bf_incr(), parse_encoding(), bits,
and signed so the change is localized.
In `@src/command/string/string_read.rs`:
- Around line 394-405: The loop that parses options (using args, extract_bytes,
want_len) currently accepts and ignores any token other than "LEN"; change it to
validate supported options and return a syntax error for unsupported ones:
inside the while loop after extracting arg, match arg (case-insensitive) against
the supported set ("LEN", "IDX", "MINMATCHLEN", "WITHMATCHLEN") and handle each
branch properly—set want_len for "LEN", parse and validate the expected
numeric/value arguments for "IDX" and "MINMATCHLEN" (using extract_bytes and
converting to integer), set flags for "WITHMATCHLEN" if present, and for any
other token return Frame::Error(Bytes::from_static(b"ERR syntax error")) so
garbage or unsupported options are not silently ignored. Ensure to advance i by
the correct number of consumed args when parsing multi-arg options.
In `@src/storage/db.rs`:
- Around line 293-304: random_key currently samples from self.data without
checking expirations, so it can return keys that GET/EXISTS treat as expired;
update random_key to skip expired entries by using the same expiry check logic
as GET/EXISTS (e.g., call the existing expiration check function or inspect the
stored expiry metadata for entries in self.data), build a collection of
non-expired keys (or iterate until you find a non-expired candidate), pick a
random index from that filtered collection using current_time_ms() % len, and
return None if no non-expired keys remain; ensure you reference and reuse the
same expiry-checking helper used elsewhere to avoid duplicating expiry logic.
---
Outside diff comments:
In `@src/command/mod.rs`:
- Around line 54-892: The dispatcher in dispatch_inner has grown past the
1500-line limit; split the big match into smaller, focused modules and delegate
so the top-level dispatch_inner stays short. Create new helper functions (e.g.,
dispatch_len_3, dispatch_len_4, dispatch_len_5, etc.) or per-category modules
(e.g., registry_string, registry_hash, registry_list) that accept the same
parameters (db, cmd, args, selected_db, db_count) or a subset and return an
Option<DispatchResult> (or DispatchResult) and move the corresponding match arms
into those functions; keep the inline resp helper or expose a small wrapper for
responses. Replace the huge match in dispatch_inner with calls to those new
dispatch_* helpers in length order and return early on Some(result), falling
back to err_unknown(cmd) if none match so behavior is unchanged. Ensure unique
symbols moved include dispatch_inner, resp, and the existing function calls like
string::get/set, key::ttl, hash::hset/hget, list::lpop/llen, stream::xadd/xlen,
sorted_set::zadd/zrem, etc., so reviewers can locate the moved code.
---
Duplicate comments:
In `@src/command/key_extra.rs`:
- Around line 81-96: The memory_usage function currently only validates the key
and ignores trailing options; update memory_usage to parse and validate optional
arguments after the key: accept either no trailing args, or exactly two trailing
args where args[1] equals "SAMPLES" (case-insensitive) and args[2] parses as a
positive integer; any other form (extra args, unknown flags, non-integer sample
count, or negative/zero count) should return err_wrong_args("MEMORY"). Keep the
existing extract_key usage for args[0], then validate args.len() and the
contents of args[1]/args[2] before calling db.get(key); ensure no extra trailing
arguments are allowed.
- Around line 331-345: The code counts all results but only stores
Frame::BulkString items, dropping NULLs; change the collection logic so the
stored list preserves NULLs in the same positions as the reply. Instead of
building a VecDeque<Bytes>, iterate over results and push either
crate::storage::entry::RedisValue::BulkString(b) for Frame::BulkString(b) or
crate::storage::entry::RedisValue::Null for other frames into a
VecDeque<crate::storage::entry::RedisValue>, then call
CompactValue::from_redis_value(crate::storage::entry::RedisValue::List(list))
and db.set(...) so the stored Entry (created by Entry::new_list()) matches the
reply length and positions, while still returning Frame::Integer(count).
In `@src/command/mod.rs`:
- Around line 215-217: The SORT command is currently always routed to the write
path because of the check if cmd.eq_ignore_ascii_case(b"SORT") returning
resp(key_extra::sort(db, args)); to fix, parse the args passed to
key_extra::sort for the presence of the "STORE" token (case-insensitive) and the
destination key: if "STORE" is present route the call down the write path (keep
the existing behavior), otherwise treat plain "SORT key" as read-only and route
it to the read path instead; update the routing logic around
cmd.eq_ignore_ascii_case(b"SORT") to inspect args for "STORE" before deciding to
call key_extra::sort on the write side.
- Around line 224-225: dispatch_read_inner() is missing read-path arms for
several read-only commands so they are misrouted; add match arms for "TIME",
"GEOPOS", "GEODIST", "GEOHASH", "GEOSEARCH", "RANDOMKEY", "EXPIRETIME",
"PEXPIRETIME", and the "MEMORY" subcommands and forward them to their existing
read handlers (e.g., route TIME -> key::time(), GEOPOS/GEODIST/GEOHASH/GEOSEARCH
-> the geo module's read functions, RANDOMKEY -> key::randomkey(),
EXPIRETIME/PEXPIRETIME -> key::expiretime()/key::pexpiretime(), and MEMORY
subcommands -> memory::... handlers) within dispatch_read_inner() so they return
read responses instead of being treated as writes or unknown; ensure you use the
same resp(...) wrapper and the same bytewise comparison style
(cmd.eq_ignore_ascii_case(...)) as the TIME example to match other occurrences
noted (also update the other instances referenced).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6c49255f-3088-4554-bd0f-bcef31e26892
📒 Files selected for processing (14)
- CHANGELOG.md
- scripts/test-commands.sh
- scripts/test-consistency.sh
- src/command/config.rs
- src/command/geo/geo_cmd.rs
- src/command/key.rs
- src/command/key_extra.rs
- src/command/metadata.rs
- src/command/mod.rs
- src/command/stream/stream_write.rs
- src/command/string/string_bit.rs
- src/command/string/string_read.rs
- src/server/conn/handler_sharded.rs
- src/storage/db.rs
✅ Files skipped from review due to trivial changes (2)
- CHANGELOG.md
- src/server/conn/handler_sharded.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- scripts/test-consistency.sh
- scripts/test-commands.sh
- src/command/config.rs
    // Shift args so args[0] is now the source key
    let (matches, _) = geosearch_inner(db, &args[1..], true);

    if matches.is_empty() {
        db.remove(&dest);
        return Frame::Integer(0);
Propagate geosearch_inner() errors before treating the result as empty.
geosearch_inner() reports syntax/unit/WRONGTYPE failures through its second return value. Right now those failures fall into matches.is_empty(), so GEOSEARCHSTORE can return 0 and remove dest on invalid input instead of surfacing the error.
💡 Minimal fix
- let (matches, _) = geosearch_inner(db, &args[1..], true);
+ let (matches, result) = geosearch_inner(db, &args[1..], true);
+ if let Frame::Error(_) = result {
+ return result;
+ }
if matches.is_empty() {
db.remove(&dest);
return Frame::Integer(0);
    }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/geo/geo_cmd.rs` around lines 324 - 329, geosearch_inner returns
(matches, error) but the code currently ignores the error by destructuring with
`_` and treats empty matches as a valid "no results" case; update the call in
geo_cmd.rs (where geosearch_inner is invoked for GEOSEARCHSTORE) to capture the
second return value (e.g., let (matches, maybe_err) = geosearch_inner(...)),
check if that error is present and immediately return it (propagate the error
Frame) before evaluating matches.is_empty(); only if there is no error should
you proceed to remove dest and return Frame::Integer(0).
| } else if arg.eq_ignore_ascii_case(b"FROMLONLAT") { | ||
| i += 1; | ||
| center_lon = match args.get(i).and_then(|f| parse_f64(f)) { | ||
| Some(v) => v, | ||
| None => { | ||
| return ( | ||
| Vec::new(), | ||
| Frame::Error(Bytes::from_static(b"ERR syntax error")), | ||
| ); | ||
| } | ||
| }; | ||
| i += 1; | ||
| center_lat = match args.get(i).and_then(|f| parse_f64(f)) { | ||
| Some(v) => v, | ||
| None => { | ||
| return ( | ||
| Vec::new(), | ||
| Frame::Error(Bytes::from_static(b"ERR syntax error")), | ||
| ); | ||
| } | ||
| }; | ||
| found_from = true; |
Validate FROMLONLAT coordinates against Redis geo bounds.
This path accepts out-of-range longitude/latitude values, while GEOADD already rejects them. Malformed GEOSEARCH/GEORADIUS queries should fail with the same range error instead of running a search with invalid coordinates.
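A minimal sketch of the requested bounds check, using the standard Redis geo limits (longitude ±180, latitude ±85.05112878); the exact error text should be copied from whatever GEOADD already returns, so the strings here are only placeholders.

```rust
// Illustrative range check mirroring the usual Redis geo limits.
const GEO_LON_MAX: f64 = 180.0;
const GEO_LAT_MAX: f64 = 85.05112878;

fn valid_geo_point(lon: f64, lat: f64) -> bool {
    (-GEO_LON_MAX..=GEO_LON_MAX).contains(&lon)
        && (-GEO_LAT_MAX..=GEO_LAT_MAX).contains(&lat)
}

fn main() {
    assert!(valid_geo_point(13.361389, 38.115556));
    assert!(!valid_geo_point(200.0, 0.0)); // should surface GEOADD's range error
}
```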
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/geo/geo_cmd.rs` around lines 403 - 424, After parsing center_lon
and center_lat in the FROMLONLAT branch, validate they fall within Redis geo
bounds (longitude between -180 and 180, latitude between -85 and 85 or the exact
bounds used by your GEOADD implementation) and return the same Frame::Error used
by GEOADD on out-of-range values instead of proceeding; update the FROMLONLAT
handling in geo_cmd.rs (the branch that sets center_lon, center_lat and
found_from) to perform these range checks and return the identical error
message/byte payload as GEOADD when invalid.
| } else if arg.eq_ignore_ascii_case(b"COUNT") { | ||
| i += 1; | ||
| let c = match args.get(i).and_then(|f| parse_f64(f)) { | ||
| Some(v) if v > 0.0 => v as usize, | ||
| _ => { | ||
| return ( | ||
| Vec::new(), | ||
| Frame::Error(Bytes::from_static(b"ERR syntax error")), | ||
| ); | ||
| } | ||
| }; | ||
| count_limit = Some(c); |
Reject fractional COUNT values.
COUNT is parsed with parse_f64() and then truncated with as usize, so COUNT 1.9 is accepted as 1. That silently changes malformed input instead of returning an error.
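A hedged sketch of a strict COUNT parser that rejects fractional, zero, and negative values instead of truncating them; the helper name is illustrative.

```rust
// Illustrative strict parse for COUNT: only positive decimal integers pass.
fn parse_count(arg: &[u8]) -> Option<usize> {
    let s = std::str::from_utf8(arg).ok()?;
    let n: usize = s.parse().ok()?; // "1.9", "-1", "1e2", "abc" all fail here
    if n > 0 { Some(n) } else { None } // COUNT 0 is a syntax error too
}

fn main() {
    assert_eq!(parse_count(b"10"), Some(10));
    assert_eq!(parse_count(b"1.9"), None);
    assert_eq!(parse_count(b"0"), None);
}
```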
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/geo/geo_cmd.rs` around lines 539 - 550, The COUNT branch
currently uses parse_f64 and truncates via as usize which accepts fractional
inputs (e.g., 1.9 → 1); change the parsing/validation in the COUNT handling (the
block that uses parse_f64, i, and sets count_limit) to only accept positive
integers: either parse the argument as an integer type (usize/usize parsing
helper) or, if you keep parse_f64, explicitly reject values whose fractional
part is non-zero (v.fract() != 0.0) and return the same ERR syntax error; ensure
count_limit is set only when a valid integer > 0 is provided.
        if db.exists(key) {
            // exists() already does lazy expiry + access tracking
            count += 1;
        }
TOUCH never updates last_access.
db.exists() only checks presence/expiry. The command returns the right count, but access metadata stays unchanged, so OBJECT IDLETIME won’t reflect the touch.
💡 Minimal fix
- if db.exists(key) {
- // exists() already does lazy expiry + access tracking
- count += 1;
- }
+ if db.exists(key) {
+ db.touch_access(key);
+ count += 1;
+     }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key.rs` around lines 345 - 348, TOUCH currently uses
db.exists(key) which only checks expiry and does not update access metadata;
change the logic in the TOUCH handler in src/command/key.rs to call the database
touch/update-access function (e.g., db.touch(key) or db.update_last_access(key))
for keys that are present/valid instead of or in addition to db.exists(key),
ensuring you still perform lazy expiry checks before touching so expiry behavior
is preserved and OBJECT IDLETIME reflects the touch.
```rust
    match db.get_stream_mut(key) {
        Ok(Some(stream)) => {
            stream.last_id = id;
            Frame::SimpleString(Bytes::from_static(b"OK"))
```
Don’t let XSETID move last_id backwards.
Later XADD auto-ID generation in this module uses stream.last_id. Lowering it here can make the next generated IDs go backwards or collide with existing entries. Reject IDs smaller than the current last_id before assignment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/stream/stream_write.rs` around lines 849 - 852, The XSETID
handling in the match arm where you call db.get_stream_mut(key) must prevent
moving stream.last_id backwards: before assigning stream.last_id = id in that
block (inside the match arm for Ok(Some(stream))), compare the incoming id
against stream.last_id and if id <= stream.last_id return an error reply (same
error type used elsewhere for invalid IDs) instead of assigning; only set
stream.last_id = id when id is strictly greater than the current stream.last_id
so XADD auto-ID generation cannot regress or collide. Use the existing
response/error pattern used by this module for consistency and reference the
get_stream_mut(key) match arm, stream.last_id, and the XSETID/XADD code paths
when making the change.
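A self-contained sketch of the guard (StreamId and xsetid are stand-ins for the module's real ID type and handler, and the exact error payload should match whatever the module already uses; the prompt above would additionally reject equal IDs, which this sketch does not):

```rust
// StreamId stands in for the module's real ID type, which already needs
// ordering for range queries; the guard only lets last_id move forward.
#[derive(PartialEq, Eq, PartialOrd, Ord, Clone, Copy, Debug)]
struct StreamId {
    ms: u64,
    seq: u64,
}

fn xsetid(last_id: &mut StreamId, new_id: StreamId) -> Result<(), &'static str> {
    // Reject IDs smaller than the current top so XADD auto-generation
    // can never regress or collide with existing entries.
    if new_id < *last_id {
        return Err("ERR The ID specified in XSETID is smaller than the target stream top item");
    }
    *last_id = new_id;
    Ok(())
}

fn main() {
    let mut last = StreamId { ms: 5, seq: 0 };
    assert!(xsetid(&mut last, StreamId { ms: 4, seq: 9 }).is_err()); // smaller: rejected
    assert!(xsetid(&mut last, StreamId { ms: 6, seq: 0 }).is_ok()); // forward: accepted
}
```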
```rust
pub fn bitcount(db: &mut Database, args: &[Frame]) -> Frame {
    if args.is_empty() {
        return err_wrong_args("BITCOUNT");
    }
    let key = match extract_bytes(&args[0]) {
        Some(k) => k,
        None => return err_wrong_args("BITCOUNT"),
    };

    let data = match db.get(key) {
        Some(entry) => match entry.value.as_bytes() {
            Some(v) => v,
            None => {
                return Frame::Error(Bytes::from_static(
                    b"WRONGTYPE Operation against a key holding the wrong kind of value",
                ));
            }
        },
        None => return Frame::Integer(0),
    };

    if data.is_empty() {
        return Frame::Integer(0);
    }

    // Parse optional range
    let (start, end, use_bit) = if args.len() >= 3 {
        let s = match parse_i64(&args[1]) {
            Some(v) => v,
            None => {
                return Frame::Error(Bytes::from_static(
                    b"ERR value is not an integer or out of range",
                ));
            }
        };
        let e = match parse_i64(&args[2]) {
            Some(v) => v,
            None => {
                return Frame::Error(Bytes::from_static(
                    b"ERR value is not an integer or out of range",
                ));
            }
        };
        let use_bit = if args.len() >= 4 {
            let mode = match extract_bytes(&args[3]) {
                Some(m) => m,
                None => return Frame::Error(Bytes::from_static(b"ERR syntax error")),
            };
            if mode.eq_ignore_ascii_case(b"BIT") {
                true
            } else if mode.eq_ignore_ascii_case(b"BYTE") {
                false
            } else {
                return Frame::Error(Bytes::from_static(b"ERR syntax error"));
            }
        } else {
            false
        };
        (s, e, use_bit)
    } else {
        (0i64, -1i64, false)
    };
```
Reject invalid BITCOUNT arities.
Line 195 treats any call with fewer than three args as a full-string count, so BITCOUNT key 0 succeeds instead of erroring. Extra trailing args are also ignored. Line 282 mirrors the same bug in the readonly path.
Suggested guard
```diff
 pub fn bitcount(db: &mut Database, args: &[Frame]) -> Frame {
-    if args.is_empty() {
+    if !matches!(args.len(), 1 | 3 | 4) {
         return err_wrong_args("BITCOUNT");
     }
```
Apply the same guard to bitcount_readonly().
Also applies to: 257-317
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/string/string_bit.rs` around lines 169 - 230, The BITCOUNT
implementation accepts invalid arities (e.g. "BITCOUNT key 0") and silently
treats them as full-string counts; add strict arity checks in bitcount() and
bitcount_readonly(): allow only args.len() == 1 (key only), 3 (key start end),
or 4 (key start end mode); if args.len() == 2 or args.len() > 4 return an
argument error (use err_wrong_args("BITCOUNT") or the existing Frame::Error
pattern), and then parse start/end/mode only when args.len() >= 3 as currently
done.
```rust
    let (signed, bits) = match parse_encoding(enc) {
        Some(v) => v,
        None => {
            return Frame::Error(Bytes::from_static(
                b"ERR Invalid bitfield type. Use something like i8 u8 i16 u16 ...",
            ));
        }
    };
```
🧩 Analysis chain
🌐 Web query:
In Rust, how do 1i64 << 63, 1i64 << 64, and (1i64 << 63) - 1 behave, and what integer width should be used to safely model 2^63 / 2^64 ranges?
💡 Result:
In Rust, the shift operator << on integers panics in debug mode if the right-hand side is greater than or equal to the bit width of the left-hand type (64 for i64), but in release mode it typically behaves like wrapping_shl, using rhs % 64.
- 1i64 << 63: shifts 1 left by 63 positions, setting the sign bit, resulting in i64::MIN (-9223372036854775808). This succeeds because 63 < 64.
- 1i64 << 64: rhs == 64, so panics in debug mode. In release mode, equivalent to a shift by 0 (64 % 64 == 0), resulting in 1.
- (1i64 << 63) - 1: i64::MAX (9223372036854775807), the largest representable i64 value.

To safely model the range 2^63 (9,223,372,036,854,775,808) to 2^64 (18,446,744,073,709,551,616), use u64 for [2^63, 2^64-1] since u64::MAX == 2^64 - 1, or i128/u128 for signed/unsigned values including exactly 2^64 if needed. u64 is sufficient and efficient for this unsigned range.
Citations:
- 1: https://doc.rust-lang.org/std/primitive.i64.html
- 2: https://doc.rust-lang.org/stable/std/primitive.i64.html
- 3: https://doc.rust-lang.org/stable/reference/expressions/operator-expr.html?highlight=overflow
- 4: https://doc.rust-lang.org/stable/reference/expressions/operator-expr.html
- 5: https://stackoverflow.com/questions/51571066/what-are-the-exact-semantics-of-rusts-shift-operators
- 6: https://doc.rust-lang.org/stable/reference/types/numeric.html
- 7: https://www.kernel.org/doc/rustdoc/next/core/primitive.i64.html
- 8: https://doc.rust-lang.org/std/primitive.u64.html
🏁 Script executed:
# Read the file and examine lines around the ones mentioned
head -n 1050 src/command/string/string_bit.rs | tail -n +800 | cat -n

Repository: pilotspace/moon
Length of output: 10042
🏁 Script executed:
# Search for bf_incr and bf_saturate functions
rg -n "fn bf_incr|fn bf_saturate" src/command/string/string_bit.rsRepository: pilotspace/moon
Length of output: 211
🏁 Script executed:
# Extract just the critical lines from bf_incr and bf_saturate
sed -n '166,196p; 198,212p' src/command/string/string_bit.rs | cat -n

Repository: pilotspace/moon
Length of output: 1797
🏁 Script executed:
# Check if there are any existing guards or tests for i63/i64
rg -n "bits.*63|bits.*64|i63|i64" src/command/string/string_bit.rs | head -20Repository: pilotspace/moon
Length of output: 1007
🏁 Script executed:
# Check the validation logic more carefully
sed -n '101,115p' src/command/string/string_bit.rs | cat -n

Repository: pilotspace/moon
Length of output: 653
🏁 Script executed:
# Also check if there are any tests covering i63 or i64
rg -n "i63|i64|bits.*6[34]" src/command/string/string_bit.rs -A2 -B2Repository: pilotspace/moon
Length of output: 10277
Fix 1i64 << bits shift overflow in signed bitfield wrap-around calculation.
In bf_incr() at line 973, computing 1i64 << bits overflows the shift when bits==64 (it panics in debug builds and degenerates to a shift by 0, i.e. 1, in release builds). The range needed for signed wrap-around simply cannot be represented in i64 at this width. Replace it with an i128 intermediate, e.g. let range = 1i128 << bits; and adjust the modulo arithmetic accordingly, or reject bits==64 in validation until it is implemented safely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/string/string_bit.rs` around lines 821 - 828, The signed bitfield
wrap-around uses 1i64 << bits in bf_incr(), which overflows when bits == 64;
update bf_incr() to avoid shifting into i64 by using a wider intermediate (e.g.,
compute range as 1i128 << bits and perform modulo/arithmetic in i128 then cast
back), or alternatively reject 64-bit signed fields in parse_encoding()
validation; specifically modify bf_incr()’s range and wrap arithmetic to use
i128 intermediates (adjusting all temporaries and casts) or add a guard in
parse_encoding()/wherever parse_encoding() is used to return None for signed
bits == 64 so the earlier error path triggers. Ensure references are to
bf_incr(), parse_encoding(), bits, and signed so the change is localized.
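A standalone illustration of the i128 route the prompt describes (wrap_signed and its parameters are stand-ins, not the module's actual helpers):

```rust
// Sketch: signed wrap-around with i128 intermediates so a 64-bit field width
// never shifts an i64 by 64. `bits` is assumed to be in 1..=64.
fn wrap_signed(value: i64, incr: i64, bits: u32) -> i64 {
    let range: i128 = 1i128 << bits;        // 2^bits, still representable when bits == 64
    let min: i128 = -(1i128 << (bits - 1)); // -2^(bits-1)
    let sum = value as i128 + incr as i128; // cannot overflow in i128
    ((sum - min).rem_euclid(range) + min) as i64
}

fn main() {
    assert_eq!(wrap_signed(120, 20, 8), -116); // i8 semantics: 120 + 20 wraps around
    assert_eq!(wrap_signed(i64::MAX, 1, 64), i64::MIN); // full width matches wrapping_add
}
```

The alternative the review also accepts is to simply reject signed 64-bit encodings in parse_encoding() until the wider arithmetic lands.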
```rust
    let mut want_len = false;
    let mut i = 2;
    while i < args.len() {
        let arg = match extract_bytes(&args[i]) {
            Some(a) => a,
            None => return Frame::Error(Bytes::from_static(b"ERR syntax error")),
        };
        if arg.eq_ignore_ascii_case(b"LEN") {
            want_len = true;
        }
        i += 1;
    }
```
Don’t silently ignore unsupported LCS options.
This parser only handles LEN; IDX, MINMATCHLEN, WITHMATCHLEN, or arbitrary garbage are all accepted and then ignored. Either implement those branches or return a syntax error for anything outside the supported option set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/string/string_read.rs` around lines 394 - 405, The loop that
parses options (using args, extract_bytes, want_len) currently accepts and
ignores any token other than "LEN"; change it to validate supported options and
return a syntax error for unsupported ones: inside the while loop after
extracting arg, match arg (case-insensitive) against the supported set ("LEN",
"IDX", "MINMATCHLEN", "WITHMATCHLEN") and handle each branch properly—set
want_len for "LEN", parse and validate the expected numeric/value arguments for
"IDX" and "MINMATCHLEN" (using extract_bytes and converting to integer), set
flags for "WITHMATCHLEN" if present, and for any other token return
Frame::Error(Bytes::from_static(b"ERR syntax error")) so garbage or unsupported
options are not silently ignored. Ensure to advance i by the correct number of
consumed args when parsing multi-arg options.
Previously, Frame::Null results from GET pattern lookups were silently dropped, causing the stored list length to diverge from the returned count. Redis stores nil results as empty strings; Moon now does the same.
…MPLES, geo test layout
- Implement CLIENT PAUSE timeout [WRITE|ALL] and CLIENT UNPAUSE
- Fix COPY same-key returning 0 instead of 1 with REPLACE flag
- RANDOMKEY now filters expired keys instead of returning stale entries
- MEMORY USAGE accepts optional SAMPLES count parameter
- Fix XSETID acl category from STR to STM
- Isolate config_rewrite test to unique temp dir
- Move geo command tests from geo_cmd.rs to geo/mod.rs per split convention
Actionable comments posted: 5
♻️ Duplicate comments (3)
src/command/geo/geo_cmd.rs (3)
324-330: ⚠️ Potential issue | 🟠 Major
Error from geosearch_inner is silently ignored. The second return value (which may contain a syntax/validation error) is discarded. If geosearch_inner fails, this returns 0 and removes dest instead of propagating the error.
🐛 Propagate errors before checking matches
```diff
     // Shift args so args[0] is now the source key
-    let (matches, _) = geosearch_inner(db, &args[1..], true);
+    let (matches, result) = geosearch_inner(db, &args[1..], true);
+    if let Frame::Error(_) = result {
+        return result;
+    }
     if matches.is_empty() {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/geo_cmd.rs` around lines 324 - 330, The code discards the error returned by geosearch_inner and proceeds to remove dest and return 0; capture the second return value (e.g., let (matches, err_opt) = geosearch_inner(...)), check if the error is present before inspecting matches, and if so return/propagate an error Frame (or convert the error into the appropriate Frame error) instead of deleting dest; update the block around geosearch_inner, matches, and dest removal so error handling happens first.
539-550: ⚠️ Potential issue | 🟡 Minor
Fractional COUNT values are silently truncated instead of rejected. COUNT 1.9 is accepted as 1 because parse_f64() succeeds and the value is cast with as usize. Redis rejects non-integer COUNT.
🐛 Reject fractional values
```diff
         } else if arg.eq_ignore_ascii_case(b"COUNT") {
             i += 1;
-            let c = match args.get(i).and_then(|f| parse_f64(f)) {
-                Some(v) if v > 0.0 => v as usize,
+            let c = match args.get(i).and_then(|f| parse_f64(f)) {
+                Some(v) if v > 0.0 && v.fract() == 0.0 => v as usize,
                 _ => {
                     return (
                         Vec::new(),
                         Frame::Error(Bytes::from_static(b"ERR syntax error")),
                     );
                 }
             };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/geo_cmd.rs` around lines 539 - 550, The COUNT branch accepts fractional values because it parses with parse_f64 and casts with `as usize`; modify the validation in the `else if arg.eq_ignore_ascii_case(b"COUNT")` block (the code that uses `args.get(i).and_then(|f| parse_f64(f))`) to explicitly reject non-integer values: after parsing the f64, ensure the value is > 0 and has no fractional part (e.g., `v.fract() == 0.0`) before converting to usize; if the value has a fractional part or is invalid, return the same syntax error (Frame::Error(...)) instead of silently truncating, and set `count_limit = Some(c)` only when the check passes.
403-427: ⚠️ Potential issue | 🟡 Minor
Missing coordinate bounds validation for FROMLONLAT. GEOADD validates longitude [-180, 180] and latitude [-85.05112878, 85.05112878], but FROMLONLAT accepts any values. Out-of-range center coordinates will produce undefined search behavior.
🛡️ Add bounds validation after parsing
```diff
         center_lat = match args.get(i).and_then(|f| parse_f64(f)) {
             Some(v) => v,
             None => {
                 return (
                     Vec::new(),
                     Frame::Error(Bytes::from_static(b"ERR syntax error")),
                 );
             }
         };
+        if !(-180.0..=180.0).contains(&center_lon)
+            || !(-85.05112878..=85.05112878).contains(&center_lat)
+        {
+            return (
+                Vec::new(),
+                Frame::Error(Bytes::from_static(
+                    b"ERR value is not a valid float or out of range",
+                )),
+            );
+        }
         found_from = true;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/geo_cmd.rs` around lines 403 - 427, The FROMLONLAT parsing branch currently accepts any numbers for center_lon and center_lat; add bounds checks immediately after the parse_f64 matches in the FROMLONLAT block (the same limits used by GEOADD): require center_lon ∈ [-180.0, 180.0] and center_lat ∈ [-85.05112878, 85.05112878]; if either is out of range return the same Frame::Error (e.g., ERR syntax error) as other parse failures. Update the logic inside the FROMLONLAT branch (where center_lon and center_lat are assigned after parse_f64) to validate these ranges before setting found_from = true.
🧹 Nitpick comments (5)
src/command/geo/mod.rs (1)
71-83: Simplify string conversion since alphabet is ASCII-only.
The ALPHABET contains only ASCII characters, so from_utf8_lossy is unnecessary and creates an intermediate Cow. Since this is result-building (not hot dispatch), a simpler approach works.
♻️ Suggested simplification
```diff
 pub(crate) fn geohash_to_string(score: f64) -> String {
     const ALPHABET: &[u8] = b"0123456789bcdefghjkmnpqrstuvwxyz";
     let hash = score as u64;
     // Redis uses 11 characters (55 bits, but we only have 52 so pad with 0)
     let padded = hash << 3; // shift left 3 to fill 55 bits
     let mut result = [0u8; 11];
     for i in 0..11 {
         let idx = ((padded >> (50 - i * 5)) & 0x1F) as usize;
         result[i] = ALPHABET[idx];
     }
-    String::from_utf8_lossy(&result).to_string()
+    // SAFETY: ALPHABET contains only ASCII bytes, so result is always valid UTF-8
+    unsafe { String::from_utf8_unchecked(result.to_vec()) }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/mod.rs` around lines 71 - 83, The conversion at the end of geohash_to_string uses from_utf8_lossy unnecessarily; replace the String::from_utf8_lossy(&result).to_string() call with a direct UTF-8 conversion that avoids the Cow, e.g. convert the byte array from result into a String via std::str::from_utf8(&result).unwrap().to_string() or String::from_utf8(result.to_vec()).unwrap(), keeping ALPHABET and the byte-array construction in place and retaining the function geohash_to_string behavior.

src/command/geo/geo_cmd.rs (3)
236-264: Unnecessary map clone; consider extracting scores in borrow scope.
Same pattern as geodist: the entire map is cloned when scores could be extracted while holding the borrow.
♻️ Extract scores within borrow scope
```diff
-    let members_map = match db.get_sorted_set(key) {
-        Ok(Some((members, _))) => Some(members.clone()),
-        Ok(None) => None,
-        Err(e) => return e,
-    };
-
-    let mut results = Vec::with_capacity(args.len() - 1);
-    for arg in &args[1..] {
+    let scores: Vec<Option<f64>> = {
+        let members_map = match db.get_sorted_set(key) {
+            Ok(Some((members, _))) => Some(members),
+            Ok(None) => None,
+            Err(e) => return e,
+        };
+        args[1..]
+            .iter()
+            .map(|arg| {
+                let member = extract_bytes(arg)?;
+                members_map.as_ref()?.get(member).copied()
+            })
+            .collect()
+    };
+
+    let mut results = Vec::with_capacity(scores.len());
+    for opt_score in scores {
         // ... build results from scores
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/geo_cmd.rs` around lines 236 - 264, The code clones the entire members map into members_map when calling db.get_sorted_set; instead keep the borrow and extract scores inside that borrow scope to avoid the clone: match on db.get_sorted_set(key) to get Ok(Some((members, _))) and then, while still borrowing `members`, iterate over args[1..], use `extract_bytes` to get `member` and look up with `members.get(member)` to call `geohash_to_string(score)` and push results into `results` (handling Ok(None) and Err(e) as before), finally return Frame::Array(results.into()); do not call .clone() on `members`.
203-216: Unnecessary full map clone for two lookups.
The entire members map is cloned just to perform two get() lookups. This can be avoided by extracting both scores while holding the borrow.
♻️ Avoid cloning by extracting scores in one borrow scope
```diff
-    let members_map = match db.get_sorted_set(key) {
-        Ok(Some((members, _))) => members.clone(),
-        Ok(None) => return Frame::Null,
-        Err(e) => return e,
-    };
-
-    let score1 = match members_map.get(m1) {
-        Some(&s) => s,
-        None => return Frame::Null,
-    };
-    let score2 = match members_map.get(m2) {
-        Some(&s) => s,
-        None => return Frame::Null,
-    };
+    let (score1, score2) = match db.get_sorted_set(key) {
+        Ok(Some((members, _))) => {
+            let s1 = match members.get(m1) {
+                Some(&s) => s,
+                None => return Frame::Null,
+            };
+            let s2 = match members.get(m2) {
+                Some(&s) => s,
+                None => return Frame::Null,
+            };
+            (s1, s2)
+        }
+        Ok(None) => return Frame::Null,
+        Err(e) => return e,
+    };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/geo_cmd.rs` around lines 203 - 216, The code currently clones the entire members map into members_map to read two scores; instead, avoid the clone by matching on db.get_sorted_set(key) and binding the borrowed map (e.g., let members = members) then perform two lookups on that borrow: retrieve score1 via members.get(m1) and score2 via members.get(m2), returning Frame::Null if either is None and preserving the existing Err(e) and Ok(None) handling; update the variables score1 and score2 to come from those borrows rather than from a cloned members_map.
583-608: Full map clone for iteration could be avoided.
Line 585 clones the entire members map. This could be refactored to extract member/score pairs within the borrow scope, then process them after releasing the borrow. However, this is a pattern used in several GEO commands and the refactoring complexity may not be worth it for this PR.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/geo/geo_cmd.rs` around lines 583 - 608, Avoid cloning the entire members_map; instead process the entries while you still have the borrow from db.get_sorted_set. Remove members.clone() and move the filtering loop into the Ok(Some((members, _))) match arm (iterate over members.iter() or members.into_iter() as appropriate), compute geohash_decode(...) and haversine_distance(...) there, and push results into the existing matches Vec<(Bytes, f64, f64, f64, f64)>; ensure you only clone each member when you need to own it for matches.push((member.clone(), ...)) and do not clone the whole map up-front. Use the existing symbols members_map / db.get_sorted_set / geohash_decode / haversine_distance / matches to locate and refactor the code.

src/command/key_extra.rs (1)
236-236: Consider itoa for integer-to-bytes conversion.
The to_string() call allocates a String for each integer. While SORT isn't a hot-path command, using itoa would be more efficient and consistent with the codebase guidelines.
```diff
-RedisValueRef::SetIntset(is) => is.iter().map(|v| Bytes::from(v.to_string())).collect(),
+RedisValueRef::SetIntset(is) => {
+    is.iter().map(|v| {
+        let mut buf = itoa::Buffer::new();
+        Bytes::copy_from_slice(buf.format(v).as_bytes())
+    }).collect()
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key_extra.rs` at line 236, Replace the per-integer allocation via v.to_string() in the RedisValueRef::SetIntset arm with itoa to avoid allocating a String for each integer; specifically, in the RedisValueRef::SetIntset(is) => is.iter().map(|v| Bytes::from(v.to_string())).collect() expression, create an itoa::Buffer (e.g., let mut buf = itoa::Buffer::new()) and use buf.format(v) to get a &str and pass that into Bytes::from (e.g., Bytes::from(buf.format(v))); ensure the itoa crate is available in Cargo.toml if not already.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/command/config.rs`:
- Around line 260-267: The current config rewrite builds a fixed tmp_path
("moon.conf.tmp") which causes concurrent CONFIG REWRITE calls to collide when
writing/renaming; update the logic that constructs tmp_path (the variable
tmp_path used with std::fs::write and std::fs::rename) to create a unique
temporary filename per operation (e.g., include a UUID/timestamp + process id or
use a secure temp file API such as tempfile::NamedTempFile) and then perform the
same write -> rename -> cleanup flow using that unique temp file, ensuring any
error messages that reference failures from std::fs::write or std::fs::rename
still include the captured error details.
In `@src/command/key_extra.rs`:
- Around line 78-121: Add unit tests exercising the memory_usage function:
create tests that construct a mutable Database, insert a key/value entry and
call memory_usage(&mut db, &args) to assert a positive integer Frame for an
existing key; call memory_usage with a non-existent key to assert Frame::Null;
and add tests for SAMPLES parsing on memory_usage (valid count yields same
behavior, invalid count and missing value yield err_wrong_args results). Use the
same helper utilities used by other command tests to build Frame args and
Database entries, and assert results by matching memory_usage(...) against
Frame::Integer(..), Frame::Null, or the err_wrong_args("MEMORY") response to
cover valid, missing, and bad SAMPLES cases.
- Around line 256-272: The SORT BY lookup currently maps missing external keys
to None which causes missing values to sort as nulls; change the mapping in the
sort_keys construction (the closure that calls apply_pattern and db.get) so that
if db.get(&lookup_key) yields None (or e.value.as_bytes() is None) you return
Some(Bytes::from_static(b"0")) instead of None, ensuring sort_keys contains
Some("0") for missing BY keys; update the code paths that build sort_keys (the
iterator mapping over elements in key_extra.rs) and add test cases in
scripts/test-commands.sh covering SORT BY with missing external keys for both
ASC and DESC to validate Redis-like behavior.
In `@src/server/conn/handler_single.rs`:
- Around line 998-1029: The CLIENT PAUSE gate (logic using
runtime_config.read(), client_pause_deadline_ms, client_pause_write_only and
metadata::is_write) must be evaluated earlier in the command handling flow —
before any early `continue` branches (e.g., CONFIG, SAVE, PUBLISH, MULTI/EXEC,
WATCH, CLIENT) so that the pause truly blocks all commands (and prevents EXEC
from committing queued writes during a write pause). Move the entire
pause_wait_ms computation and the subsequent sleep loop to the top of the
per-command phase-1 handling (immediately after reading the command and before
any branch-specific early-returns/continues), keeping the same checks for
client_pause_write_only and using metadata::is_write to decide whether to pause
writes-only. Ensure the relocated code still uses the same runtime_config and
current_time_ms checks and breaks out immediately when the pause expires.
- Around line 501-550: The CLIENT PAUSE / UNPAUSE branches (the blocks whose
conditions use sub_bytes.eq_ignore_ascii_case(b"PAUSE") and b"UNPAUSE" and which
mutate runtime_config.client_pause_deadline_ms / client_pause_write_only)
execute before the global permission gate and thus allow unauthorized toggling;
fix by invoking the same ACL/permission check that other commands use (the
general permission gate code path) at the top of these branches and return an
error Frame if the check fails, i.e., perform the existing permission check
function/method used elsewhere before changing runtime_config and only proceed
to set client_pause_deadline_ms/client_pause_write_only or reply "OK" when the
check permits the operation.
---
Duplicate comments:
In `@src/command/geo/geo_cmd.rs`:
- Around line 324-330: The code discards the error returned by geosearch_inner
and proceeds to remove dest and return 0; capture the second return value (e.g.,
let (matches, err_opt) = geosearch_inner(...)), check if the error is present
before inspecting matches, and if so return/propagate an error Frame (or convert
the error into the appropriate Frame error) instead of deleting dest; update the
block around geosearch_inner, matches, and dest removal so error handling
happens first.
- Around line 539-550: The COUNT branch accepts fractional values because it
parses with parse_f64 and casts with `as usize`; modify the validation in the
`else if arg.eq_ignore_ascii_case(b"COUNT")` block (the code that uses
`args.get(i).and_then(|f| parse_f64(f))`) to explicitly reject non-integer
values: after parsing the f64, ensure the value is > 0 and has no fractional
part (e.g., `v.fract() == 0.0`) before converting to usize; if the value has a
fractional part or is invalid, return the same syntax error (Frame::Error(...))
instead of silently truncating, and set `count_limit = Some(c)` only when the
check passes.
- Around line 403-427: The FROMLONLAT parsing branch currently accepts any
numbers for center_lon and center_lat; add bounds checks immediately after the
parse_f64 matches in the FROMLONLAT block (the same limits used by GEOADD):
require center_lon ∈ [-180.0, 180.0] and center_lat ∈ [-85.05112878,
85.05112878]; if either is out of range return the same Frame::Error (e.g., ERR
syntax error) as other parse failures. Update the logic inside the FROMLONLAT
branch (where center_lon and center_lat are assigned after parse_f64) to
validate these ranges before setting found_from = true.
---
Nitpick comments:
In `@src/command/geo/geo_cmd.rs`:
- Around line 236-264: The code clones the entire members map into members_map
when calling db.get_sorted_set; instead keep the borrow and extract scores
inside that borrow scope to avoid the clone: match on db.get_sorted_set(key) to
get Ok(Some((members, _))) and then, while still borrowing `members`, iterate
over args[1..], use `extract_bytes` to get `member` and look up with
`members.get(member)` to call `geohash_to_string(score)` and push results into
`results` (handling Ok(None) and Err(e) as before), finally return
Frame::Array(results.into()); do not call .clone() on `members`.
- Around line 203-216: The code currently clones the entire members map into
members_map to read two scores; instead, avoid the clone by matching on
db.get_sorted_set(key) and binding the borrowed map (e.g., let members =
members) then perform two lookups on that borrow: retrieve score1 via
members.get(m1) and score2 via members.get(m2), returning Frame::Null if either
is None and preserving the existing Err(e) and Ok(None) handling; update the
variables score1 and score2 to come from those borrows rather than from a cloned
members_map.
- Around line 583-608: Avoid cloning the entire members_map; instead process the
entries while you still have the borrow from db.get_sorted_set. Remove
members.clone() and move the filtering loop into the Ok(Some((members, _)))
match arm (iterate over members.iter() or members.into_iter() as appropriate),
compute geohash_decode(...) and haversine_distance(...) there, and push results
into the existing matches Vec<(Bytes, f64, f64, f64, f64)>; ensure you only
clone each member when you need to own it for matches.push((member.clone(),
...)) and do not clone the whole map up-front. Use the existing symbols
members_map / db.get_sorted_set / geohash_decode / haversine_distance / matches
to locate and refactor the code.
In `@src/command/geo/mod.rs`:
- Around line 71-83: The conversion at the end of geohash_to_string uses
from_utf8_lossy unnecessarily; replace the
String::from_utf8_lossy(&result).to_string() call with a direct UTF-8 conversion
that avoids the Cow, e.g. convert the byte array from result into a String via
std::str::from_utf8(&result).unwrap().to_string() or
String::from_utf8(result.to_vec()).unwrap(), keeping ALPHABET and the byte-array
construction in place and retaining the function geohash_to_string behavior.
In `@src/command/key_extra.rs`:
- Line 236: Replace the per-integer allocation via v.to_string() in the
RedisValueRef::SetIntset arm with itoa to avoid allocating a String for each
integer; specifically, in the RedisValueRef::SetIntset(is) => is.iter().map(|v|
Bytes::from(v.to_string())).collect() expression, create an itoa::Buffer (e.g.,
let mut buf = itoa::Buffer::new()) and use buf.format(v) to get a &str and pass
that into Bytes::from (e.g., Bytes::from(buf.format(v))); ensure the itoa crate
is available in Cargo.toml if not already.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 57f2ed88-a7e0-4151-9bdd-78abbaf6b4fd
📒 Files selected for processing (7)
- src/command/config.rs
- src/command/geo/geo_cmd.rs
- src/command/geo/mod.rs
- src/command/key_extra.rs
- src/command/metadata.rs
- src/server/conn/handler_single.rs
- src/storage/db.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src/storage/db.rs
- src/command/metadata.rs
```rust
    let tmp_path = std::path::Path::new(dir).join("moon.conf.tmp");

    if let Err(e) = std::fs::write(&tmp_path, content.as_bytes()) {
        return Frame::Error(Bytes::from(format!("ERR failed to write config: {e}")));
    }
    if let Err(e) = std::fs::rename(&tmp_path, &conf_path) {
        let _ = std::fs::remove_file(&tmp_path);
        return Frame::Error(Bytes::from(format!("ERR failed to rename config: {e}")));
```
Use a unique temp filename to avoid concurrent rewrite collisions.
CONFIG REWRITE currently uses a shared temp path (moon.conf.tmp). Concurrent calls can clobber each other and make one call fail on Line 265 even when content is valid.
💡 Suggested fix
```diff
-    let tmp_path = std::path::Path::new(dir).join("moon.conf.tmp");
+    let nonce = std::time::SystemTime::now()
+        .duration_since(std::time::UNIX_EPOCH)
+        .unwrap_or_default()
+        .as_nanos();
+    let tmp_path = std::path::Path::new(dir).join(format!(
+        "moon.conf.tmp.{}.{}",
+        std::process::id(),
+        nonce
+    ));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/config.rs` around lines 260 - 267, The current config rewrite
builds a fixed tmp_path ("moon.conf.tmp") which causes concurrent CONFIG REWRITE
calls to collide when writing/renaming; update the logic that constructs
tmp_path (the variable tmp_path used with std::fs::write and std::fs::rename) to
create a unique temporary filename per operation (e.g., include a UUID/timestamp
+ process id or use a secure temp file API such as tempfile::NamedTempFile) and
then perform the same write -> rename -> cleanup flow using that unique temp
file, ensuring any error messages that reference failures from std::fs::write or
std::fs::rename still include the captured error details.
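If the tempfile-crate route is preferred, a minimal sketch (assuming the tempfile crate is added as a dependency; write_config_atomically is an illustrative name, not an existing function):

```rust
use std::io::Write;
use tempfile::NamedTempFile;

// Each call gets its own uniquely named temp file in the target directory,
// so concurrent CONFIG REWRITEs cannot clobber one another's temp path.
fn write_config_atomically(
    dir: &str,
    conf_path: &std::path::Path,
    content: &str,
) -> std::io::Result<()> {
    let mut tmp = NamedTempFile::new_in(dir)?;
    tmp.write_all(content.as_bytes())?;
    // persist() renames the temp file over the destination; the rename is
    // atomic when the temp dir and destination share a filesystem.
    tmp.persist(conf_path).map_err(|e| e.error)?;
    Ok(())
}
```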
```rust
/// MEMORY USAGE key [SAMPLES count]
///
/// Returns the number of bytes a key and its value require to be stored.
pub fn memory_usage(db: &mut Database, args: &[Frame]) -> Frame {
    if args.is_empty() {
        return err_wrong_args("MEMORY");
    }
    let key = match extract_key(&args[0]) {
        Some(k) => k,
        None => return err_wrong_args("MEMORY"),
    };

    // Parse optional SAMPLES count; reject unknown trailing args
    let mut i = 1;
    let mut _samples: usize = 5; // default sample count (like Redis)
    while i < args.len() {
        let arg = match extract_key(&args[i]) {
            Some(a) => a,
            None => return err_wrong_args("MEMORY"),
        };
        if arg.eq_ignore_ascii_case(b"SAMPLES") {
            i += 1;
            let count_arg = match args.get(i).and_then(|f| extract_key(f)) {
                Some(c) => c,
                None => return err_wrong_args("MEMORY"),
            };
            match std::str::from_utf8(count_arg).ok().and_then(|s| s.parse::<usize>().ok()) {
                Some(c) if c > 0 => _samples = c,
                _ => return err_wrong_args("MEMORY"),
            }
        } else {
            return err_wrong_args("MEMORY");
        }
        i += 1;
    }

    match db.get(key) {
        Some(entry) => {
            let mem = key.len() + entry.value.estimate_memory() + 128; // same as entry_overhead
            Frame::Integer(mem as i64)
        }
        None => Frame::Null,
    }
}
```
Add unit tests for memory_usage() command.
The memory_usage() function lacks unit tests. Per coding guidelines, every new command needs at least one unit test. Consider adding tests for:
- Key exists: returns positive integer
- Key missing: returns Null
- SAMPLES parameter parsing (valid count, invalid count, missing value)
Based on learnings: "Every new command needs at least one unit test and one consistency test entry in scripts/test-consistency.sh and scripts/test-commands.sh."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key_extra.rs` around lines 78 - 121, Add unit tests exercising
the memory_usage function: create tests that construct a mutable Database,
insert a key/value entry and call memory_usage(&mut db, &args) to assert a
positive integer Frame for an existing key; call memory_usage with a
non-existent key to assert Frame::Null; and add tests for SAMPLES parsing on
memory_usage (valid count yields same behavior, invalid count and missing value
yield err_wrong_args results). Use the same helper utilities used by other
command tests to build Frame args and Database entries, and assert results by
matching memory_usage(...) against Frame::Integer(..), Frame::Null, or the
err_wrong_args("MEMORY") response to cover valid, missing, and bad SAMPLES
cases.
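A rough shape for such a test, under the assumption that this module's existing COPY/SORT tests expose helpers along these lines (bs() for building bulk-string frames and Database::new(); insert_string() is a placeholder for whatever setup helper those tests actually use):

```rust
#[test]
fn memory_usage_reports_existing_missing_and_samples() {
    let mut db = Database::new();
    insert_string(&mut db, b"k", b"hello"); // placeholder for the real setup helper

    // Existing key: positive byte estimate
    assert!(matches!(memory_usage(&mut db, &[bs(b"k")]), Frame::Integer(n) if n > 0));
    // Missing key: Null
    assert!(matches!(memory_usage(&mut db, &[bs(b"missing")]), Frame::Null));
    // SAMPLES: valid count accepted, bad or missing count rejected
    assert!(matches!(memory_usage(&mut db, &[bs(b"k"), bs(b"SAMPLES"), bs(b"10")]), Frame::Integer(_)));
    assert!(matches!(memory_usage(&mut db, &[bs(b"k"), bs(b"SAMPLES"), bs(b"bad")]), Frame::Error(_)));
    assert!(matches!(memory_usage(&mut db, &[bs(b"k"), bs(b"SAMPLES")]), Frame::Error(_)));
}
```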
```rust
    let sort_keys: Vec<Option<Bytes>> = if let Some(pattern) = by_pattern {
        if pattern == b"nosort" {
            // BY nosort = skip sorting
            elements.iter().map(|_| None).collect()
        } else {
            elements
                .iter()
                .map(|elem| {
                    let lookup_key = apply_pattern(pattern, elem);
                    db.get(&lookup_key)
                        .and_then(|e| e.value.as_bytes().map(|b| Bytes::copy_from_slice(b)))
                })
                .collect()
        }
    } else {
        elements.iter().map(|e| Some(e.clone())).collect()
    };
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for any existing tests or documentation about BY pattern missing key behavior
rg -n "BY.*nosort|BY pattern|missing.*BY" --type rs -C 3 | head -60
# Check if there are consistency tests for SORT BY with missing external keys
rg -n "SORT.*BY" scripts/ | head -20Repository: pilotspace/moon
Length of output: 85
🏁 Script executed:
#!/bin/bash
# Search for SORT implementation and tests with proper syntax
rg -n "BY.*nosort|by_pattern|sort_keys" -C 3 | head -80
# Check for SORT test cases
rg -n "SORT.*BY" scripts/ 2>/dev/null | head -30Repository: pilotspace/moon
Length of output: 2718
🏁 Script executed:
# View the actual implementation context around lines 256-301
sed -n '240,310p' src/command/key_extra.rs

Repository: pilotspace/moon
Length of output: 2737
🏁 Script executed:
# Search for any Redis documentation or comments about SORT BY behavior
rg -n "missing.*key|BY pattern|weight.*0" src/command/key_extra.rsRepository: pilotspace/moon
Length of output: 219
🏁 Script executed:
#!/bin/bash
# Search for test cases related to SORT BY
find scripts/ -name "*.sh" -type f | xargs rg -l "SORT"
# Look for test-consistency.sh entries with SORT BY
rg -n "SORT.*BY|by.*pattern" scripts/test-consistency.sh 2>/dev/null | head -20Repository: pilotspace/moon
Length of output: 111
🌐 Web query:
Redis SORT command BY pattern missing key behavior default value
💡 Result:
In the Redis SORT command with BY pattern, when the generated key from the pattern does not exist (missing key), Redis treats it as having a value of 0 for numeric sorting (default mode). This causes elements whose corresponding BY key is missing to sort as if their weight is 0, typically appearing first in ASC order or last in DESC order. This behavior is confirmed by official documentation stating that SORT uses double precision floating point numbers for comparison by default, and non-existent keys in BY patterns are handled as 0 (equivalent to an empty string converting to 0.0 in strtod). For ALPHA mode, missing keys would sort lexicographically as empty strings. Note that using a deliberately non-existent fixed key like BY nosort skips sorting entirely, preserving original order, but for pattern-generated keys that happen to miss for some elements, sorting proceeds with 0 values for those. Sources: Official Redis SORT docs (results 6,12), where numeric default is specified and non-existent BY handling implied; secondary explanations in tutorials (1,13) align with 0 treatment for missing weights.
Citations:
- 1: https://redis.io/docs/latest/commands/sort/
- 2: https://redis.io/docs/latest/commands/sort
- 3: https://redis.io/ebook/part-2-core-concepts/chapter-3-commands-in-redis/3-7-other-commands/3-7-1-sorting/
- 4: https://database.guide/redis-sort-command-explained/
🏁 Script executed:
#!/bin/bash
# Check test-consistency.sh for SORT BY tests
cat scripts/test-consistency.sh | head -100

Repository: pilotspace/moon
Length of output: 3090
🏁 Script executed:
#!/bin/bash
# Search for SORT tests in test-commands.sh
rg -n "SORT" scripts/test-commands.sh -A 5 -B 1 | head -80Repository: pilotspace/moon
Length of output: 1205
🏁 Script executed:
#!/bin/bash
# Check if there's ASC/DESC testing in sort implementation
rg -n "asc|ASC|desc|DESC" src/command/key_extra.rs | head -20Repository: pilotspace/moon
Length of output: 987
Fix SORT BY pattern behavior for missing external keys to match Redis semantics.
When a BY pattern key doesn't exist, the implementation treats it as None, which sorts to the end via the comparator (lines 298–301). However, Redis defaults missing BY keys to 0.0 for numeric sorting, causing them to appear first in ASC order and last in DESC order.
Current code fails in ASC mode: missing keys incorrectly sort to the end instead of the beginning (where 0.0 belongs). DESC mode works by accident due to the reversed comparison at line 318.
Store Some(Bytes::from_static(b"0")) for missing BY pattern lookups instead of None (line 266) to match Redis semantics. Also consider adding test coverage in scripts/test-commands.sh for SORT BY with missing external keys in both ASC and DESC modes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key_extra.rs` around lines 256 - 272, The SORT BY lookup
currently maps missing external keys to None which causes missing values to sort
as nulls; change the mapping in the sort_keys construction (the closure that
calls apply_pattern and db.get) so that if db.get(&lookup_key) yields None (or
e.value.as_bytes() is None) you return Some(Bytes::from_static(b"0")) instead of
None, ensuring sort_keys contains Some("0") for missing BY keys; update the code
paths that build sort_keys (the iterator mapping over elements in key_extra.rs)
and add test cases in scripts/test-commands.sh covering SORT BY with missing
external keys for both ASC and DESC to validate Redis-like behavior.
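A sketch of the lookup closure with that default, following the snippet shown earlier (numeric sorts only; an ALPHA sort would arguably want an empty string instead):

```rust
elements
    .iter()
    .map(|elem| {
        let lookup_key = apply_pattern(pattern, elem);
        // Missing external keys weigh 0, mirroring Redis' numeric SORT BY default.
        Some(
            db.get(&lookup_key)
                .and_then(|e| e.value.as_bytes().map(|b| Bytes::copy_from_slice(b)))
                .unwrap_or_else(|| Bytes::from_static(b"0")),
        )
    })
    .collect()
```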
```rust
if sub_bytes.eq_ignore_ascii_case(b"PAUSE") {
    // CLIENT PAUSE timeout [WRITE|ALL]
    if cmd_args.len() < 2 {
        responses.push(Frame::Error(Bytes::from_static(
            b"ERR wrong number of arguments for 'CLIENT PAUSE' command",
        )));
    } else {
        let timeout_ms = match extract_bytes(&cmd_args[1]) {
            Some(b) => std::str::from_utf8(&b).ok().and_then(|s| s.parse::<u64>().ok()),
            None => None,
        };
        let mode_valid = if cmd_args.len() >= 3 {
            match extract_bytes(&cmd_args[2]) {
                Some(b) => b.eq_ignore_ascii_case(b"WRITE") || b.eq_ignore_ascii_case(b"ALL"),
                None => false,
            }
        } else {
            true
        };
        if cmd_args.len() > 3 || !mode_valid {
            responses.push(Frame::Error(Bytes::from_static(
                b"ERR syntax error",
            )));
        } else {
            match timeout_ms {
                Some(ms) => {
                    let write_only = cmd_args.get(2)
                        .and_then(|f| extract_bytes(f))
                        .is_some_and(|b| b.eq_ignore_ascii_case(b"WRITE"));
                    let deadline = crate::storage::entry::current_time_ms().saturating_add(ms);
                    let mut rt = runtime_config.write();
                    rt.client_pause_deadline_ms = deadline;
                    rt.client_pause_write_only = write_only;
                    responses.push(Frame::SimpleString(Bytes::from_static(b"OK")));
                }
                None => {
                    responses.push(Frame::Error(Bytes::from_static(
                        b"ERR timeout is not a valid integer or out of range",
                    )));
                }
            }
        }
    }
    continue;
}
if sub_bytes.eq_ignore_ascii_case(b"UNPAUSE") {
    let mut rt = runtime_config.write();
    rt.client_pause_deadline_ms = 0;
    rt.client_pause_write_only = false;
    responses.push(Frame::SimpleString(Bytes::from_static(b"OK")));
```
Enforce ACLs before CLIENT PAUSE / CLIENT UNPAUSE.
This branch returns before the general permission gate on Line 951, so any authenticated user can toggle a server-wide pause. That is an authorization bypass with obvious availability impact.
🔐 Minimal fix
```diff
+    if sub_bytes.eq_ignore_ascii_case(b"PAUSE")
+        || sub_bytes.eq_ignore_ascii_case(b"UNPAUSE")
+    {
+        #[allow(clippy::unwrap_used)] // std RwLock: poison = prior panic = unrecoverable
+        if let Some(deny_reason) = acl_table.read().unwrap().check_command_permission(
+            &conn.current_user,
+            cmd,
+            cmd_args,
+        ) {
+            responses.push(Frame::Error(Bytes::from(format!(
+                "NOPERM {}",
+                deny_reason
+            ))));
+            continue;
+        }
+    }
     if sub_bytes.eq_ignore_ascii_case(b"PAUSE") {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server/conn/handler_single.rs` around lines 501 - 550, The CLIENT PAUSE /
UNPAUSE branches (the blocks whose conditions use
sub_bytes.eq_ignore_ascii_case(b"PAUSE") and b"UNPAUSE" and which mutate
runtime_config.client_pause_deadline_ms / client_pause_write_only) execute
before the global permission gate and thus allow unauthorized toggling; fix by
invoking the same ACL/permission check that other commands use (the general
permission gate code path) at the top of these branches and return an error
Frame if the check fails, i.e., perform the existing permission check
function/method used elsewhere before changing runtime_config and only proceed
to set client_pause_deadline_ms/client_pause_write_only or reply "OK" when the
check permits the operation.
```rust
// === CLIENT PAUSE check ===
let pause_wait_ms = {
    let rt = runtime_config.read();
    let deadline = rt.client_pause_deadline_ms;
    if deadline > 0 {
        let now = crate::storage::entry::current_time_ms();
        if now < deadline {
            let should_pause = if rt.client_pause_write_only {
                metadata::is_write(cmd)
            } else {
                true
            };
            if should_pause { deadline.saturating_sub(now) } else { 0 }
        } else { 0 }
    } else { 0 }
};
if pause_wait_ms > 0 {
    let mut remaining = pause_wait_ms;
    while remaining > 0 {
        let chunk = remaining.min(50);
        tokio::time::sleep(std::time::Duration::from_millis(chunk)).await;
        remaining = remaining.saturating_sub(chunk);
        let still_paused = {
            let rt = runtime_config.read();
            rt.client_pause_deadline_ms > 0
                && crate::storage::entry::current_time_ms() < rt.client_pause_deadline_ms
        };
        if !still_paused {
            break;
        }
    }
}
```
Move the pause gate above the early continues.
By the time this block runs, several command paths have already escaped phase 1 (CONFIG, SAVE, PUBLISH, MULTI/EXEC, WATCH, the whole CLIENT branch, etc.). So CLIENT PAUSE ALL does not actually pause all commands here, and EXEC can still commit queued writes during a write pause.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/server/conn/handler_single.rs` around lines 998 - 1029, The CLIENT PAUSE
gate (logic using runtime_config.read(), client_pause_deadline_ms,
client_pause_write_only and metadata::is_write) must be evaluated earlier in the
command handling flow — before any early `continue` branches (e.g., CONFIG,
SAVE, PUBLISH, MULTI/EXEC, WATCH, CLIENT) so that the pause truly blocks all
commands (and prevents EXEC from committing queued writes during a write pause).
Move the entire pause_wait_ms computation and the subsequent sleep loop to the
top of the per-command phase-1 handling (immediately after reading the command
and before any branch-specific early-returns/continues), keeping the same checks
for client_pause_write_only and using metadata::is_write to decide whether to
pause writes-only. Ensure the relocated code still uses the same runtime_config
and current_time_ms checks and breaks out immediately when the pause expires.
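A toy model of the intended ordering (nothing here is the real handler; handle_one_command and its match arms are illustrative only):

```rust
use std::time::Duration;

// The pause gate is awaited before any branch-specific early exit, so even
// commands that later `continue` out of phase 1 are delayed by CLIENT PAUSE.
async fn handle_one_command(cmd: &str, pause_remaining: Duration) -> String {
    if !pause_remaining.is_zero() {
        tokio::time::sleep(pause_remaining).await; // gate first
    }
    match cmd {
        "CONFIG" | "CLIENT" | "MULTI" | "EXEC" => format!("phase-1 branch for {cmd}"),
        _ => format!("dispatched {cmd}"),
    }
}

#[tokio::main]
async fn main() {
    println!("{}", handle_one_command("CONFIG", Duration::from_millis(10)).await);
}
```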
1. COPY same-key: return ERR matching Redis 7.x behavior instead of returning 0/1 based on REPLACE flag.
2. EXPIREAT/PEXPIREAT: accept timestamp 0 and negative values as past-time expiry, deleting the key immediately. Previously rejected with ERR, diverging from Redis, which uses these to conditionally delete keys.
3. LCS: add size guard (16M cells / 64MB) to prevent OOM from two large strings creating an unbounded O(n*m) DP table.
Actionable comments posted: 8
♻️ Duplicate comments (5)
src/command/string/string_read.rs (1)
394-405: ⚠️ Potential issue | 🟠 Major
Reject unsupported LCS options instead of silently ignoring them. Lines 401-404 only handle LEN, and all other tokens are currently accepted and ignored. This hides invalid input and breaks syntax validation.
Suggested fix
```diff
     while i < args.len() {
         let arg = match extract_bytes(&args[i]) {
             Some(a) => a,
             None => return Frame::Error(Bytes::from_static(b"ERR syntax error")),
         };
         if arg.eq_ignore_ascii_case(b"LEN") {
             want_len = true;
+        } else {
+            return Frame::Error(Bytes::from_static(b"ERR syntax error"));
         }
         i += 1;
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/string/string_read.rs` around lines 394 - 405, The loop parsing LCS options currently only recognizes "LEN" but silently ignores all other tokens; update the parsing in the loop that starts with i = 2 (using extract_bytes(&args[i]) and want_len) to explicitly reject any token that is not "LEN" (case-insensitive) by returning the existing Frame::Error(Bytes::from_static(b"ERR syntax error")) instead of ignoring it, so invalid options produce a syntax error; keep the existing handling for a valid "LEN" token and preserve error handling when extract_bytes returns None.

src/command/key.rs (2)
361-363: ⚠️ Potential issue | 🟠 Major
TOUCH still doesn't update access metadata. Counting via db.exists(key) makes the integer reply look right, but OBJECT IDLETIME will stay stale unless the live entry's access timestamp is updated as part of the touch path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key.rs` around lines 361 - 363, The TOUCH implementation uses db.exists(key) which only checks presence and doesn't update access metadata, so OBJECT IDLETIME remains stale; modify the touch path in the TOUCH handler (the code calling db.exists) to call the method that updates access metadata (e.g., a db.touch(key) or db.mark_accessed(key)) or add a step to update the entry's last-access timestamp when a key is successfully found, ensuring the code that currently increments count after db.exists(key) also triggers the access-time update on the live entry so OBJECT IDLETIME reflects the touch.
391-393: ⚠️ Potential issue | 🔴 Critical
Validate FLUSHDB options before clearing data. _args is still ignored, so FLUSHDB nope wipes the DB instead of failing. Only zero args or a single ASYNC/SYNC token should reach db.clear().
Minimal fix
```diff
-pub fn flushdb(db: &mut Database, _args: &[Frame]) -> Frame {
+pub fn flushdb(db: &mut Database, args: &[Frame]) -> Frame {
+    match args {
+        [] => {}
+        [mode] => match extract_key(mode) {
+            Some(opt)
+                if opt.eq_ignore_ascii_case(b"ASYNC") || opt.eq_ignore_ascii_case(b"SYNC") => {}
+            _ => return Frame::Error(Bytes::from_static(b"ERR syntax error")),
+        },
+        _ => return err_wrong_args("FLUSHDB"),
+    }
     db.clear();
     Frame::SimpleString(Bytes::from_static(b"OK"))
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key.rs` around lines 391 - 393, The flushdb implementation ignores _args and always clears the DB; update the flushdb(db: &mut Database, _args: &[Frame]) -> Frame function to validate arguments before calling db.clear(): accept either zero args or exactly one arg that is the token "ASYNC" or "SYNC" (case-insensitive), and for any other argument count or token return an error Frame (do not clear). Locate the call to db.clear() in flushdb and gate it behind this validation, returning the same success Frame::SimpleString(Bytes::from_static(b"OK")) only when validation passes.

src/command/key_extra.rs (2)
258-320: ⚠️ Potential issue | 🟠 Major
Normalize missing BY lookups instead of sorting None last. When an external BY key is absent, the None branches on Lines 300-303 push that element to the end in ascending order. Redis-compatible SORT treats missing weights as 0 for numeric sorts and as an empty string for ALPHA, so SORT key BY weight:* still diverges here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key_extra.rs` around lines 258 - 320, The comparator treats missing BY lookups (None in sort_keys) as greater, causing missing keys to sort to the end; change behavior to normalize missing keys to the appropriate default before sorting: when building sort_keys, map missing db.get lookups to Some(Bytes::from_static(b"")) if alpha is true (ALPHA sort) and to Some(Bytes::from_slice(b"0")) (or "0") for numeric sorts so that sort_keys contains no None for comparisons, or alternatively adjust the indices.sort_by closure to treat None as equivalent to Some(empty) for alpha and Some(b"0") for numeric; update the construction of sort_keys (the block that computes sort_keys) and/or the comparison arms in indices.sort_by (the match on (ka,kb)) to use these normalized defaults and preserve descending via the existing descending handling.
392-575: ⚠️ Potential issue | 🟡 Minor
MEMORY still has no unit coverage. This test module exercises COPY and SORT, but not MEMORY USAGE, MEMORY DOCTOR, or MEMORY HELP. That leaves the new MEMORY handlers unguarded in spite of the command-level test requirement.
Based on learnings: "Every new command needs at least one unit test and one consistency test entry in scripts/test-consistency.sh and scripts/test-commands.sh."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/key_extra.rs` around lines 392 - 575, The test module covers COPY and SORT but omits the new MEMORY command handlers; add unit tests exercising MEMORY usage paths (e.g., call the same command-entry point used by copy/sort with arguments like &[bs(b"MEMORY"), bs(b"USAGE"), bs(b"<key>")], &[bs(b"MEMORY"), bs(b"DOCTOR")], and &[bs(b"MEMORY"), bs(b"HELP")] to assert expected Frame::Integer/Frame::Array/Frame::BulkString or Frame::Error results) so memory_usage/memory_doctor/memory_help (or whichever handler functions implement MEMORY) are covered, and also add corresponding entries to scripts/test-consistency.sh and scripts/test-commands.sh to satisfy the new-command requirement.
🧹 Nitpick comments (1)
src/command/key.rs (1)
373-385: Avoid heap-heavy reply building in TIME.
This path allocates a Vec and two temporary Strings on every call. Prefer framevec![] plus itoa::Buffer (or the existing project small-buffer pattern) so the reply stays aligned with the command-path allocation rules.
Based on learnings: "Applies to src/{command,protocol}//*.rs : Use SmallVec and itoa instead of allocating Vec and format!() on hot paths. Vec::with_capacity() is acceptable for result building at the end of a command path." and "Applies to /{src/command/,src/protocol/,src/shard/event_loop.rs,src/io/**}/*.rs : No allocations (Box::new(), Vec::new(), String::new(), Arc::new(), clone(), format!(), to_string()) on hot paths: command dispatch, protocol parsing, shard event loops, and I/O drivers."
Verify each finding against the current code and only fix it if needed. In `@src/command/key.rs` around lines 373 - 385, The TIME reply currently allocates a Vec and two Strings via secs.to_string()/micros.to_string(); change the implementation of pub fn time() -> Frame to build the array without heap-heavy temporaries by using framevec![] (to avoid Vec allocation) and an itoa::Buffer to format secs and micros into small stack buffers, then wrap those buffers into Bytes (or the project's small-buffer pattern) for the two Frame::BulkString entries; update references in the function body (time, Frame::Array, Frame::BulkString, Bytes) to use framevec! and itoa::Buffer so no temporary Strings or Vec allocations occur on the hot path.
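A standalone sketch of a TIME reply built with itoa instead of to_string(); the Frame and Bytes definitions here stand in for the crate's real types, and the project's framevec! macro could replace the Vec::with_capacity shown below:

```rust
use bytes::Bytes;

// Minimal stand-ins for the crate's real frame type, for illustration only.
enum Frame {
    Array(Vec<Frame>),
    BulkString(Bytes),
}

fn time_reply(secs: u64, micros: u64) -> Frame {
    let mut sec_buf = itoa::Buffer::new();
    let mut usec_buf = itoa::Buffer::new();
    // Result building at the end of the command path; no per-digit String temporaries.
    let mut parts = Vec::with_capacity(2);
    parts.push(Frame::BulkString(Bytes::copy_from_slice(
        sec_buf.format(secs).as_bytes(),
    )));
    parts.push(Frame::BulkString(Bytes::copy_from_slice(
        usec_buf.format(micros).as_bytes(),
    )));
    Frame::Array(parts)
}
```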
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/command/key_extra.rs`:
- Around line 52-77: Move the same-key check (src == dst) ahead of the
missing-source fast path and use a single db.get(src) probe to decide
presence/clone, so you don't return ERR no such key after an earlier exists()
decision; specifically, in the COPY implementation replace the current order (if
!db.exists(src) { ... } then if src == dst { ... }) with: first test if src ==
dst and return Frame::Error(Bytes::from_static(b"...")), then call let entry =
db.get(src).cloned(); if entry.is_none() return Frame::Integer(0); then perform
the destination-exists/replace check (db.exists(dst) && !replace) and finally
db.set(dst, cloned) and return Frame::Integer(1), eliminating the redundant
probes and the race that can produce ERR no such key.
- Line 238: The RedisValueRef::SetIntset(is) arm is allocating a String per
integer via v.to_string() on a hot path; replace this with itoa::Buffer to avoid
allocations: create a single itoa::Buffer (e.g., inside the closure or mapped
iterator), call buffer.format(v) for each i64 and convert the returned &str into
Bytes (Bytes::from(buffer.format(v))). Update the SORT element extraction code
path that constructs the Vec<Bytes> from the intset to use the
itoa::Buffer-based formatting instead of to_string().
In `@src/command/key.rs`:
- Around line 341-345: The RANDOMKEY handler randomkey currently ignores its
_args so calls like `RANDOMKEY extra` wrongly succeed; update the function
`randomkey(db: &mut Database, _args: &[Frame]) -> Frame` to validate the
argument count (use `args.is_empty()` or `args.len() == 0`) and return a
wrong-arity error Frame when any arguments are present (e.g. an error Frame with
message "ERR wrong number of arguments for 'randomkey' command"); only call
`db.random_key()` when there are zero arguments. Use the actual parameter name
(remove the leading underscore) so the args are checked before dispatching to
`db.random_key()`.
- Around line 231-248: The EXPIREAT handler currently multiplies the parsed i64
timestamp by 1000 into a u64 which can overflow; before computing expires_at_ms
(the variable used in db.set_expiry), validate and clamp/guard the conversion:
after parse_int(&args[1]) and the negative/zero check, ensure timestamp is
within a safe range (e.g. timestamp <= (u64::MAX / 1000) as i64) or use a
checked multiplication conversion path, and on overflow either return an error
(like "ERR invalid expire time") or clamp to u64::MAX before calling
db.set_expiry; update the code around parse_int, expires_at_ms and the
db.set_expiry call to use this checked/ clamped value so no panic or wraparound
can occur.
- Around line 220-394: Move the large block of key command handlers into a new
module to keep key.rs under 1500 lines: create src/command/key_extra.rs and cut
the functions expireat, pexpireat, expiretime, pexpiretime, randomkey, touch,
time, and flushdb from key.rs into that file; ensure key_extra.rs imports the
same symbols used in the moved code (Database, Frame, Bytes, err_wrong_args,
extract_key, parse_int, etc.) so extract_key/parse_int remain shared, then
update the parent module to declare mod key_extra; and re-export or use the
moved functions from command::key_extra so existing callers/tests keep
compiling. Also run cargo check and update any use sites or tests that
referenced these handlers if needed.
In `@src/command/string/string_read.rs`:
- Around line 378-380: The command documentation incorrectly says "substring"
but the implementation of the LCS command computes the Longest Common
Subsequence (DP + backtracking); update the doc comment for the LCS command in
string_read.rs (the comment block starting "LCS key1 key2 [LEN] [IDX]
[MINMATCHLEN len] [WITHMATCHLEN]") to say "Returns the longest common
subsequence of two string values" (and adjust any incidental phrasing like
"substring"→"subsequence" and mention that it uses DP/backtracking if you want
to be explicit).
- Around line 407-428: The code currently clones values with to_vec() and
allocates Vec::new() for missing keys (s1/s2), causing unnecessary allocations;
instead, keep references to the underlying bytes and use a static empty slice
for missing keys. Change the bindings for s1 and s2 to be byte-slice references
(e.g., borrow the result of db.get(keyX).and_then(|e| e.value.as_bytes()) and
fall back to a shared empty &[u8] for None) so you avoid to_vec() and
Vec::new(); preserve the existing Frame::Error return when value.as_bytes() is
None.
- Around line 435-440: The OOM guard currently compares n.saturating_mul(m) to
MAX_LCS_CELLS but the actual allocation in string_read.rs (the dp variable
created via vec![vec![0u32; m + 1]; n + 1]) requires (n + 1) * (m + 1) cells;
update the check to compute and compare (n + 1).saturating_mul(m + 1) against
MAX_LCS_CELLS (or otherwise ensure the guard accounts for the +1 rows/cols and
vector overhead) so that the Frame::Error path is taken before dp is allocated
when the full allocation would exceed the cap.
---
Duplicate comments:
In `@src/command/key_extra.rs`:
- Around line 258-320: The comparator treats missing BY lookups (None in
sort_keys) as greater, causing missing keys to sort to the end; change behavior
to normalize missing keys to the appropriate default before sorting: when
building sort_keys, map missing db.get lookups to Some(Bytes::from_static(b""))
if alpha is true (ALPHA sort) and to Some(Bytes::from_static(b"0")) (or "0") for
numeric sorts so that sort_keys contains no None for comparisons, or
alternatively adjust the indices.sort_by closure to treat None as equivalent to
Some(empty) for alpha and Some(b"0") for numeric; update the construction of
sort_keys (the block that computes sort_keys) and/or the comparison arms in
indices.sort_by (the match on (ka,kb)) to use these normalized defaults and
preserve descending via the existing descending handling.
- Around line 392-575: The test module covers COPY and SORT but omits the new
MEMORY command handlers; add unit tests exercising MEMORY usage paths (e.g.,
call the same command-entry point used by copy/sort with arguments like
&[bs(b"MEMORY"), bs(b"USAGE"), bs(b"<key>")], &[bs(b"MEMORY"), bs(b"DOCTOR")],
and &[bs(b"MEMORY"), bs(b"HELP")] to assert expected
Frame::Integer/Frame::Array/Frame::BulkString or Frame::Error results) so
memory_usage/memory_doctor/memory_help (or whichever handler functions implement
MEMORY) are covered, and also add corresponding entries to
scripts/test-consistency.sh and scripts/test-commands.sh to satisfy the
new-command requirement.
In `@src/command/key.rs`:
- Around line 361-363: The TOUCH implementation uses db.exists(key) which only
checks presence and doesn't update access metadata, so OBJECT IDLETIME remains
stale; modify the touch path in the TOUCH handler (the code calling db.exists)
to call the method that updates access metadata (e.g., a db.touch(key) or
db.mark_accessed(key)) or add a step to update the entry's last-access timestamp
when a key is successfully found, ensuring the code that currently increments
count after db.exists(key) also triggers the access-time update on the live
entry so OBJECT IDLETIME reflects the touch.
- Around line 391-393: The flushdb implementation ignores _args and always
clears the DB; update the flushdb(db: &mut Database, _args: &[Frame]) -> Frame
function to validate arguments before calling db.clear(): accept either zero
args or exactly one arg that is the token "ASYNC" or "SYNC" (case-insensitive),
and for any other argument count or token return an error Frame (do not clear).
Locate the call to db.clear() in flushdb and gate it behind this validation,
returning the same success Frame::SimpleString(Bytes::from_static(b"OK")) only
when validation passes.
In `@src/command/string/string_read.rs`:
- Around line 394-405: The loop parsing LCS options currently only recognizes
"LEN" but silently ignores all other tokens; update the parsing in the loop that
starts with i = 2 (using extract_bytes(&args[i]) and want_len) to explicitly
reject any token that is not "LEN" (case-insensitive) by returning the existing
Frame::Error(Bytes::from_static(b"ERR syntax error")) instead of ignoring it, so
invalid options produce a syntax error; keep the existing handling for a valid
"LEN" token and preserve error handling when extract_bytes returns None.
---
Nitpick comments:
In `@src/command/key.rs`:
- Around line 373-385: The TIME reply currently allocates a Vec and two Strings
via secs.to_string()/micros.to_string(); change the implementation of pub fn
time() -> Frame to build the array without heap-heavy temporaries by using
framevec![] (to avoid Vec allocation) and an itoa::Buffer to format secs and
micros into small stack buffers, then wrap those buffers into Bytes (or the
project's small-buffer pattern) for the two Frame::BulkString entries; update
references in the function body (time, Frame::Array, Frame::BulkString, Bytes)
to use framevec! and itoa::Buffer so no temporary Strings or Vec allocations
occur on the hot path.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 56ca0f1f-6217-4a67-96da-e1b8919db396
📒 Files selected for processing (3)
- src/command/key.rs
- src/command/key_extra.rs
- src/command/string/string_read.rs
| RedisValueRef::ListListpack(lp) => lp.iter().map(|e| e.to_bytes()).collect(), | ||
| RedisValueRef::Set(s) => s.iter().cloned().collect(), | ||
| RedisValueRef::SetListpack(lp) => lp.iter().map(|e| e.to_bytes()).collect(), | ||
| RedisValueRef::SetIntset(is) => is.iter().map(|v| Bytes::from(v.to_string())).collect(), |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
fd -a 'Cargo.toml$' | xargs rg -n '\bitoa\b'
rg -n --type rs 'to_string\(\)' src/command
Repository: pilotspace/moon
Length of output: 116
🏁 Script executed:
# Read the target file around line 238 to see the context
head -n 260 src/command/key_extra.rs | tail -n 50 | cat -n
# Also check the full function to understand context
grep -n "SetIntset" src/command/key_extra.rs
# Check if itoa is already used in command files
rg -n 'use itoa' src/command
rg -n 'itoa::' src/command
Repository: pilotspace/moon
Length of output: 3298
🏁 Script executed:
# Check how itoa is imported in files that use it
head -n 30 src/command/vector_search/mod.rs | grep -E '^use|^crate'
head -n 30 src/command/hash/hash_read.rs | grep -E '^use|^crate'
# Check what function contains line 238
sed -n '200,240p' src/command/key_extra.rs | head -n 41
# Check entire function signature and context
sed -n '180,250p' src/command/key_extra.rs
Repository: pilotspace/moon
Length of output: 5778
🏁 Script executed:
# Get the function signature by finding where this code is
grep -B 50 'RedisValueRef::SetIntset' src/command/key_extra.rs | head -n 60 | grep -E '^(pub fn|fn |async fn)' | tail -n 1
# Read the top 40 lines of key_extra.rs for imports
head -n 40 src/command/key_extra.rs
# Check what SetIntset member type is (search for intset definition)
rg -n 'struct.*Intset|type.*Intset' src/storage
Repository: pilotspace/moon
Length of output: 1405
🏁 Script executed:
# Read Intset struct definition
sed -n '20,50p' src/storage/intset.rs
# Check iterator implementation
sed -n '26,100p' src/storage/intset.rs
# Check for full intset struct and methods
head -n 150 src/storage/intset.rs
# Verify if itoa is used anywhere in key_extra.rs
rg -n 'itoa' src/command/key_extra.rs
Repository: pilotspace/moon
Length of output: 7129
🏁 Script executed:
# Check IntsetIter implementation to see what it yields
rg -A 20 'impl.*Iterator.*for IntsetIter' src/storage/intset.rs
# Verify the complete file size
wc -l src/command/key_extra.rs
# Check what function contains line 238
grep -n 'fn sort' src/command/key_extra.rs
Repository: pilotspace/moon
Length of output: 794
Use itoa instead of to_string() for integer formatting in the SORT command's element extraction.
The intset iterator yields i64 values that are being converted to strings via to_string(), which allocates a String per member on a hot path. Per repo guidance, allocations like to_string() are forbidden in command dispatch code. Replace with itoa::Buffer:
Suggested refactor
+use itoa;
+
pub fn sort(db: &mut Database, args: &[Frame]) -> Frame {
// ...
- RedisValueRef::SetIntset(is) => is.iter().map(|v| Bytes::from(v.to_string())).collect(),
+ RedisValueRef::SetIntset(is) => is
+ .iter()
+ .map(|v| {
+ let mut buf = itoa::Buffer::new();
+ Bytes::copy_from_slice(buf.format(v).as_bytes())
+ })
+ .collect(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key_extra.rs` at line 238, The RedisValueRef::SetIntset(is) arm
is allocating a String per integer via v.to_string() on a hot path; replace this
with itoa::Buffer to avoid allocations: create a single itoa::Buffer (e.g.,
inside the closure or mapped iterator), call buffer.format(v) for each i64 and
convert the returned &str into Bytes (Bytes::copy_from_slice(buffer.format(v).as_bytes())). Update the
SORT element extraction code path that constructs the Vec<Bytes> from the intset
to use the itoa::Buffer-based formatting instead of to_string().
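The prompt asks for a single reusable buffer rather than one per element; a variant of the suggested arm under that reading, with the same assumptions about the surrounding `match` as the diff above:

```rust
// Same assumptions as the suggested refactor above; the one difference is that
// the itoa::Buffer is hoisted out of the closure and reused across elements.
RedisValueRef::SetIntset(is) => {
    let mut buf = itoa::Buffer::new();
    is.iter()
        .map(|v| {
            // format() overwrites the same stack buffer on each call; the bytes
            // are copied into Bytes before the next iteration reuses it.
            Bytes::copy_from_slice(buf.format(v).as_bytes())
        })
        .collect()
}
```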
| /// EXPIREAT key unix-time-seconds | ||
| /// | ||
| /// Set the expiration for a key as a UNIX timestamp (seconds). | ||
| pub fn expireat(db: &mut Database, args: &[Frame]) -> Frame { | ||
| if args.len() != 2 { | ||
| return err_wrong_args("EXPIREAT"); | ||
| } | ||
| let key = match extract_key(&args[0]) { | ||
| Some(k) => k, | ||
| None => return err_wrong_args("EXPIREAT"), | ||
| }; | ||
| let timestamp = match parse_int(&args[1]) { | ||
| Some(n) => n, | ||
| None => { | ||
| return Frame::Error(Bytes::from_static( | ||
| b"ERR invalid expire time in 'EXPIREAT' command", | ||
| )); | ||
| } | ||
| }; | ||
| // Redis accepts 0 and negative timestamps as past-time expiry (deletes key immediately) | ||
| if timestamp <= 0 { | ||
| return if db.remove(key).is_some() { | ||
| Frame::Integer(1) | ||
| } else { | ||
| Frame::Integer(0) | ||
| }; | ||
| } | ||
| let expires_at_ms = (timestamp as u64) * 1000; | ||
| if db.set_expiry(key, expires_at_ms) { | ||
| Frame::Integer(1) | ||
| } else { | ||
| Frame::Integer(0) | ||
| } | ||
| } | ||
|
|
||
| /// PEXPIREAT key unix-time-milliseconds | ||
| /// | ||
| /// Set the expiration for a key as a UNIX timestamp in milliseconds. | ||
| pub fn pexpireat(db: &mut Database, args: &[Frame]) -> Frame { | ||
| if args.len() != 2 { | ||
| return err_wrong_args("PEXPIREAT"); | ||
| } | ||
| let key = match extract_key(&args[0]) { | ||
| Some(k) => k, | ||
| None => return err_wrong_args("PEXPIREAT"), | ||
| }; | ||
| let timestamp_ms = match parse_int(&args[1]) { | ||
| Some(n) => n, | ||
| None => { | ||
| return Frame::Error(Bytes::from_static( | ||
| b"ERR invalid expire time in 'PEXPIREAT' command", | ||
| )); | ||
| } | ||
| }; | ||
| // Redis accepts 0 and negative timestamps as past-time expiry (deletes key immediately) | ||
| if timestamp_ms <= 0 { | ||
| return if db.remove(key).is_some() { | ||
| Frame::Integer(1) | ||
| } else { | ||
| Frame::Integer(0) | ||
| }; | ||
| } | ||
| if db.set_expiry(key, timestamp_ms as u64) { | ||
| Frame::Integer(1) | ||
| } else { | ||
| Frame::Integer(0) | ||
| } | ||
| } | ||
|
|
||
| /// EXPIRETIME key | ||
| /// | ||
| /// Returns the absolute Unix timestamp (in seconds) at which the key will expire. | ||
| /// Returns -2 if key doesn't exist, -1 if key has no expiry. | ||
| pub fn expiretime(db: &mut Database, args: &[Frame]) -> Frame { | ||
| if args.len() != 1 { | ||
| return err_wrong_args("EXPIRETIME"); | ||
| } | ||
| let key = match extract_key(&args[0]) { | ||
| Some(k) => k, | ||
| None => return err_wrong_args("EXPIRETIME"), | ||
| }; | ||
| let base_ts = db.base_timestamp(); | ||
| match db.get(key) { | ||
| None => Frame::Integer(-2), | ||
| Some(entry) => { | ||
| if !entry.has_expiry() { | ||
| Frame::Integer(-1) | ||
| } else { | ||
| Frame::Integer((entry.expires_at_ms(base_ts) / 1000) as i64) | ||
| } | ||
| } | ||
| } | ||
| } | ||
|
|
||
| /// PEXPIRETIME key | ||
| /// | ||
| /// Returns the absolute Unix timestamp (in milliseconds) at which the key will expire. | ||
| pub fn pexpiretime(db: &mut Database, args: &[Frame]) -> Frame { | ||
| if args.len() != 1 { | ||
| return err_wrong_args("PEXPIRETIME"); | ||
| } | ||
| let key = match extract_key(&args[0]) { | ||
| Some(k) => k, | ||
| None => return err_wrong_args("PEXPIRETIME"), | ||
| }; | ||
| let base_ts = db.base_timestamp(); | ||
| match db.get(key) { | ||
| None => Frame::Integer(-2), | ||
| Some(entry) => { | ||
| if !entry.has_expiry() { | ||
| Frame::Integer(-1) | ||
| } else { | ||
| Frame::Integer(entry.expires_at_ms(base_ts) as i64) | ||
| } | ||
| } | ||
| } | ||
| } | ||
|
|
||
| /// RANDOMKEY | ||
| /// | ||
| /// Returns a random key from the currently selected database. | ||
| pub fn randomkey(db: &mut Database, _args: &[Frame]) -> Frame { | ||
| match db.random_key() { | ||
| Some(key) => Frame::BulkString(key), | ||
| None => Frame::Null, | ||
| } | ||
| } | ||
|
|
||
| /// TOUCH key [key ...] | ||
| /// | ||
| /// Alters the last access time of a key(s). Returns the number of keys that exist. | ||
| pub fn touch(db: &mut Database, args: &[Frame]) -> Frame { | ||
| if args.is_empty() { | ||
| return err_wrong_args("TOUCH"); | ||
| } | ||
| let mut count = 0i64; | ||
| for arg in args { | ||
| let key = match extract_key(arg) { | ||
| Some(k) => k, | ||
| None => continue, | ||
| }; | ||
| if db.exists(key) { | ||
| // exists() already does lazy expiry + access tracking | ||
| count += 1; | ||
| } | ||
| } | ||
| Frame::Integer(count) | ||
| } | ||
|
|
||
| /// TIME | ||
| /// | ||
| /// Returns the current server time as a two-element array: | ||
| /// [unix-seconds, microseconds-since-epoch-second]. | ||
| pub fn time() -> Frame { | ||
| let now = std::time::SystemTime::now() | ||
| .duration_since(std::time::UNIX_EPOCH) | ||
| .unwrap_or_default(); | ||
| let secs = now.as_secs(); | ||
| let micros = now.subsec_micros(); | ||
| Frame::Array( | ||
| vec![ | ||
| Frame::BulkString(Bytes::from(secs.to_string())), | ||
| Frame::BulkString(Bytes::from(micros.to_string())), | ||
| ] | ||
| .into(), | ||
| ) | ||
| } | ||
|
|
||
| /// FLUSHDB [ASYNC|SYNC] | ||
| /// | ||
| /// Delete all keys in the currently selected database. | ||
| pub fn flushdb(db: &mut Database, _args: &[Frame]) -> Frame { | ||
| db.clear(); | ||
| Frame::SimpleString(Bytes::from_static(b"OK")) | ||
| } |
🛠️ Refactor suggestion | 🟠 Major
Split key.rs back under the repository size limit.
This file is now well past the 1500-line cap, and this PR adds another large block of handlers plus tests on top. src/command/key_extra.rs already consumes extract_key/parse_int, so these new key commands are good candidates to move out before this file gets harder to maintain.
As per coding guidelines, "No single .rs file should exceed 1500 lines. Command implementations should split read/write operations into separate files when exceeding 1000 lines."
Also applies to: 1704-1809
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key.rs` around lines 220 - 394, Move the large block of key
command handlers into a new module to keep key.rs under 1500 lines: create
src/command/key_extra.rs and cut the functions expireat, pexpireat, expiretime,
pexpiretime, randomkey, touch, time, and flushdb from key.rs into that file;
ensure key_extra.rs imports the same symbols used in the moved code (Database,
Frame, Bytes, err_wrong_args, extract_key, parse_int, etc.) so
extract_key/parse_int remain shared, then update the parent module to declare
mod key_extra; and re-export or use the moved functions from command::key_extra
so existing callers/tests keep compiling. Also run cargo check and update any
use sites or tests that referenced these handlers if needed.
| let timestamp = match parse_int(&args[1]) { | ||
| Some(n) => n, | ||
| None => { | ||
| return Frame::Error(Bytes::from_static( | ||
| b"ERR invalid expire time in 'EXPIREAT' command", | ||
| )); | ||
| } | ||
| }; | ||
| // Redis accepts 0 and negative timestamps as past-time expiry (deletes key immediately) | ||
| if timestamp <= 0 { | ||
| return if db.remove(key).is_some() { | ||
| Frame::Integer(1) | ||
| } else { | ||
| Frame::Integer(0) | ||
| }; | ||
| } | ||
| let expires_at_ms = (timestamp as u64) * 1000; | ||
| if db.set_expiry(key, expires_at_ms) { |
Guard EXPIREAT against large-timestamp overflow.
parse_int() accepts the full i64 range, so (timestamp as u64) * 1000 can overflow before set_expires_at_ms() gets a chance to clamp. That turns a very large client-supplied timestamp into a panic in debug or a wrapped expiry in release.
Minimal fix
- let expires_at_ms = (timestamp as u64) * 1000;
+ let expires_at_ms = (timestamp as u64).saturating_mul(1000);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key.rs` around lines 231 - 248, The EXPIREAT handler currently
multiplies the parsed i64 timestamp by 1000 into a u64 which can overflow;
before computing expires_at_ms (the variable used in db.set_expiry), validate
and clamp/guard the conversion: after parse_int(&args[1]) and the negative/zero
check, ensure timestamp is within a safe range (e.g. timestamp <= (u64::MAX /
1000) as i64) or use a checked multiplication conversion path, and on overflow
either return an error (like "ERR invalid expire time") or clamp to u64::MAX
before calling db.set_expiry; update the code around parse_int, expires_at_ms
and the db.set_expiry call to use this checked/ clamped value so no panic or
wraparound can occur.
| pub fn randomkey(db: &mut Database, _args: &[Frame]) -> Frame { | ||
| match db.random_key() { | ||
| Some(key) => Frame::BulkString(key), | ||
| None => Frame::Null, | ||
| } |
Reject unexpected arguments in RANDOMKEY.
_args is ignored here, so RANDOMKEY extra currently succeeds instead of returning a wrong-arity error. Please validate that this command is called with zero arguments before dispatching to db.random_key().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/key.rs` around lines 341 - 345, The RANDOMKEY handler randomkey
currently ignores its _args so calls like `RANDOMKEY extra` wrongly succeed;
update the function `randomkey(db: &mut Database, _args: &[Frame]) -> Frame` to
validate the argument count (use `args.is_empty()` or `args.len() == 0`) and
return a wrong-arity error Frame when any arguments are present (e.g. an error
Frame with message "ERR wrong number of arguments for 'randomkey' command");
only call `db.random_key()` when there are zero arguments. Use the actual
parameter name (remove the leading underscore) so the args are checked before
dispatching to `db.random_key()`.
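A minimal version of that guard, following the same shape as the other handlers in this file (whether `err_wrong_args` lowercases the command name to match Redis's exact error text is not verified here):

```rust
/// RANDOMKEY
///
/// Returns a random key from the currently selected database.
pub fn randomkey(db: &mut Database, args: &[Frame]) -> Frame {
    // Reject `RANDOMKEY extra ...` with a wrong-arity error before dispatching.
    if !args.is_empty() {
        return err_wrong_args("RANDOMKEY");
    }
    match db.random_key() {
        Some(key) => Frame::BulkString(key),
        None => Frame::Null,
    }
}
```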
| /// LCS key1 key2 [LEN] [IDX] [MINMATCHLEN len] [WITHMATCHLEN] | ||
| /// | ||
| /// Returns the longest common substring of two string values. |
Fix the command doc: implementation is subsequence, not substring.
Line 380 is inaccurate: this code computes Longest Common Subsequence (DP + backtracking), not substring.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/string/string_read.rs` around lines 378 - 380, The command
documentation incorrectly says "substring" but the implementation of the LCS
command computes the Longest Common Subsequence (DP + backtracking); update the
doc comment for the LCS command in string_read.rs (the comment block starting
"LCS key1 key2 [LEN] [IDX] [MINMATCHLEN len] [WITHMATCHLEN]") to say "Returns
the longest common subsequence of two string values" (and adjust any incidental
phrasing like "substring"→"subsequence" and mention that it uses DP/backtracking
if you want to be explicit).
| let s1 = match db.get(key1) { | ||
| Some(e) => match e.value.as_bytes() { | ||
| Some(v) => v.to_vec(), | ||
| None => { | ||
| return Frame::Error(Bytes::from_static( | ||
| b"WRONGTYPE Operation against a key holding the wrong kind of value", | ||
| )); | ||
| } | ||
| }, | ||
| None => Vec::new(), | ||
| }; | ||
| let s2 = match db.get(key2) { | ||
| Some(e) => match e.value.as_bytes() { | ||
| Some(v) => v.to_vec(), | ||
| None => { | ||
| return Frame::Error(Bytes::from_static( | ||
| b"WRONGTYPE Operation against a key holding the wrong kind of value", | ||
| )); | ||
| } | ||
| }, | ||
| None => Vec::new(), | ||
| }; |
Avoid extra cloning/allocations in the new command hot path.
Lines 409/420 clone full string values with to_vec() (and Lines 416/427 allocate Vec::new()), adding avoidable allocations before the already-heavy DP work.
Suggested refactor
- let s1 = match db.get(key1) {
+ let s1: &[u8] = match db.get(key1) {
Some(e) => match e.value.as_bytes() {
- Some(v) => v.to_vec(),
+ Some(v) => v,
None => {
return Frame::Error(Bytes::from_static(
b"WRONGTYPE Operation against a key holding the wrong kind of value",
));
}
},
- None => Vec::new(),
+ None => &[],
};
- let s2 = match db.get(key2) {
+ let s2: &[u8] = match db.get(key2) {
Some(e) => match e.value.as_bytes() {
- Some(v) => v.to_vec(),
+ Some(v) => v,
None => {
return Frame::Error(Bytes::from_static(
b"WRONGTYPE Operation against a key holding the wrong kind of value",
));
}
},
- None => Vec::new(),
+ None => &[],
};
As per coding guidelines, src/command/** hot paths must avoid new allocations such as Vec::new() and cloning in command execution paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/string/string_read.rs` around lines 407 - 428, The code currently
clones values with to_vec() and allocates Vec::new() for missing keys (s1/s2),
causing unnecessary allocations; instead, keep references to the underlying
bytes and use a static empty slice for missing keys. Change the bindings for s1
and s2 to be byte-slice references (e.g., borrow the result of
db.get(keyX).and_then(|e| e.value.as_bytes()) and fall back to a shared empty
&[u8] for None) so you avoid to_vec() and Vec::new(); preserve the existing
Frame::Error return when value.as_bytes() is None.
| if n.saturating_mul(m) > MAX_LCS_CELLS { | ||
| return Frame::Error(Bytes::from_static( | ||
| b"ERR inputs too large for LCS computation", | ||
| )); | ||
| } | ||
| let mut dp = vec![vec![0u32; m + 1]; n + 1]; |
Make the OOM guard match the real DP allocation size.
Line 435 checks n * m, but Line 440 allocates (n + 1) * (m + 1) cells plus vector overhead. The current cap can admit allocations above the intended bound.
Suggested fix
- const MAX_LCS_CELLS: usize = 16 * 1024 * 1024; // 64MB at 4 bytes/cell
- if n.saturating_mul(m) > MAX_LCS_CELLS {
+ const MAX_LCS_BYTES: usize = 64 * 1024 * 1024;
+ let rows = match n.checked_add(1) {
+ Some(v) => v,
+ None => return Frame::Error(Bytes::from_static(b"ERR inputs too large for LCS computation")),
+ };
+ let cols = match m.checked_add(1) {
+ Some(v) => v,
+ None => return Frame::Error(Bytes::from_static(b"ERR inputs too large for LCS computation")),
+ };
+ let dp_bytes = match rows
+ .checked_mul(cols)
+ .and_then(|cells| cells.checked_mul(std::mem::size_of::<u32>()))
+ {
+ Some(v) => v,
+ None => return Frame::Error(Bytes::from_static(b"ERR inputs too large for LCS computation")),
+ };
+ if dp_bytes > MAX_LCS_BYTES {
return Frame::Error(Bytes::from_static(
b"ERR inputs too large for LCS computation",
));
}
- let mut dp = vec![vec![0u32; m + 1]; n + 1];
+ let mut dp = vec![vec![0u32; cols]; rows];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/string/string_read.rs` around lines 435 - 440, The OOM guard
currently compares n.saturating_mul(m) to MAX_LCS_CELLS but the actual
allocation in string_read.rs (the dp variable created via vec![vec![0u32; m +
1]; n + 1]) requires (n + 1) * (m + 1) cells; update the check to compute and
compare (n + 1).saturating_mul(m + 1) against MAX_LCS_CELLS (or otherwise ensure
the guard accounts for the +1 rows/cols and vector overhead) so that the
Frame::Error path is taken before dp is allocated when the full allocation would
exceed the cap.
Merge origin/main to resolve conflicts between the high-impact commands branch and the graph engine merged via PR #70.
Summary
- `<dir>/moon.conf`, CONFIG RESETSTAT stub
- `estimate_memory()`
- `CONFIG SET lazyfree-threshold N`

Total: 20 new commands, 43 unit tests, +2,755 lines
All commands include:
- `#[cfg(test)]` modules
- scripts/test-commands.sh and scripts/test-consistency.sh

Test plan
- `cargo test --lib` — 1963 passed, 0 failed (macOS)
- `cargo test --lib` — 1981 passed, 0 failed (Linux/OrbStack)
- `cargo clippy -- -D warnings` — clean (default features)
- `cargo clippy --no-default-features --features runtime-tokio,jemalloc -- -D warnings` — clean
- `cargo fmt --check` — clean
- `bash -n scripts/test-commands.sh` — syntax OK
- `bash -n scripts/test-consistency.sh` — syntax OK
- `scripts/test-consistency.sh` against live Moon + Redis to validate behavioral parity

Summary by CodeRabbit
New Features
Tests
Documentation