feat(v0.1.5): Moon Console — interactive data client #73
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough
Introduces Moon Console, a feature-gated interactive React 19 admin UI with WebSocket and REST backends, enabling real-time key browsing, query execution, vector visualization, and graph exploration. Includes admin authentication, CORS policies, rate limiting, new server commands, and comprehensive integration testing infrastructure.
Sequence Diagram(s)
sequenceDiagram
actor User
participant Browser as Console Browser
participant React as React Frontend
participant SSE as SSE Stream
participant REST as REST API
participant WS as WebSocket
participant Gateway as ConsoleGateway
participant Shard as Shard Worker
User->>Browser: Navigate to /ui/
Browser->>React: Load App.tsx
React->>SSE: connectSSE()
SSE-->>React: MetricEvent stream (ops/sec, memory, clients)
React->>React: useMetricsStore.pushMetric()
User->>Browser: Browse keys
Browser->>REST: GET /api/v1/keys?pattern=*&count=20
REST->>Gateway: scanKeys(cursor, pattern, count)
Gateway->>Shard: SCAN 0 MATCH * COUNT 20
Shard-->>Gateway: [cursor, [keys]]
Gateway-->>REST: {cursor, keys}
REST-->>Browser: JSON response
User->>Browser: View key details
Browser->>REST: GET /api/v1/key/mykey
REST->>Gateway: executeCommand("GET", ["mykey"])
Gateway->>Shard: GET mykey
Shard-->>Gateway: value (string/hash/list/set/zset/stream)
Gateway->>Gateway: frameToJson(Frame)
Gateway-->>REST: {type, value}
REST-->>Browser: JSON response
User->>Browser: Execute RESP query
Browser->>WS: send({cmd: "SET", args: ["key", "val"]})
WS->>Gateway: executeCommand("SET", ["key", "val"])
Gateway->>Shard: SET key val
Shard-->>Gateway: OK
Gateway-->>WS: {type: "simple_string", data: "OK"}
WS-->>Browser: JSON response
Browser->>REST: GET /api/v1/info
REST->>Gateway: executeCommand("INFO", [])
Gateway->>Shard: INFO
Shard-->>Gateway: info (bulk string)
Gateway-->>REST: {type, data}
REST-->>Browser: Server info
React->>React: useGraphStore
React->>REST: POST /api/v1/command (GRAPH.QUERY cypher)
REST->>Gateway: executeCommand("GRAPH.QUERY", [cypher])
Gateway->>Shard: GRAPH.QUERY cypher
Shard-->>Gateway: nodes, edges
Gateway-->>REST: {nodes, edges, ...}
REST-->>React: GraphData
React->>React: useVectorStore
React->>REST: GET /api/v1/indexes
REST->>Gateway: executeCommand("FT._LIST", [])
Gateway->>Shard: FT._LIST
Shard-->>Gateway: [index_names]
Gateway-->>REST: {indexes}
React->>React: runUmap() via Worker
React->>Browser: Render 3D point cloud
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Review Summary by Qodo
Moon Console — Interactive Data Client with REST/WebSocket Gateway and 3D Visualization
Walkthrough
Description
• **Moon Console** — comprehensive embedded interactive web client with HTTP/WebSocket/SSE gateway, KV browser, query console, 3D vector/graph visualization, and real-time dashboard served at /ui/ from the Moon binary with zero additional deployment
• **Rust backend** (feature-gated --features console): New REST API endpoints (/api/v1/*) for command dispatch via SPSC channels, WebSocket bridge at /ws for RESP3-over-JSON, SSE metrics stream at /events (1 Hz), static file serving at /ui/*, memory treemap aggregation, and HNSW trace visualization
• **Console gateway** — translates REST/WS requests to RESP Frame commands, routes via SPSC to appropriate shards based on key hashing, converts RESP responses to JSON with base64 encoding for binary data
• **React frontend** (~7.8K LOC, 77 files): Vite 8 + React 19 + Tailwind 4 + shadcn/ui + Zustand + Monaco + Three.js/R3F + d3-force-3d + umap-js with 56 Vitest unit tests and 9 Playwright E2E specs
• **Six interactive views**: Dashboard (ops/sec chart, memory, hit ratio), KV Browser (virtualized key list with type badges, TTL countdown, inline editors for all types), Console (tabbed RESP/Cypher editor with history), Vector Explorer (3D point cloud with KNN search and lasso selection), Graph Explorer (force-directed 3D visualization with Cypher queries), Memory (treemap drill-down and command statistics)
• **Real bugs fixed**: execCommand result unwrapping (type badges showed [object Object]), WebSocket protocol mismatch (Console hung on Cmd+Enter), server dropping id on error responses, multi-line paste handling (Cmd+Enter runs current line, Cmd+Shift+Enter runs batch)
• **Integration testing**: Bash harness with deterministic fixture seeding (2K KV keys, 50K vectors, 10K graph nodes), live verification against running Moon, Playwright E2E specs, and 45 unit tests (26 gateway + 19 endpoint tests)
Diagram
flowchart LR
Client["Client Browser"]
REST["REST API<br/>/api/v1/*"]
WS["WebSocket<br/>/ws"]
SSE["SSE Stream<br/>/events"]
Static["Static Files<br/>/ui/*"]
Gateway["Console Gateway<br/>SPSC Dispatcher"]
Shards["Shard Event Loops<br/>RESP Command Execution"]
Client -->|HTTP| REST
Client -->|WS| WS
Client -->|EventSource| SSE
Client -->|GET| Static
REST --> Gateway
WS --> Gateway
SSE -.->|Metrics| Shards
Gateway -->|SPSC Channel| Shards
Shards -->|RESP Frame| Gateway
Gateway -->|JSON Response| Client
File Changes
1. src/admin/http_server.rs
Code Review by Qodo
// Push to the SPSC ring buffer. If full, the shard is overloaded.
{
    let mut prod = self.shard_producers[shard_id].lock();
    prod.try_push(msg)
        .map_err(|_| "shard SPSC buffer full (overloaded)".to_string())?;
}
1. ConsoleGateway locks SPSC producer 📘 Rule violation ➹ Performance
The console gateway uses parking_lot::Mutex to lock a per-shard SPSC producer on every command dispatch instead of using flume::Sender/Receiver for lock-free stage handoff. This violates the hot-path pipeline communication requirement and can add avoidable synchronization overhead per message.
Agent Prompt
## Issue description
The console gateway dispatch path locks `Mutex<HeapProd<ShardMessage>>` on every command push and uses ringbuf SPSC plumbing instead of `flume::Sender`/`Receiver`, violating the rule requiring flume channels for hot-path pipeline communication.
## Issue Context
This affects the admin/console request pipeline (REST/WS → shard dispatch). The compliance requirement specifically calls for flume-based producer/consumer messaging instead of per-item locking.
## Fix Focus Areas
- src/admin/console_gateway.rs[33-38]
- src/admin/console_gateway.rs[143-148]
- src/shard/mesh.rs[176-193]
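The channel-based handoff the rule asks for can be sketched roughly as below. This is an illustrative sketch only: `std::sync::mpsc::sync_channel` stands in for `flume::bounded` (both expose a non-blocking `try_send` on a bounded queue), and `ShardMessage` here is a hypothetical placeholder, not Moon's actual type.

```rust
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};

// Hypothetical stand-in for Moon's real ShardMessage type.
#[derive(Debug, PartialEq)]
pub struct ShardMessage(pub &'static str);

/// One bounded channel per shard; the sender half is cheap to hand around
/// and needs no surrounding Mutex, unlike a ringbuf HeapProd.
pub fn make_shard_channel(capacity: usize) -> (SyncSender<ShardMessage>, Receiver<ShardMessage>) {
    sync_channel(capacity)
}

/// Dispatch without per-message locking: a full buffer surfaces as the same
/// "overloaded" error string the gateway reports today, instead of blocking.
pub fn dispatch(tx: &SyncSender<ShardMessage>, msg: ShardMessage) -> Result<(), String> {
    tx.try_send(msg).map_err(|e| match e {
        TrySendError::Full(_) => "shard SPSC buffer full (overloaded)".to_string(),
        TrySendError::Disconnected(_) => "shard channel closed".to_string(),
    })
}
```

With `flume`, the shapes are analogous (`flume::bounded(capacity)` and `Sender::try_send`), so the dispatch path keeps its backpressure semantics while dropping the mutex.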
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
2. Admin tests outside mod.rs 📘 Rule violation ▣ Testability
New unit tests were added in non-mod.rs files under the split src/admin/ module tree. This violates the requirement that split-module unit tests live in the module’s mod.rs only, to keep tests centralized and coordinated.
Agent Prompt
## Issue description
Unit tests were added to leaf module files under `src/admin/`, but the compliance rule requires tests for split modules to be centralized in `src/admin/mod.rs`.
## Issue Context
`src/admin/` is a split module directory with a `mod.rs` that declares submodules. Tests should be declared only in `mod.rs` (e.g., `#[cfg(test)] mod tests { ... }`) and import the items they test.
## Fix Focus Areas
- src/admin/mod.rs[6-25]
- src/admin/console_gateway.rs[252-256]
- src/admin/memory_treemap.rs[295-300]
- src/admin/hnsw_trace.rs[175-179]
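The layout the rule describes can be sketched with inline modules standing in for the split files; every name below is illustrative, not Moon's actual items. In the real tree, `console_gateway` would be `src/admin/console_gateway.rs` and the `tests` block would sit in `src/admin/mod.rs`.

```rust
pub mod admin {
    pub mod console_gateway {
        // Leaf module (console_gateway.rs): implementation only,
        // no #[cfg(test)] blocks here.
        pub fn frame_type_name(is_error: bool) -> &'static str {
            if is_error { "error" } else { "simple_string" }
        }
    }

    // mod.rs: the single, centralized home for this split module's tests.
    #[cfg(test)]
    mod tests {
        use super::console_gateway::frame_type_name;

        #[test]
        fn type_names_resolve() {
            assert_eq!(frame_type_name(false), "simple_string");
            assert_eq!(frame_type_name(true), "error");
        }
    }
}
```

The leaf files stay test-free; `mod.rs` imports the items it exercises, keeping all unit tests for the split module in one coordinated place.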
// ── Trace tests (console feature) ────────────────────────────────

#[cfg(feature = "console")]
#[test]
fn trace_empty_graph_returns_empty() {
    distance::init();
    let collection = CollectionMetadata::new(
        1,
3. HNSW trace tests outside mod.rs 📘 Rule violation ▣ Testability
New console-feature tests were added in src/vector/hnsw/search.rs, which is a leaf file in a split module directory that already has src/vector/hnsw/mod.rs. This violates the rule that unit tests for split modules must live in mod.rs rather than sibling leaf files.
Agent Prompt
## Issue description
Unit tests were added to `src/vector/hnsw/search.rs`, but the compliance rule requires split-module unit tests to live in `src/vector/hnsw/mod.rs`.
## Issue Context
`src/vector/hnsw/` is a split module directory (it has `mod.rs` plus multiple sibling leaf files). Tests should be centralized in `mod.rs` and import functions/types from `search`.
## Fix Focus Areas
- src/vector/hnsw/mod.rs[1-10]
- src/vector/hnsw/search.rs[1409-1473]
async fn handle_command(
    req: Request<Incoming>,
    gw: &crate::admin::console_gateway::ConsoleGateway,
) -> Response<BoxBody<Bytes, Infallible>> {
    let body = match read_body(req).await {
        Ok(b) => b,
        Err(resp) => return resp,
    };

    let parsed: serde_json::Value = match serde_json::from_slice(&body) {
        Ok(v) => v,
        Err(e) => {
            return json_response(
                StatusCode::BAD_REQUEST,
                &serde_json::json!({"error": format!("Invalid JSON: {}", e)}),
            );
        }
    };

    let cmd = match parsed["cmd"].as_str() {
        Some(c) => c,
        None => {
            return json_response(
                StatusCode::BAD_REQUEST,
                &serde_json::json!({"error": "Missing 'cmd' field"}),
            );
        }
    };

    let args: Vec<Bytes> = parsed["args"]
        .as_array()
        .map(|arr| {
            arr.iter()
                .map(|v| match v.as_str() {
                    Some(s) => Bytes::from(s.to_string()),
                    None => Bytes::from(v.to_string()),
                })
                .collect()
        })
        .unwrap_or_default();

    let db_index = parsed["db"].as_u64().unwrap_or(0) as usize;

    match gw.execute_command(db_index, cmd, &args).await {
        Ok(frame) => {
            let result = crate::admin::console_gateway::ConsoleGateway::frame_to_json(&frame);
            let type_name = frame_type_name(&frame);
            json_response(
                StatusCode::OK,
                &serde_json::json!({"result": result, "type": type_name}),
            )
        }
        Err(e) => json_response(
            StatusCode::INTERNAL_SERVER_ERROR,
            &serde_json::json!({"error": e}),
        ),
    }
}
4. Unauthenticated admin command exec 🐞 Bug ⛨ Security
/api/v1/command and /ws execute arbitrary commands via ConsoleGateway without any authentication/ACL enforcement, while API responses set permissive CORS (Access-Control-Allow-Origin: *) enabling cross-origin browser access. If the admin port is reachable (e.g., --bind 0.0.0.0), this is remote RCE-on-data: attackers can read/modify/delete DB contents via HTTP/WS.
Agent Prompt
### Issue description
Console REST/WS/SSE endpoints on the admin HTTP port allow unauthenticated arbitrary command execution, and JSON/SSE responses set `Access-Control-Allow-Origin: *`, enabling cross-origin browser access.
### Issue Context
- `/api/v1/command` forwards user JSON directly into `ConsoleGateway::execute_command`.
- `json_response()` attaches permissive CORS headers.
- `/ws` upgrade path has no Origin validation.
- Protected mode currently guards Redis/TLS listeners only.
### Fix Focus Areas
- Add an explicit authentication/authorization gate for all console endpoints (REST + WS + SSE), ideally reusing existing auth config (`requirepass`/ACL) or introducing a dedicated admin token.
- Default to **no CORS** (or same-origin only) in production builds; only allow `*` in explicit dev mode.
- For WebSocket: validate `Origin`/`Host` and reject cross-site origins by default.
- Consider enforcing that admin HTTP (when `console` is enabled) only binds to loopback unless explicitly opted out.
### Fix Focus Areas (code locations)
- src/admin/http_server.rs[114-191]
- src/admin/http_server.rs[203-277]
- src/admin/http_server.rs[276-344]
- src/admin/http_server_support.rs[32-54]
- src/admin/sse_stream.rs[78-85]
- src/admin/ws_bridge.rs[31-93]
- src/admin/metrics_setup.rs[37-68]
- src/config.rs[8-20]
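A rough sketch of the gate described above follows; all names are hypothetical, and a production fix would reuse Moon's existing requirepass/ACL config plus a constant-time comparison. It shows the two checks the prompt calls for: a bearer-token gate for REST/WS/SSE dispatch and a same-origin test for WebSocket upgrades.

```rust
/// Hypothetical bearer-token gate for console endpoints. Headers are given
/// as (name, value) pairs; an unset token denies rather than allows all.
pub fn is_authorized(headers: &[(&str, &str)], expected_token: &str) -> bool {
    if expected_token.is_empty() {
        return false; // no token configured: fail closed
    }
    headers.iter().any(|(name, value)| {
        name.eq_ignore_ascii_case("authorization")
            && value
                .strip_prefix("Bearer ")
                .map_or(false, |t| t == expected_token)
    })
}

/// Same-origin check for WebSocket upgrades: the Origin's host must match
/// the request's Host header; cross-site origins are rejected by default.
pub fn origin_allowed(origin: &str, host: &str) -> bool {
    origin
        .strip_prefix("http://")
        .or_else(|| origin.strip_prefix("https://"))
        .map_or(false, |origin_host| origin_host == host)
}
```

In the same spirit, `json_response()` would stop emitting `Access-Control-Allow-Origin: *` unless an explicit dev-mode flag is set.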
pub fn serve_static_file(path: &str) -> Response<Full<Bytes>> {
    let path = path.trim_start_matches('/');
    let path = if path.is_empty() { "index.html" } else { path };

    match ConsoleAssets::get(path) {
        Some(file) => {
            let mime = mime_guess::from_path(path).first_or_text_plain();
            let cache_control = if path == "index.html" {
                // index.html must not be aggressively cached (SPA entry point)
                "no-cache"
            } else {
                // Hashed assets (JS, CSS) can be cached immutably
                "public, max-age=31536000, immutable"
            };

            Response::builder()
                .status(StatusCode::OK)
                .header("content-type", mime.as_ref())
                .header("cache-control", cache_control)
                .body(Full::new(Bytes::from(file.data.into_owned())))
                .unwrap_or_else(|_| {
                    Response::new(Full::new(Bytes::from_static(b"Internal Server Error")))
                })
        }
        None => {
            // SPA fallback: serve index.html for client-side routing
            match ConsoleAssets::get("index.html") {
                Some(file) => Response::builder()
                    .status(StatusCode::OK)
                    .header("content-type", "text/html; charset=utf-8")
                    .header("cache-control", "no-cache")
                    .body(Full::new(Bytes::from(file.data.into_owned())))
                    .unwrap_or_else(|_| {
                        Response::new(Full::new(Bytes::from_static(b"Internal Server Error")))
                    }),
                None => Response::builder()
                    .status(StatusCode::NOT_FOUND)
                    .header("content-type", "text/plain")
                    .body(Full::new(Bytes::from_static(b"Console not available")))
                    .unwrap_or_else(|_| {
                        Response::new(Full::new(Bytes::from_static(b"Not Found")))
                    }),
            }
        }
5. SPA fallback breaks missing assets 🐞 Bug ≡ Correctness
serve_static_file() serves index.html for *any* unknown path (including missing JS/CSS under /assets/*), so when referenced assets aren’t embedded/present the browser receives HTML instead of JS/CSS and the console fails to load. This is exacerbated because console/dist/index.html references hashed /assets/... files, but the Rust embed serves only what exists under console/dist/.
Agent Prompt
### Issue description
The embedded console can fail to load because `serve_static_file()` returns `index.html` for any unknown path, including missing `/assets/*` bundles referenced by `dist/index.html`.
### Issue Context
- `console/dist/index.html` references hashed JS/CSS under `/assets/...`.
- Rust static serving uses rust-embed from `console/dist/`.
- The server returns index.html on any miss, which masks missing bundles and produces HTML responses to JS/CSS requests.
- The Rust build does not automatically run `pnpm build`, so it’s easy to ship a binary with incomplete/stale `dist/`.
### Fix Focus Areas
- Change SPA fallback to only apply to likely client-routes (e.g., paths without a file extension) and/or only under `/ui/*`.
- Return **404** for missing static assets (especially under `/assets/`), rather than serving index.html.
- Add a build-time guarantee that `console/dist` contains the expected assets when `--features console` is enabled:
- Option A: check in built assets (if that’s intended).
- Option B: add a build step (e.g., in `build.rs` gated by `CARGO_FEATURE_CONSOLE`) that runs `pnpm build` or fails fast with a clear error if assets are missing.
### Fix Focus Areas (code locations)
- src/admin/static_files.rs[16-65]
- console/dist/index.html[1-9]
- console/README.md[24-32]
- build.rs[1-20]
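The extension-based fallback suggested above can be sketched as a small routing predicate (a hypothetical helper, not Moon's actual code): only extension-less paths are treated as client routes eligible for the `index.html` fallback, while file-like paths get a 404 when the asset is missing.

```rust
/// Hypothetical routing predicate: paths without a file extension in their
/// last segment (client routes such as /ui/browser) may fall back to
/// index.html. Paths that look like file requests (e.g. under /assets/)
/// should return 404 when missing, instead of HTML masquerading as JS/CSS.
pub fn spa_fallback_allowed(path: &str) -> bool {
    let last_segment = path.rsplit('/').next().unwrap_or("");
    !last_segment.contains('.')
}
```

`serve_static_file()` would then consult this predicate in its `None` arm before reaching for `index.html`.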
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟡 Minor comments (24)
scripts/gen-verification.sh-115-115 (1)
115-115: ⚠️ Potential issue | 🟡 Minor
Quote nested expansion in prefix stripping to avoid glob-pattern mismatch.
Line 115 uses unquoted nested expansion in parameter removal. When `REPO_ROOT` contains glob metacharacters like `[`, the pattern matching fails and the prefix is not stripped.
Suggested fix
- rel="${f#${REPO_ROOT}/}"
+ rel="${f#"$REPO_ROOT"/}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/gen-verification.sh` at line 115, the prefix-stripping uses an unquoted nested expansion rel="${f#${REPO_ROOT}/}" which can be treated as a glob pattern if REPO_ROOT contains metacharacters; update the code to quote the nested expansion to prevent pathname/glob expansion, e.g. set a quoted prefix or use rel="${f#"$REPO_ROOT"/}" (or prefix="${REPO_ROOT}/"; rel="${f#"$prefix"}") so f, REPO_ROOT and rel are handled safely.
src/vector/hnsw/search.rs-870-875 (1)
870-875: ⚠️ Potential issue | 🟡 Minor
Base-layer `selected` cap does not match the documented contract.
Line 874 uses `k_cap.max(ef_cap)`, but `ef_cap` is already `max(ef_search, k)` (Line 845), so this effectively caps by `ef_search`. The docs/comments state layer 0 should be capped by `k`.
Proposed fix
- selected: l0_visited.min(k_cap.max(ef_cap)),
+ selected: l0_visited.min(k_cap),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vector/hnsw/search.rs` around lines 870 - 875, The HnswTraceLayer for layer 0 currently sets selected using k_cap.max(ef_cap), which because ef_cap is already max(ef_search, k) ends up effectively capping by ef_search rather than k; change the selected computation in the HnswTraceLayer construction (the layer: 0 entry where HnswTraceLayer is created) to use k_cap as the cap (e.g., selected should be l0_visited.min(k_cap)) so layer 0 follows the documented contract of being capped by k.
src/admin/console_gateway.rs-22-24 (1)
22-24: ⚠️ Potential issue | 🟡 Minor
Don't silently ignore duplicate gateway initialization.
A second `set_global_gateway()` currently fails with no signal and leaves the stale gateway installed. That makes bootstrap and test reinitialization bugs much harder to diagnose.
🩹 Suggested change
-pub fn set_global_gateway(gw: Arc<ConsoleGateway>) {
-    let _ = CONSOLE_GATEWAY.set(gw);
+pub fn set_global_gateway(gw: Arc<ConsoleGateway>) -> Result<(), Arc<ConsoleGateway>> {
+    CONSOLE_GATEWAY.set(gw)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/admin/console_gateway.rs` around lines 22 - 24, Currently set_global_gateway(gw: Arc<ConsoleGateway>) silently ignores a second initialization because CONSOLE_GATEWAY.set is swallowed; change it to detect duplicate initialization (use CONSOLE_GATEWAY.try_set or check the Result from set) and surface an error instead of ignoring it — either return a Result from set_global_gateway (e.g., Result<(), InitError>) or log/panic with a clear message including the new gw and the already-installed gateway; update call sites to handle the new Result if you choose that route and reference set_global_gateway and CONSOLE_GATEWAY when making the change.
console/src/components/vector/UmapProgress.tsx-8-19 (1)
8-19: ⚠️ Potential issue | 🟡 Minor
Clamp progress percentage before rendering width.
`pct` can exceed bounds when progress reports transient values outside `[0, total]`, which can produce misleading UI.
🐛 Proposed fix
- const pct = progress.total > 0 ? (progress.epoch / progress.total) * 100 : 0;
+ const rawPct = progress.total > 0 ? (progress.epoch / progress.total) * 100 : 0;
+ const pct = Math.max(0, Math.min(100, rawPct));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/vector/UmapProgress.tsx` around lines 8 - 19, The computed pct in UmapProgress (currently const pct = progress.total > 0 ? (progress.epoch / progress.total) * 100 : 0) may fall outside 0–100; clamp pct to that range before using it in the inline style. Update the pct computation (or add a clamp step) so pct = Math.min(100, Math.max(0, calculatedPct)) and use that clamped value in the style width for the inner progress div to prevent overflow/negative widths when progress.epoch is out of bounds.
console/src/lib/monarch-cypher.ts-32-34 (1)
32-34: ⚠️ Potential issue | 🟡 Minor
Separate arrow operators from arithmetic minus to fix semantic tokenization.
Line 32 currently classifies bare `-` as `operator.arrow`, causing expressions like `1-2` to be mis-tokenized. The hyphen in Cypher has different semantic roles: relationship arrows (`->`, `<-`) versus arithmetic minus, and should be tokenized separately.
Suggested fix
- [/->|<-|-/, "operator.arrow"],
+ [/->|<-/, "operator.arrow"],
+ [/=~|<>|<=|>=|=|<|>|\+|-|\*|\/|%/, "operator"],
tests/console_gateway_test.rs-13-18 (1)
13-18: ⚠️ Potential issue | 🟡 Minor
Hardcoded ports conflict across test files and could cause CI flakiness when tests run together.
Tests in both `jepsen_lite.rs` (port 16399) and `crash_matrix.rs` (port 16400) use the same hardcoded ports as this test. While all tests are marked `#[ignore]` and won't run automatically, if invoked together (manually or by a test harness), they will fail due to port collisions. Consider using environment variables or a test coordination mechanism to allocate distinct ports for each test, or ensure the test suite enforces serial execution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/console_gateway_test.rs` around lines 13 - 18, The test uses hardcoded ports in the start_server() helper (Command::new("./target/release/moon").args(["--port", "16399", "--admin-port", "16400", "--shards", "2"])) which can collide with other tests; modify start_server to accept ports from environment variables (e.g., read PORT and ADMIN_PORT) or generate randomized free ports at runtime and pass them into Command::args, or add a coordination mechanism to ensure unique ports per test; update any callers of start_server to provide/propagate the chosen ports and ensure the spawned server logs the actual ports used for easier debugging.
console/src/components/browser/editors/ListEditor.tsx-9-21 (1)
9-21: ⚠️ Potential issue | 🟡 Minor
Add error handling for LPUSH/RPUSH operations.
Similar to HashEditor, these operations perform optimistic state updates without catching errors. Network or server failures would leave the UI out of sync.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/editors/ListEditor.tsx` around lines 9 - 21, The LPUSH/RPUSH handlers (handleLPush and handleRPush) do optimistic UI updates without error handling, so wrap the execCommand call in try/catch and only update state (setItems and setNewValue) after a successful execCommand; on failure log or surface the error (e.g., via processLogger/console/error toast) and avoid mutating items. Also consider disabling the input while the async call is pending (use a local isSaving flag) to prevent duplicate requests and ensure execCommand is awaited before clearing newValue.
console/src/components/browser/editors/HashEditor.tsx-46-57 (1)
46-57: ⚠️ Potential issue | 🟡 Minor
Add error handling for delete and add operations.
`handleDelete` and `handleAdd` perform optimistic state updates without catching errors. If `HDEL` or `HSET` fails, the UI will show stale state (item removed/added locally but not on server).
🛡️ Proposed error handling
 const handleDelete = useCallback(async (field: string) => {
-  await execCommand("HDEL", [keyName, field]);
-  setFields((prev) => prev.filter((f) => f.field !== field));
+  try {
+    await execCommand("HDEL", [keyName, field]);
+    setFields((prev) => prev.filter((f) => f.field !== field));
+  } catch (err) {
+    console.error("Failed to delete field:", err);
+    // Optionally show user feedback
+  }
 }, [keyName]);

 const handleAdd = async () => {
   if (!newField.trim()) return;
-  await execCommand("HSET", [keyName, newField, newValue]);
-  setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
-  setNewField("");
-  setNewValue("");
+  try {
+    await execCommand("HSET", [keyName, newField, newValue]);
+    setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
+    setNewField("");
+    setNewValue("");
+  } catch (err) {
+    console.error("Failed to add field:", err);
+  }
 };
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/editors/HashEditor.tsx` around lines 46 - 57, handleDelete and handleAdd optimistically update state without handling execCommand failures; wrap each execCommand call in try/catch and only mutate state after a successful response (or revert on error). For handleDelete, call execCommand("HDEL", ...) inside try and only call setFields(...) after success (or store the removed item and restore it in catch), and surface/log the error; for handleAdd, perform execCommand("HSET", ...) inside try before calling setFields([...]), setNewField("") and setNewValue(""), and handle/log failures in catch (or remove the newly added entry on failure). Use the existing symbols execCommand, handleDelete, handleAdd, setFields, setNewField and setNewValue when making these changes.
console/src/components/dashboard/HitRatioCard.tsx-17-20 (1)
17-20: ⚠️ Potential issue | 🟡 Minor
Non-deterministic `Math.random()` inside `useMemo`.
Using `Math.random()` inside `useMemo` makes the computation impure—the sparkline will regenerate with different random values each time `serverInfo` changes, causing visual jumpiness. If jitter is desired, consider seeding it deterministically (e.g., based on index and hit rate) or generating it once outside the memo.
🔧 Deterministic jitter alternative
 // Generate synthetic sparkline from ratio (single-value sparkline)
 const spark = Array.from({ length: 10 }, (_, i) => ({
   idx: i,
-  value: rate + (Math.random() - 0.5) * 2, // slight jitter for visual
+  value: rate + ((i % 2 === 0 ? 1 : -1) * (i * 0.2)), // deterministic wave
 }));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/dashboard/HitRatioCard.tsx` around lines 17 - 20, The sparkline generation in the useMemo (variable spark) is non-deterministic because it uses Math.random(), causing visual jumpiness when serverInfo changes; replace the impure randomness by either (a) computing a deterministic jitter based on stable inputs (e.g., derive a pseudo-random value from rate and index using a small hash or Math.sin(index + rate * constant)) inside the same useMemo for spark, or (b) generate the jitter once outside the memo (on component mount) and store it in state so spark stays stable; update the code that builds spark (referencing spark, rate, and useMemo) to use the deterministic function or stored jitter instead of Math.random().
console/src/components/browser/editors/HashEditor.tsx-84-85 (1)
84-85: ⚠️ Potential issue | 🟡 Minor
Potential double-save when pressing Enter.
When Enter is pressed, `handleSave` is called via `onKeyDown`. This sets `editing: false`, which unmounts the input and triggers `onBlur`, calling `handleSave` again. This could result in two HSET commands for the same edit.
🔧 Prevent double-save by checking editing state or using a flag
- onKeyDown={(e) => e.key === "Enter" && handleSave(f.field, f.editValue)}
- onBlur={() => handleSave(f.field, f.editValue)}
+ onKeyDown={(e) => {
+   if (e.key === "Enter") {
+     e.currentTarget.blur(); // Let blur handler do the save
+   }
+ }}
+ onBlur={() => handleSave(f.field, f.editValue)}
Or alternatively, track a "saving in progress" flag per field to skip redundant calls.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/editors/HashEditor.tsx` around lines 84 - 85, Pressing Enter triggers handleSave via onKeyDown and then onBlur again when the input unmounts, causing duplicate saves; update the logic so handleSave is idempotent or guarded: add a per-field "saving" boolean or check the field's editing state inside handleSave (reference handleSave, onKeyDown, onBlur, editing, f.field, f.editValue) and early-return if a save is already in progress or editing is false, set the saving flag before starting the HSET call and clear it after completion so the second trigger is ignored.
console/src/components/browser/ValuePanel.tsx-63-68 (1)
63-68: ⚠️ Potential issue | 🟡 Minor
Add a fallback for unsupported/unknown key types.
If `res.type` is unexpected, the body renders nothing. Show an explicit unsupported-type state instead of a blank panel.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/ValuePanel.tsx` around lines 63 - 68, The panel currently renders nothing for unexpected key types (res.type -> keyType) — update the ValuePanel component to add a clear fallback branch that renders an explicit "Unsupported key type" state (e.g., simple message or Placeholder component) when keyType is not "string", "hash", "list", "set", "zset", or "stream"; modify the conditional rendering around keyType (the lines that choose StringEditor, HashEditor, ListEditor, SetEditor, ZSetEditor, StreamViewer) to include this default case so users see a visible unsupported-type message instead of a blank panel.
console/src/components/graph/NodeInspector.tsx-29-29 (1)
29-29: ⚠️ Potential issue | 🟡 Minor
Guard `node.properties` before `Object.entries`.
If `properties` is `null`/`undefined`, this throws at runtime. A safe fallback avoids inspector crashes on partial data.
Suggested fix
-const propertyEntries = Object.entries(node.properties);
+const propertyEntries = Object.entries(node.properties ?? {});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/graph/NodeInspector.tsx` at line 29, In NodeInspector, the line creating propertyEntries uses Object.entries(node.properties) which will throw if node.properties is null/undefined; update the property extraction (where propertyEntries is defined) to guard against missing properties by using a safe fallback such as Object.entries(node.properties ?? {}) or Object.entries(node.properties || {}) so the inspector won't crash on partial data (refer to the propertyEntries variable and NodeInspector component when making the change).
console/src/components/browser/TtlManager.tsx-48-59 (1)
48-59: ⚠️ Potential issue | 🟡 Minor
API calls lack error handling.
Both `handleSave` and `handlePersist` don't handle API failures. Users receive no feedback if `setKeyTtl` fails, and the local state would become inconsistent with the server.
🛡️ Proposed fix with error handling
 const handleSave = async () => {
   const seconds = parseInt(inputValue, 10);
   if (isNaN(seconds) || seconds <= 0) return;
-  await setKeyTtl(keyName, seconds);
-  setTtl(seconds);
-  setEditing(false);
+  try {
+    await setKeyTtl(keyName, seconds);
+    setTtl(seconds);
+    setEditing(false);
+  } catch (err) {
+    console.error("Failed to set TTL:", err);
+    // Optionally show toast/notification to user
+  }
 };

 const handlePersist = async () => {
-  await setKeyTtl(keyName, -1);
-  setTtl(-1);
+  try {
+    await setKeyTtl(keyName, -1);
+    setTtl(-1);
+  } catch (err) {
+    console.error("Failed to persist key:", err);
+  }
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/TtlManager.tsx` around lines 48 - 59, handleSave and handlePersist call setKeyTtl without handling API errors, which can leave local state inconsistent; wrap the setKeyTtl calls in try/catch inside handleSave and handlePersist, validate the parseInt result before calling, and only call setTtl (and setEditing(false)) on success, while logging or surfacing the error (e.g., via a passed showError/toast) when the API throws; reference the functions handleSave, handlePersist, setKeyTtl, setTtl, setEditing, inputValue and keyName to locate where to add the try/catch and conditional state updates.

console/src/components/browser/TtlManager.tsx-35-44 (1)
35-44: ⚠️ Potential issue | 🟡 Minor

Interval effect dependency may cause stale closures or multiple intervals.

The dependency `[ttl !== null && ttl > 0]` with the eslint-disable is unusual. While the intent is to start/stop the interval based on TTL positivity, this creates issues:

- The interval captures `setTtl` in closure, which is stable, so that's fine.
- However, when `keyName` changes (new key selected), a new TTL is fetched but the interval from the previous key may still be running since the boolean expression might remain `true`.

Consider including `keyName` in dependencies or restructuring:

🐛 Proposed fix to properly reset interval on key change
```diff
 // Live countdown
 useEffect(() => {
+  if (ttl === null || ttl <= 0) return;
+
-  if (ttl !== null && ttl > 0) {
-    intervalRef.current = setInterval(() => {
-      setTtl((prev) => (prev !== null && prev > 0 ? prev - 1 : prev));
-    }, 1000);
-  }
+  intervalRef.current = setInterval(() => {
+    setTtl((prev) => (prev !== null && prev > 0 ? prev - 1 : prev));
+  }, 1000);
+
   return () => {
     if (intervalRef.current) clearInterval(intervalRef.current);
   };
-}, [ttl !== null && ttl > 0]); // eslint-disable-line react-hooks/exhaustive-deps
+}, [ttl, keyName]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/TtlManager.tsx` around lines 35 - 44, The effect that manages the countdown interval (useEffect using intervalRef and setTtl) currently only depends on the boolean expression ttl !== null && ttl > 0 which can leave an interval running when a new key is selected; update the effect to depend on both ttl and keyName (or key identifier prop) so the interval is reset whenever the selected key changes, and inside the effect always clear any existing intervalRef.current before creating a new setInterval and on cleanup; remove the eslint-disable and ensure you only start the interval when ttl > 0 and setTtl is used in the interval callback.

console/README.md-80-83 (1)
80-83: ⚠️ Potential issue | 🟡 Minor

E2E suite counts appear stale.

Lines 80-83 say “7 tests across 6 spec files,” but this PR’s stated scope references 9 Playwright specs. Please sync the README count (or avoid hardcoded totals) to prevent confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/README.md` around lines 80 - 83, The README line "The suite contains 7 tests across 6 spec files — one per view (Dashboard, Browser, Console, Vectors, Graph, Memory)." is stale; either update the hardcoded totals to match the current 9 Playwright specs mentioned in the PR or remove the numeric counts entirely to avoid future drift. Locate that sentence and replace it with a neutral phrasing like "The suite contains Playwright specs covering the views: Dashboard, Browser, Console, Vectors, Graph, Memory, ..." or update the numbers to the correct counts so the README and the PR scope stay in sync.

console/src/lib/completions.ts-248-252 (1)
248-252: ⚠️ Potential issue | 🟡 Minor

Duplicate command entries will cause duplicate suggestions in Monaco.

`OBJECT HELP` appears on both line 248 and line 281. `SRANDMEMBER` appears on both line 71 and line 283. These duplicates will produce duplicate completion items.

🧹 Remove duplicates from the Misc section
OBJECT HELPappears on both line 248 and line 281.SRANDMEMBERappears on both line 71 and line 283. These duplicates will produce duplicate completion items.🧹 Remove duplicates from the Misc section
// ── Misc (1) ── { name: "SUBSTR", summary: "Get substring (deprecated, use GETRANGE)", args: "key start end", group: "string" }, - { name: "OBJECT HELP", summary: "Show OBJECT subcommands", args: "", group: "key" }, { name: "LCS", summary: "Find longest common subsequence", args: "key1 key2 [LEN] [IDX] [MINMATCHLEN len] [WITHMATCHLEN]", group: "string" }, - { name: "SRANDMEMBER", summary: "Get random member(s) from set", args: "key [count]", group: "set" }, { name: "WAITAOF", summary: "Wait for AOF flush on replicas", args: "numlocal numreplicas timeout", group: "server" }, ];Also applies to: 281-284
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/lib/completions.ts` around lines 248 - 252, The completions list contains duplicate entries (e.g., "OBJECT HELP" and "SRANDMEMBER") which produce duplicate Monaco suggestions; open console/src/lib/completions.ts and remove the redundant duplicate objects (or implement a dedupe step) so each command name appears only once — locate the literal entries "OBJECT HELP" and "SRANDMEMBER" in the array and keep a single canonical entry for each (or filter the array by unique name before exporting) to eliminate duplicate completion items.

scripts/test-integration.sh-50-62 (1)
50-62: ⚠️ Potential issue | 🟡 Minor

Readiness loop doesn't fail explicitly after timeout.

If the `/readyz` endpoint never becomes available within 30 seconds, the loop exits silently and the script continues to seed fixtures against a potentially unhealthy server. Add an explicit failure after the loop completes without success.

🛡️ Add explicit timeout failure
```diff
 echo "[integration] waiting for /readyz"
+ready=false
 for i in $(seq 1 30); do
   if curl -fsS "http://127.0.0.1:${ADMIN_PORT}/readyz" >/dev/null 2>&1; then
     echo "[integration] moon ready after ${i}s"
+    ready=true
     break
   fi
   if ! kill -0 "${MOON_PID}" 2>/dev/null; then
     echo "[integration] moon exited during startup; log:"
     tail -n 80 "${MOON_LOG}" || true
     exit 1
   fi
   sleep 1
 done
+
+if [ "$ready" = false ]; then
+  echo "[integration] moon did not become ready within 30s; log:"
+  tail -n 80 "${MOON_LOG}" || true
+  exit 1
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-integration.sh` around lines 50 - 62, The readiness loop may exit after 30s without detecting /readyz but currently continues; modify scripts/test-integration.sh to detect that timeout and fail explicitly: after the for loop, perform a final check (e.g., attempt curl -fsS "http://127.0.0.1:${ADMIN_PORT}/readyz" or test a flag set on success) and if it still fails, print an explicit error message, dump recent logs from ${MOON_LOG} (tail -n 80), and exit 1 so seeding does not proceed against an unhealthy server; reference ADMIN_PORT, MOON_PID, MOON_LOG and the /readyz readiness check to locate and implement this change.

console/src/components/browser/KeyList.tsx-88-97 (1)
88-97: ⚠️ Potential issue | 🟡 Minor

Many concurrent requests when enriching visible keys on scroll.

The effect re-triggers on scroll since `virtualItems` changes, and calls `enrichKey` for each entry with `type === null`. While `enrichKey` includes a guard (`if (entry.type !== null) return;`) to prevent re-enriching, if many visible keys are unenriched, you'll still fire numerous concurrent requests (e.g., 50 visible items × 3 API calls each = 150 requests per scroll).

Consider batching enrichment requests by key name, or tracking in-flight requests locally in the effect to avoid redundant function invocations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/KeyList.tsx` around lines 88 - 97, The effect over virtualItems in KeyList.tsx causes many concurrent enrichKey calls on scroll; change the effect to deduplicate and/or batch requests by collecting visible key names (from virtualItems -> keys[vItem.index].name) into a Set and then either call a new batch helper (e.g., enrichKeysBatch(names: string[])) or call enrichKey only once per name while tracking an in-effect inFlight Set to avoid duplicate invocations; ensure you reference virtualizer.getVirtualItems(), virtualItems, keys, and enrichKey when implementing the dedupe/batch logic so repeated scroll updates don't spawn redundant requests.

console/src/workers/umap.worker.ts-13-18 (1)
13-18: ⚠️ Potential issue | 🟡 Minor

Guard against degenerate cases where `count < 2`.

When `count` is 0 or 1, `nNeighbors` becomes negative or zero (`Math.min(15, 0-1) = -1`), which may cause undefined behavior in `umap-js`. UMAP requires at least 2 points to compute a meaningful embedding.

🛡️ Add early return for degenerate input
```diff
 self.onmessage = (e: MessageEvent<UmapWorkerRequest>) => {
   const { vectors, dims, count, nNeighbors = 15, minDist = 0.1 } = e.data;

+  // UMAP requires at least 2 points
+  if (count < 2) {
+    const positions = new Float32Array(count * 3);
+    const msg: UmapWorkerResponse = { type: "complete", positions };
+    self.postMessage(msg, { transfer: [positions.buffer] });
+    return;
+  }
+
   // Convert flattened Float32Array to array-of-arrays for umap-js
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/workers/umap.worker.ts` around lines 13 - 18, Guard against degenerate inputs before constructing the UMAP instance: if the point count (variable count) is less than 2, return early (or resolve with a sensible default) to avoid passing negative or zero nNeighbors into new UMAP. Update the code around the UMAP construction (the block that creates const umap = new UMAP({...}) and related nNeighbors/nEpochs calculations) to check count < 2 first and handle it (e.g., return an empty embedding or reject) so UMAP is only created when count >= 2.

console/src/types/graph.ts-5-8 (1)
5-8: ⚠️ Potential issue | 🟡 Minor

Make `x`/`y`/`z` optional, or keep them out of `GraphNode`.

The store keeps layout output in a separate `Float32Array`, and the worker API never writes coordinates back onto `GraphNode`. Requiring `x`, `y`, and `z` here doesn't match the current data flow and usually leads to dummy values or `as`-casts around `queryGraph()`.
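A sketch of the optional-coordinate shape, with positions read from the layout buffer instead. The fields besides `x`/`y`/`z` and the `nodePosition` helper are illustrative, not the file's actual definitions:

```typescript
// Layout coordinates are optional on the node itself; the force layout's
// output lives in a flat Float32Array of x,y,z triples indexed by node order.
interface GraphNode {
  id: string;
  labels?: string[]; // illustrative field
  properties?: Record<string, unknown>; // illustrative field
  x?: number; // only present if a layout ever writes them back
  y?: number;
  z?: number;
}

// Read a node's position from the layout buffer, defaulting to the origin
// when the buffer is shorter than expected.
function nodePosition(
  positions: Float32Array,
  index: number,
): [number, number, number] {
  const base = index * 3;
  return [positions[base] ?? 0, positions[base + 1] ?? 0, positions[base + 2] ?? 0];
}
```

With this shape, `queryGraph()` results compile without dummy coordinates, and render code addresses the buffer by node index.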
Verify each finding against the current code and only fix it if needed. In `@console/src/types/graph.ts` around lines 5 - 8, GraphNode currently requires numeric x, y, z which conflicts with the worker/store design; make x, y, z optional on GraphNode (change x: number; y: number; z: number; to x?: number; y?: number; z?: number;) or remove them from GraphNode and create a separate LayoutCoordinates type used only where coordinates are produced/consumed (e.g., the force worker and layout store). Update any call sites of queryGraph()/GraphNode consumers to handle optional coordinates (or cast to LayoutCoordinates where appropriate) and keep the GraphNode interface free of mandatory layout fields.

console/src/components/browser/editors/StringEditor.tsx-27-31 (1)
27-31: ⚠️ Potential issue | 🟡 Minor

Reset `mode` when `keyName`/`value` changes.

`mode` is only derived from the first `value`. After switching from a raw string to a JSON string (or back), this effect refreshes `editValue` but keeps the previous key's mode, so the new key can open in the wrong view.

Suggested fix
```diff
 useEffect(() => {
-  const formatted = detectedJson ? JSON.stringify(JSON.parse(value), null, 2) : value;
-  setEditValue(mode === "json" && detectedJson ? formatted : value);
+  const nextIsJson = isValidJson(value);
+  const nextMode: ViewMode = nextIsJson ? "json" : "raw";
+  setMode(nextMode);
+  setEditValue(
+    nextMode === "json" ? JSON.stringify(JSON.parse(value), null, 2) : value,
+  );
   setDirty(false);
-}, [keyName, value]); // eslint-disable-line react-hooks/exhaustive-deps
+}, [keyName, value]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/browser/editors/StringEditor.tsx` around lines 27 - 31, When keyName or value changes, reset the editor mode so the new key opens in the correct view: inside the useEffect that currently computes formatted and calls setEditValue/setDirty, also call setMode(detectedJson ? "json" : "text") (use the existing detectedJson, mode, setEditValue, setDirty, and setMode identifiers) before setting editValue; ensure the effect depends on keyName and value (and/or detectedJson) so mode is updated whenever the key/value changes.

console/src/components/vector/LassoSelect.tsx-31-55 (1)
31-55: ⚠️ Potential issue | 🟡 Minor

Capture the pointer outside the SVG during an active lasso.

Releasing the mouse outside this `<svg>` never triggers `onMouseUp`, so `drawing` stays `true` and the next re-entry keeps appending to the old polygon until Escape is pressed. Use pointer capture or window-level move/up listeners for the active drag.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/vector/LassoSelect.tsx` around lines 31 - 55, The lasso stays stuck because mouseup outside the SVG doesn't fire; update handleMouseDown/handleMouseUp/handleMouseMove to capture the pointer or add global window listeners: in handleMouseDown (where you call svgRef and setDrawing/setLassoPath) call element.setPointerCapture(e.pointerId) or register window.addEventListener('pointermove'/'pointerup') and in handleMouseUp release capture (element.releasePointerCapture) or remove the window listeners and call setDrawing(false) and finalize/reset setLassoPath; ensure svgRef.current is used to capture/release and cleanup listeners in the same scope so drawing is always cleared even if pointer is released outside the SVG.

src/admin/ws_bridge.rs-139-167 (1)
139-167: ⚠️ Potential issue | 🟡 Minor

Keep all `SELECT` handling local, including failures.

When `args[0]` is missing or unparsable, this falls through into `execute_command`. That breaks the per-WebSocket session contract in this module and makes invalid `SELECT` behavior depend on backend command handling instead of returning a deterministic client error.
Verify each finding against the current code and only fix it if needed. In `@src/admin/ws_bridge.rs` around lines 139 - 167, The SELECT branch (when cmd == "SELECT") must handle missing or unparsable args[0] locally instead of falling through to execute_command; if parsed.get("args").first() is missing or cannot be parsed into a usize, immediately build and return an error response JSON (e.g., {"error": "ERR invalid DB index", "type":"error"}) and include request_id if present, rather than allowing execute_command to handle it. Update the block around parsed / db / db_num / selected_db and ensure the failure path returns the error response (referencing cmd == "SELECT", parsed, selected_db, request_id, and execute_command) so SELECT always yields a deterministic per-WebSocket error on invalid input.

console/src/components/console/ResultPanel.tsx-233-237 (1)
233-237: ⚠️ Potential issue | 🟡 Minor

Actually reset the view override when the result changes.

The comment says this should happen, but `override` is never cleared. After a new query, the footer can highlight the previous manual mode while the body renders a different fallback view.

Suggested fix
```diff
-import { useState, useMemo, useCallback } from "react";
+import { useState, useMemo, useCallback, useEffect } from "react";
@@
 export function ResultPanel({ result, executing }: ResultPanelProps) {
   const autoView = useMemo(() => (result ? detectView(result.data) : "raw"), [result]);
   const [override, setOverride] = useState<ViewMode | null>(null);

-  // Reset override when result changes
+  useEffect(() => {
+    setOverride(null);
+  }, [result]);
+
   const currentView = override ?? autoView;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/components/console/ResultPanel.tsx` around lines 233 - 237, The override state is never cleared so the manual view remains highlighted after a new result; update the component to reset override when result (or result.data) changes by using an effect that calls setOverride(null) whenever the result changes (the computed autoView from useMemo should remain). Target the override/setOverride state declared with useState and the currentView calculation (override ?? autoView) — add a useEffect that depends on result (or result.data) to clear override on new results.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fc582ae2-264e-4b53-9968-5fbc918d430e
⛔ Files ignored due to path filters (3)
- `Cargo.lock` is excluded by `!**/*.lock`
- `console/dist/index.html` is excluded by `!**/dist/**`
- `console/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (131)

.github/workflows/console-integration.yml, .gitignore, .planning, Cargo.toml, console/.gitignore, console/README.md, console/index.html, console/package.json, console/playwright.config.ts, console/src/App.tsx, console/src/app.css, console/src/components/browser/KeyList.tsx, console/src/components/browser/KeyMetadata.tsx, console/src/components/browser/KeyToolbar.tsx, console/src/components/browser/NamespaceTree.tsx, console/src/components/browser/TtlManager.tsx, console/src/components/browser/ValuePanel.tsx, console/src/components/browser/editors/HashEditor.tsx, console/src/components/browser/editors/ListEditor.tsx, console/src/components/browser/editors/SetEditor.tsx, console/src/components/browser/editors/StreamViewer.tsx, console/src/components/browser/editors/StringEditor.tsx, console/src/components/browser/editors/ZSetEditor.tsx, console/src/components/console/Editor.tsx, console/src/components/console/HistoryPanel.tsx, console/src/components/console/ResultPanel.tsx, console/src/components/console/TabBar.tsx, console/src/components/dashboard/ClientsCard.tsx, console/src/components/dashboard/HitRatioCard.tsx, console/src/components/dashboard/InfoCards.tsx, console/src/components/dashboard/KeyspaceCard.tsx, console/src/components/dashboard/MemoryChart.tsx, console/src/components/dashboard/OpsChart.tsx, console/src/components/dashboard/SlowlogTable.tsx, console/src/components/graph/CypherInput.tsx, console/src/components/graph/GraphInfoPanel.tsx, console/src/components/graph/GraphScene.tsx, console/src/components/graph/LabelFilter.tsx, console/src/components/graph/NodeInspector.tsx, console/src/components/layout/AppShell.tsx, console/src/components/layout/Sidebar.tsx, console/src/components/memory/CommandStatsTable.tsx, console/src/components/memory/MemoryTreemap.tsx, console/src/components/memory/SlowlogPanel.tsx, console/src/components/ui/badge.tsx, console/src/components/ui/card.tsx, console/src/components/vector/ClusterStats.tsx, console/src/components/vector/ColorByControls.tsx, console/src/components/vector/HnswOverlay.tsx, console/src/components/vector/IndexMetadataPanel.tsx, console/src/components/vector/KnnSearchPanel.tsx, console/src/components/vector/LassoSelect.tsx, console/src/components/vector/PointCloudScene.tsx, console/src/components/vector/PointInspector.tsx, console/src/components/vector/UmapProgress.tsx, console/src/lib/api.ts, console/src/lib/completions.ts, console/src/lib/graph-api.ts, console/src/lib/monarch-cypher.ts, console/src/lib/monarch-resp.ts, console/src/lib/sse.ts, console/src/lib/utils.ts, console/src/lib/vector-api.ts, console/src/lib/ws.ts, console/src/main.tsx, console/src/stores/browser.ts, console/src/stores/console.ts, console/src/stores/graph.ts, console/src/stores/memory.ts, console/src/stores/metrics.ts, console/src/stores/vector.ts, console/src/types/browser.ts, console/src/types/console.ts, console/src/types/d3-force-3d.d.ts, console/src/types/graph.ts, console/src/types/memory.ts, console/src/types/metrics.ts, console/src/types/vector.ts, console/src/views/Browser.tsx, console/src/views/Console.tsx, console/src/views/Dashboard.tsx, console/src/views/GraphExplorer.tsx, console/src/views/Help.tsx, console/src/views/MemoryView.tsx, console/src/views/VectorExplorer.tsx, console/src/vite-env.d.ts, console/src/workers/force-layout.worker.ts, console/src/workers/umap.worker.ts, console/tests/e2e/benchmarks.spec.ts, console/tests/e2e/browser.spec.ts, console/tests/e2e/console.spec.ts, console/tests/e2e/dashboard.spec.ts, console/tests/e2e/fixtures.ts, console/tests/e2e/graph.spec.ts, console/tests/e2e/integration.spec.ts, console/tests/e2e/memory.spec.ts, console/tests/e2e/vectors.spec.ts, console/tests/unit/components/NamespaceTree.test.tsx, console/tests/unit/lib/completions.test.ts, console/tests/unit/lib/monarch-cypher.test.ts, console/tests/unit/lib/monarch-resp.test.ts, console/tests/unit/setup.ts, console/tests/unit/stores/browser.test.ts, console/tests/unit/stores/console.test.ts, console/tests/unit/stores/graph.test.ts, console/tests/unit/stores/memory.test.ts, console/tests/unit/stores/metrics.test.ts, console/tests/unit/stores/vector.test.ts, console/tsconfig.app.json, console/tsconfig.json, console/tsconfig.node.json, console/tsconfig.test.json, console/vite.config.ts, console/vitest.config.ts, scripts/gen-verification.sh, scripts/seed-console-fixtures.py, scripts/test-integration.sh, src/admin/console_gateway.rs, src/admin/hnsw_trace.rs, src/admin/http_server.rs, src/admin/http_server_support.rs, src/admin/memory_treemap.rs, src/admin/metrics_setup.rs, src/admin/mod.rs, src/admin/sse_stream.rs, src/admin/static_files.rs, src/admin/ws_bridge.rs, src/main.rs, src/shard/mesh.rs, src/vector/hnsw/search.rs, tests/console_gateway_test.rs
```ts
const vectorStr = queryVector.join(",");
const result = await execCommand("FT.SEARCH", [
  indexName,
  `*=>[KNN ${k} @vector $BLOB]`,
  "PARAMS", "2", "BLOB", vectorStr,
  "SORTBY", "__vector_score",
  "LIMIT", "0", String(k),
]);
```
🧩 Analysis chain
🏁 Script executed:
cat -n console/src/lib/vector-api.ts | head -90

Repository: pilotspace/moon
Length of output: 3449
🏁 Script executed:
# Search for DIALECT usage in the codebase
rg "DIALECT" --type ts --type js -i -B 2 -A 2

Repository: pilotspace/moon
Length of output: 351
🏁 Script executed:
# Check if there are any RediSearch protocol definitions or documentation
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.md" \) | xargs rg "FT\.SEARCH|VECTOR|KNN" -l | head -20

Repository: pilotspace/moon
Length of output: 477
🌐 Web query:
RediSearch FT.SEARCH vector KNN DIALECT parameter syntax requirements
💡 Result:
For RediSearch FT.SEARCH with vector KNN queries, the DIALECT parameter is required and must be set to 2 (or higher dialects like 3 or 4 where supported). Dialect 1 does not support vector search.

Syntax requirements:
- Append DIALECT 2 (or higher) at the end of the FT.SEARCH command.
- Vector KNN syntax: "*=>[KNN <num_neighbours> @<vector_field> $vec]", or with pre-filtering "(<filter>)=>[KNN <num_neighbours> @<vector_field> $vec]".
- Use PARAMS 2 vec <binary_vector_blob> to provide the query vector.
- Typically sort by distance: SORTBY <distance_field> ASC.
- Full example: FT.SEARCH <index> "*=>[KNN 10 @vector_field $vec AS dist]" PARAMS 2 vec <blob> SORTBY dist DIALECT 2

Dialect 2 introduced vector search support in RediSearch 2.4. Higher dialects add enhancements, but 2 is the minimum for vectors. The default dialect is 1 when none is specified.
Citations:
- 1: https://redis.io/docs/latest/develop/ai/search-and-query/advanced-concepts/dialects/
- 2: https://redis.io/docs/latest/commands/ft.search
- 3: https://redis.io/docs/latest/develop/ai/search-and-query/query/vector-search/
- 4: https://redis.io/docs/latest/commands/ft.search/
Fix FT.SEARCH vector query syntax: add DIALECT 2 and encode vector as binary blob.
The RediSearch protocol requires DIALECT 2 for vector search support (dialect 1 is the default and does not support vectors). Additionally, the vector parameter must be encoded as a binary blob, not a comma-separated string. The code at lines 60–67 omits DIALECT and sends vectorStr as CSV text, which violates the protocol and will cause the query to fail or be misinterpreted.
Reference example in console/src/views/Help.tsx correctly shows the expected format: FT.SEARCH embeddings "*=>[KNN 10 @v $query]" PARAMS 2 query "..." DIALECT 2 with binary vector encoding.
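For illustration, one way to produce the blob is an explicit little-endian float32 encoding. How the console's `execCommand` transports raw bytes end to end is an open question; this sketch (`toFloat32Blob` is a hypothetical helper) only shows the byte layout RediSearch expects for the vector parameter:

```typescript
// Encode a numeric vector as little-endian float32 bytes, the layout
// RediSearch expects for a FLOAT32 vector PARAMS blob.
function toFloat32Blob(vector: number[]): Uint8Array {
  const buf = new ArrayBuffer(vector.length * 4);
  const view = new DataView(buf);
  vector.forEach((v, i) => view.setFloat32(i * 4, v, true)); // true = little-endian
  return new Uint8Array(buf);
}
```

The argument list built around it would also append "DIALECT", "2" per the protocol requirement above.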
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@console/src/lib/vector-api.ts` around lines 60 - 67, The FT.SEARCH call is
using a CSV string (vectorStr) and missing DIALECT 2; convert queryVector (the
numeric array) into a binary Float32 buffer and pass that buffer as the PARAMS
value (instead of vectorStr), and add "DIALECT", "2" to the argument list for
execCommand; update the call site where execCommand("FT.SEARCH", [...]) is built
(symbols: queryVector, vectorStr, k, indexName, execCommand) so the param key
(e.g., "BLOB" or "query") receives the binary blob and the command arguments
include DIALECT 2.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/command/mod.rs (1)
524-595: ⚠️ Potential issue | 🔴 Critical

Test script entries are incomplete — 6 commands lack coverage.

Commands are missing test entries in `scripts/test-commands.sh` and `scripts/test-consistency.sh`: FLUSHDB, HEALTHZ, PFCOUNT, PFMERGE, SLOWLOG, and ZMSCORE. Per project rules, every new command requires at least one entry in each test script. Ensure these are added.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/command/mod.rs` around lines 524 - 595, Add missing test-script entries for the six commands introduced in mod.rs: server_admin::flushdb (FLUSHDB), connection::healthz (HEALTHZ), hll::pfcount (PFCOUNT), hll::pfmerge (PFMERGE), crate::admin::slowlog::handle_slowlog/global_slowlog (SLOWLOG), and sorted_set::zmscore (ZMSCORE). Update scripts/test-commands.sh and scripts/test-consistency.sh to include at least one test-line per command (invoke the command with minimal valid args and assert expected response/exit code), using the exact command names (FLUSHDB, HEALTHZ, PFCOUNT, PFMERGE, SLOWLOG, ZMSCORE) so CI covers these code paths. Ensure test entries follow existing script conventions (same formatting, invocation helpers, and cleanup) to avoid flakiness.
🧹 Nitpick comments (2)
console/src/lib/api.ts (2)
104-139: Sequential deletion is fine for UI context but consider batching for large selections.

The sequential loop enables per-key failure tracking, which is good for UX. For bulk operations (100+ keys), consider using a batched DELETE endpoint if available, or `Promise.allSettled` with a concurrency limit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/lib/api.ts` around lines 104 - 139, The current deleteKeys(keys: string[]) implementation performs sequential per-key DELETEs which is fine for small sets but will be slow for large selections; change deleteKeys to use a batched approach: if a bulk DELETE endpoint exists on API_BASE (e.g., POST/DELETE /keys with body of keys) call that and parse total/failed, otherwise run parallel requests with Promise.allSettled and a concurrency limiter (preserving per-key success/failure tracking and building the failed array) so UX toasts still reflect total deleted and failed keys; ensure the function still returns the total deleted and retains the same toast/error message logic.
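A concurrency-limited mapper that still yields per-key results could look like this sketch; `mapWithLimit` is not an existing helper in the codebase, and the limit of 8 below is an arbitrary example:

```typescript
// Run fn over items with at most `limit` promises in flight, preserving
// order and capturing per-item success/failure like Promise.allSettled.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<PromiseSettledResult<R>[]> {
  const results: PromiseSettledResult<R>[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // single-threaded JS: no race on the shared index
      try {
        results[i] = { status: "fulfilled", value: await fn(items[i]) };
      } catch (reason) {
        results[i] = { status: "rejected", reason };
      }
    }
  }
  const workers = Math.max(1, Math.min(limit, items.length));
  await Promise.all(Array.from({ length: workers }, worker));
  return results;
}
```

deleteKeys could then map the selection with, say, a limit of 8 and derive the failed-key list from the rejected entries, keeping the existing per-key UX.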
240-265: Consider batching key enrichment for better throughput.

The current loop processes one key at a time despite using `Promise.all` for TYPE + MEMORY. For large key sets, you could batch the enrichment (e.g., 50 keys at a time) to improve throughput while still respecting the server.

♻️ Optional: Batch enrichment example
```diff
   do {
     const result = await scanKeys(cursor, "*", 500);
     cursor = result.cursor;
-    for (const key of result.keys) {
-      if (keys.length >= maxKeys) {
-        cursor = "0";
-        break;
-      }
-      const [typeResult, memResult] = await Promise.all([
-        execCommand("TYPE", [key]),
-        execCommand("MEMORY", ["USAGE", key]),
-      ]);
-      keys.push({
-        key,
-        type: String(typeResult).toLowerCase(),
-        bytes: typeof memResult === "number" ? memResult : 0,
-      });
-    }
+    const batch = result.keys.slice(0, maxKeys - keys.length);
+    if (batch.length === 0) { cursor = "0"; break; }
+    const enriched = await Promise.all(
+      batch.map(async (key) => {
+        const [typeResult, memResult] = await Promise.all([
+          execCommand("TYPE", [key]),
+          execCommand("MEMORY", ["USAGE", key]),
+        ]);
+        return {
+          key,
+          type: String(typeResult).toLowerCase(),
+          bytes: typeof memResult === "number" ? memResult : 0,
+        };
+      })
+    );
+    keys.push(...enriched);
+    if (keys.length >= maxKeys) cursor = "0";
   } while (cursor !== "0");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@console/src/lib/api.ts` around lines 240 - 265, fetchMemoryTreemap currently enriches keys one-by-one using execCommand inside the scan loop which is slow for large sets; change it to batch enrichment: while scanning collect raw key names into a buffer (up to a chosen batch size, e.g., 50 or remaining to reach maxKeys), then call Promise.all (or Promise.allSettled) on mapped execCommand("TYPE", key) and execCommand("MEMORY", ["USAGE", key]) pairs for that batch, transform the results into the same {key,type,bytes} objects and push them into the keys array, and continue scanning until cursor="0" or keys.length >= maxKeys; update flow in fetchMemoryTreemap and keep calling buildTreemapFromKeys(keys) at the end.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/admin/auth.rs`:
- Around line 77-83: verify_header currently uses token.split_once('.') which
splits at the first '.' and will truncate user IDs containing dots; change the
split to use the right-most separator (e.g. token.rsplit_once('.') in
verify_header) so the tuple (user, b64) captures the full user portion and the
signature portion correctly for HMAC verification.
In `@src/admin/rate_limit.rs`:
- Around line 57-75: The cleanup task branch uses Tokio APIs (tokio::spawn,
tokio::runtime::Handle::try_current, tokio::time::interval) unguarded; wrap the
entire async cleanup spawn logic inside a #[cfg(feature = "runtime-tokio")]
block (or provide an alternative no-op for non-Tokio builds) so the module can
compile under other runtimes; locate the code around the enabled branch where
Arc::downgrade(&limiter) and the weak.upgrade() loop call
l.cleanup(Duration::from_secs(300)) and apply the cfg guard or add a conditional
stub function that compiles when runtime-tokio is disabled.
In `@src/admin/scan_fanout.rs`:
- Around line 123-126: The code currently treats a non-Array Frame for
keys_frame by returning an empty Vec (ks), silently dropping keys; change this
to surface a protocol error instead: in the match on keys_frame (the branch
handling Frame::Array(ks) / _), return an Err(...) for the non-Array case
(similar to other malformed-reply handling in this module) rather than
Vec::new(), so that the caller (scan_fanout / surrounding function) receives an
error when as_bytes/frame parsing fails; reference the keys_frame match, the
as_bytes conversion used in the Frame::Array arm, and the ks variable to locate
and replace the fallback with an Err describing a malformed SCAN reply.
In `@src/admin/sse_stream.rs`:
- Around line 79-85: The SSE response builder in Response::builder() is setting
a hardcoded header header("access-control-allow-origin", "*") which bypasses the
policy-driven CORS applied by the middleware; remove the explicit
access-control-allow-origin header from the SSE handler (the Response::builder()
call that constructs the StreamBody) so that CORS is solely enforced by the
existing middleware (src/admin/middleware.rs) and ensure no other code in
sse_stream.rs sets that header elsewhere.
In `@src/admin/static_files.rs`:
- Around line 45-61: The SPA fallback currently serves index.html for any
missing ConsoleAssets (ConsoleAssets::get returning None); change it to only
return index.html for route-like URLs and return a proper 404 for missing
asset-like requests. Detect this in the None branch by inspecting the requested
path (e.g., the path string passed into the static handler or the variable used
to lookup ConsoleAssets) and treating requests whose last path segment contains
a file extension (a dot like ".js", ".css", ".png", etc.) as assets — on those
return StatusCode::NOT_FOUND/plain text; for requests without an extension
(route URLs) return index.html as before. Update the logic around
ConsoleAssets::get and the fallback match so asset misses yield 404s while route
misses yield the SPA index.html.
In `@src/admin/ws_bridge.rs`:
- Around line 53-68: The send_queue_depth counter in the receive/process/send
loop (send_queue_depth, MAX_SEND_QUEUE, ws_sender) cannot accumulate because
each send is awaited, so replace this false soft-limit with a bounded outbound
channel and dedicated writer task: create a bounded mpsc channel for outgoing
messages, spawn a writer task that pulls from the channel and calls
ws_sender.send(Message::text(...)).await, and in the main receive loop try_send
to the channel and drop/log the message if the channel is full (enforcing the
soft limit); alternatively implement gating by checking sink readiness before
reading. Update references in this module to push outgoing payloads into the new
channel and remove the send_queue_depth counter logic.
In `@src/command/mod.rs`:
- Around line 790-797: DEBUG is marked only as ADMIN in the command metadata but
is routed through the read path (dispatch_read_inner,
is_dispatch_read_supported) and the handlers gate on !metadata::is_write(), so
the metadata and documented contract for dispatch_read() (which states it is
only called when metadata::is_read() is true) are inconsistent; fix by adding
the READONLY flag to DEBUG's metadata (making it READONLY|ADMIN like INFO or RA)
so metadata::is_read() returns true for DEBUG, or alternatively update the
documentation/comment at dispatch_read()/is_dispatch_read_supported() to reflect
that commands allowed on the read path may be ADMIN-only and the real gate is
metadata::is_write().
In `@src/command/server_admin.rs`:
- Around line 297-301: memory_stats currently reports the current estimated
memory as "peak.allocated", which is misleading because memory() and
memory_readonly() call memory_stats with db.estimated_memory(); change this by
either (A) adding and maintaining a true peak counter (e.g., track a peak_memory
field that is updated in the logic that calls or computes db.estimated_memory()
and use that in memory_stats) or (B) renaming the exported key from
"peak.allocated" to a Moon-specific current field like "allocated" or
"current.allocated" and update all callers (memory(), memory_readonly(), and any
consumers) to use the new name; locate the helper function memory_stats and the
callers memory() and memory_readonly() as the places to implement the chosen fix
and ensure backward compatibility or documentation if renaming.
- Around line 130-133: Replace the heap-allocation via format!() when
constructing the error frame (the Err(Frame::Error(Bytes::from(format!(...))))
use) with a non-allocating stack-based format: prepare a fixed-size stack buffer
(e.g. [u8; 128] or appropriate), write the static prefix "ERR DEBUG subcommand
'" into it, append the sub slice (use String::from_utf8_lossy(sub).as_bytes()
only if that avoids allocation, otherwise validate/append sub bytes directly),
append the trailing "' not supported", and use itoa/fast integer writers if you
need to format numbers; then create the Bytes from the written buffer (e.g.
Bytes::copy_from_slice(&buf[..len])). Apply the same change to the other
occurrences flagged (the similar format! uses at the other noted locations) so
command-path code avoids allocations.
In `@src/config.rs`:
- Around line 50-53: The docstring promises console_rate_burst defaults to 2x
console_rate_limit but the struct uses default_value_t = 2000.0, so change
console_rate_burst to an Option<f64> (remove default_value_t) and after parsing
compute the effective burst from console_rate_limit (e.g., if
config.console_rate_burst.is_none() set effective_burst = 2.0 *
config.console_rate_limit), storing or returning that derived value where the
limiter is constructed; update references that expect a f64 to use the derived
value and adjust the #[arg(...)] on console_rate_burst accordingly and the
docstring to match the new behavior (use symbols console_rate_burst and
console_rate_limit to locate and modify the code).
In `@src/main.rs`:
- Around line 129-174: The console auth/CORS policy construction (the block that
computes (console_auth, console_cors) using config.console_auth_required,
moon::admin::auth::AuthPolicy::enabled, and moon::admin::cors::CorsPolicy::new)
runs after the early check config return (config.check_config) so validation is
skipped; move this construction into the config validation path (or execute it
before the `if config.check_config { return Ok(()) }` early return) so
wildcard-CORS/auth conflicts and auth-policy construction errors are detected
during `--check-config`.
- Around line 177-188: The admin server is started by
moon::admin::metrics_setup::init_metrics before the global gateway is
registered, allowing /api/v1/* and /ws to race with an unset gateway; fix by
ensuring the gateway is registered before starting the HTTP server or making the
HTTP layer return a deterministic 503 until registration completes: either move
the call to set_global_gateway(...) to run prior to calling init_metrics
(referencing set_global_gateway and init_metrics), or change init_metrics to
accept or expose a readiness flag (the readiness_flag returned now) that is only
flipped true after set_global_gateway runs so all handlers check that flag and
return 503 until the gateway is set.
In `@tests/admin_auth_cors_ratelimit.rs`:
- Around line 77-89: Remove the dead call to bin_path() in the timeout branch of
the loop in tests/admin_auth_cors_ratelimit.rs (the "let _ = bin_path();" on the
path where Instant::now() >= deadline) because it does nothing; either delete
that statement or replace it with a meaningful action (e.g., logging) if
intended, leaving the rest of the timeout handling to break out and return None
(so the Moon { child, port, admin } return path remains unchanged).
In `@tests/scan_fanout_multishard.rs`:
- Around line 17-18: The test uses hard-coded RESP_PORT and ADMIN_PORT and
spawns a `moon` child without guaranteeing cleanup, so reserve free ports
up-front (e.g., bind to port 0 or use a helper to claim available ports and
assign them to RESP_PORT/ADMIN_PORT variables) before calling `spawn` and
replace the constants with those reserved values, and wrap the spawned child
process in a drop guard (a scoped RAII guard around the child returned by the
`spawn_moon`/spawn code) that unconditionally kills and awaits the child on
panic or normal exit; also ensure `wait_for_port()` is called against the
reserved ports tied to that child so stale processes cannot be mistaken for the
test child.
---
Outside diff comments:
In `@src/command/mod.rs`:
- Around line 524-595: Add missing test-script entries for the six commands
introduced in mod.rs: server_admin::flushdb (FLUSHDB), connection::healthz
(HEALTHZ), hll::pfcount (PFCOUNT), hll::pfmerge (PFMERGE),
crate::admin::slowlog::handle_slowlog/global_slowlog (SLOWLOG), and
sorted_set::zmscore (ZMSCORE). Update scripts/test-commands.sh and
scripts/test-consistency.sh to include at least one test-line per command
(invoke the command with minimal valid args and assert expected response/exit
code), using the exact command names (FLUSHDB, HEALTHZ, PFCOUNT, PFMERGE,
SLOWLOG, ZMSCORE) so CI covers these code paths. Ensure test entries follow
existing script conventions (same formatting, invocation helpers, and cleanup)
to avoid flakiness.
---
Nitpick comments:
In `@console/src/lib/api.ts`:
- Around line 104-139: The current deleteKeys(keys: string[]) implementation
performs sequential per-key DELETEs which is fine for small sets but will be
slow for large selections; change deleteKeys to use a batched approach: if a
bulk DELETE endpoint exists on API_BASE (e.g., POST/DELETE /keys with body of
keys) call that and parse total/failed, otherwise run parallel requests with
Promise.allSettled and a concurrency limiter (preserving per-key success/failure
tracking and building the failed array) so UX toasts still reflect total deleted
and failed keys; ensure the function still returns the total deleted and retains
the same toast/error message logic.
- Around line 240-265: fetchMemoryTreemap currently enriches keys one-by-one
using execCommand inside the scan loop which is slow for large sets; change it
to batch enrichment: while scanning collect raw key names into a buffer (up to a
chosen batch size, e.g., 50 or remaining to reach maxKeys), then call
Promise.all (or Promise.allSettled) on mapped execCommand("TYPE", key) and
execCommand("MEMORY", ["USAGE", key]) pairs for that batch, transform the
results into the same {key,type,bytes} objects and push them into the keys
array, and continue scanning until cursor="0" or keys.length >= maxKeys; update
flow in fetchMemoryTreemap and keep calling buildTreemapFromKeys(keys) at the
end.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a0298d37-146d-4c81-8aa8-3b6d96e2c0fc
⛔ Files ignored due to path filters (3)
`Cargo.lock` is excluded by `!**/*.lock`; `console/dist/index.html` is excluded by `!**/dist/**`; `console/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (39)
`.planning`, `Cargo.toml`, `console/.npmrc`, `console/package.json`, `console/src/lib/api.ts`, `console/src/lib/toast.ts`, `console/src/main.tsx`, `console/tests/e2e/benchmarks.spec.ts`, `console/tests/e2e/browser.spec.ts`, `console/tests/e2e/console.spec.ts`, `console/tests/e2e/dashboard.spec.ts`, `console/tests/e2e/fixtures.ts`, `console/tests/e2e/graph.spec.ts`, `console/tests/e2e/integration.spec.ts`, `console/tests/e2e/memory.spec.ts`, `console/tests/e2e/vectors.spec.ts`, `console/vite.config.ts`, `src/admin/auth.rs`, `src/admin/console_gateway.rs`, `src/admin/cors.rs`, `src/admin/http_server.rs`, `src/admin/http_server_support.rs`, `src/admin/metrics_setup.rs`, `src/admin/middleware.rs`, `src/admin/mod.rs`, `src/admin/rate_limit.rs`, `src/admin/scan_fanout.rs`, `src/admin/sse_stream.rs`, `src/admin/static_files.rs`, `src/admin/ws_bridge.rs`, `src/command/key.rs`, `src/command/metadata.rs`, `src/command/mod.rs`, `src/command/server_admin.rs`, `src/config.rs`, `src/main.rs`, `tests/admin_auth_cors_ratelimit.rs`, `tests/cmd_flush_dbsize_debug_memory.rs`, `tests/scan_fanout_multishard.rs`
✅ Files skipped from review due to trivial changes (9)
- console/.npmrc
- console/tests/e2e/browser.spec.ts
- .planning
- console/tests/e2e/console.spec.ts
- console/tests/e2e/graph.spec.ts
- console/tests/e2e/dashboard.spec.ts
- console/package.json
- console/tests/e2e/fixtures.ts
- tests/cmd_flush_dbsize_debug_memory.rs
🚧 Files skipped from review as they are similar to previous changes (8)
- console/tests/e2e/vectors.spec.ts
- console/src/main.tsx
- console/vite.config.ts
- console/tests/e2e/memory.spec.ts
- console/tests/e2e/benchmarks.spec.ts
- console/tests/e2e/integration.spec.ts
- src/admin/metrics_setup.rs
- src/admin/mod.rs
```rust
if enabled {
    // Spawn cleanup on the current tokio runtime. The weak reference
    // ensures the task exits automatically when the last Arc is dropped.
    let weak = Arc::downgrade(&limiter);
    // `tokio::spawn` requires a tokio runtime context. Callers in
    // tests (`#[tokio::test]`) and in the admin server runtime both
    // satisfy that. If invoked outside a runtime we silently skip.
    if tokio::runtime::Handle::try_current().is_ok() {
        tokio::spawn(async move {
            let mut ticker = tokio::time::interval(Duration::from_secs(60));
            loop {
                ticker.tick().await;
                let Some(l) = weak.upgrade() else {
                    return;
                };
                l.cleanup(Duration::from_secs(300));
            }
        });
    }
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, explore the structure of src/admin/rate_limit.rs
head -100 src/admin/rate_limit.rs
```

Repository: pilotspace/moon. Length of output: 3842

🏁 Script executed:

```shell
# Check for cfg guards in the file
rg '\[cfg' src/admin/rate_limit.rs -A 2
```

Repository: pilotspace/moon. Length of output: 193

🏁 Script executed:

```shell
# Check if the entire admin module is cfg-gated
head -50 src/admin/mod.rs
```

Repository: pilotspace/moon. Length of output: 1100

🏁 Script executed:

```shell
# Look at Cargo.toml to understand feature structure
cat Cargo.toml | grep -A 20 '\[features\]'
```

Repository: pilotspace/moon. Length of output: 1325

🏁 Script executed:

```shell
# Search for runtime abstraction patterns in the codebase
rg 'cfg.*runtime-tokio|cfg.*runtime-monoio' src/ -l
```

Repository: pilotspace/moon. Length of output: 758

🏁 Script executed:

```shell
# Check if there are any documented constraints or feature conflicts
rg 'console.*runtime|runtime.*console' . --type toml --type md
```

Repository: pilotspace/moon. Length of output: 41

🏁 Script executed:

```shell
# Look for integration tests or examples that might show feature combinations
fd -e toml -type f | xargs rg 'console.*runtime|runtime.*console' 2>/dev/null
```

Repository: pilotspace/moon. Length of output: 231

🏁 Script executed:

```shell
# Check for any references to "console requires" or similar constraints
rg 'console.*tokio|console.*require' src/ --type rs -i
```

Repository: pilotspace/moon. Length of output: 85

🏁 Script executed:

```shell
# Verify if the issue is actually a problem by checking what happens with the feature combo
# Look at how hyper is used in console (hyper typically is runtime-agnostic but may have tokio deps)
rg 'hyper-tungstenite' . --type toml -A 3 -B 3
```

Repository: pilotspace/moon. Length of output: 1397
Cleanup task uses Tokio APIs without explicit runtime feature guard.
The code directly calls tokio::spawn, tokio::runtime::Handle::try_current(), and tokio::time::interval inside the enabled branch without a #[cfg(feature = "runtime-tokio")] guard or runtime abstraction. While the console feature depends on tokio-stream (making it implicitly Tokio-bound), there is no explicit constraint preventing the module from being compiled with only runtime-monoio.
Add a #[cfg(feature = "runtime-tokio")] guard around the cleanup task spawn, or provide a no-op stub for non-Tokio runtimes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/admin/rate_limit.rs` around lines 57 - 75, The cleanup task branch uses
Tokio APIs (tokio::spawn, tokio::runtime::Handle::try_current,
tokio::time::interval) unguarded; wrap the entire async cleanup spawn logic
inside a #[cfg(feature = "runtime-tokio")] block (or provide an alternative
no-op for non-Tokio builds) so the module can compile under other runtimes;
locate the code around the enabled branch where Arc::downgrade(&limiter) and the
weak.upgrade() loop call l.cleanup(Duration::from_secs(300)) and apply the cfg
guard or add a conditional stub function that compiles when runtime-tokio is
disabled.
```rust
/// Token-bucket burst capacity for the admin rate limiter.
/// Default: 2x --console-rate-limit.
#[arg(long = "console-rate-burst", default_value_t = 2000.0)]
pub console_rate_burst: f64,
```
The documented burst default is not what this parser does.
With default_value_t = 2000.0, --console-rate-limit 25 still leaves burst at 2000, not 50 as the docstring promises. Make console_rate_burst optional and derive the effective default from console_rate_limit after parsing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config.rs` around lines 50 - 53, The docstring promises
console_rate_burst defaults to 2x console_rate_limit but the struct uses
default_value_t = 2000.0, so change console_rate_burst to an Option<f64> (remove
default_value_t) and after parsing compute the effective burst from
console_rate_limit (e.g., if config.console_rate_burst.is_none() set
effective_burst = 2.0 * config.console_rate_limit), storing or returning that
derived value where the limiter is constructed; update references that expect a
f64 to use the derived value and adjust the #[arg(...)] on console_rate_burst
accordingly and the docstring to match the new behavior (use symbols
console_rate_burst and console_rate_limit to locate and modify the code).
```rust
// ── Admin/console hardening (HARD-01/02/03, Phase 137) ──────────
// Build the auth + CORS policies BEFORE the admin listener binds so
// misconfiguration (wildcard CORS + auth required, empty secret) fails
// fast and never opens a port that would satisfy probes while silently
// accepting unauthenticated requests.
//
// The rate limiter is constructed inside the admin runtime (its
// cleanup task needs `tokio::spawn`); we thread the raw rps/burst
// through to `spawn_admin_server`.
#[cfg(feature = "console")]
let (console_auth, console_cors) = {
    let auth_policy = if config.console_auth_required {
        let secret = if config.console_auth_secret.is_empty() {
            // Operator did not supply a secret: generate an ephemeral
            // 32-byte secret and warn that issued tokens won't survive
            // restart.
            let bytes: [u8; 32] = rand::random();
            use base64::Engine;
            let s = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(bytes);
            tracing::warn!(
                "--console-auth-required set without --console-auth-secret; \
                 generated ephemeral secret (tokens will not survive restart). \
                 Set --console-auth-secret=... for reproducible deploys."
            );
            s
        } else {
            config.console_auth_secret.clone()
        };
        match moon::admin::auth::AuthPolicy::enabled(secret.as_bytes()) {
            Ok(p) => std::sync::Arc::new(p),
            Err(e) => return Err(anyhow::anyhow!("--console-auth-secret: {}", e)),
        }
    } else {
        std::sync::Arc::new(moon::admin::auth::AuthPolicy::disabled())
    };

    let cors_policy = match moon::admin::cors::CorsPolicy::new(
        &config.console_cors_origin,
        config.console_auth_required,
    ) {
        Ok(p) => std::sync::Arc::new(p),
        Err(e) => return Err(anyhow::anyhow!(e)),
    };

    (auth_policy, cors_policy)
};
```
--check-config still skips the new console validation.
This block is after the early if config.check_config { return Ok(()) }, so wildcard-CORS/auth conflicts and auth-policy construction errors are only caught on a real boot. Move the console policy construction into the validation path, or run it before the early return.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main.rs` around lines 129 - 174, The console auth/CORS policy
construction (the block that computes (console_auth, console_cors) using
config.console_auth_required, moon::admin::auth::AuthPolicy::enabled, and
moon::admin::cors::CorsPolicy::new) runs after the early check config return
(config.check_config) so validation is skipped; move this construction into the
config validation path (or execute it before the `if config.check_config {
return Ok(()) }` early return) so wildcard-CORS/auth conflicts and auth-policy
construction errors are detected during `--check-config`.
```rust
let readiness_flag = moon::admin::metrics_setup::init_metrics(
    config.admin_port,
    &config.bind,
    #[cfg(feature = "console")]
    console_auth,
    #[cfg(feature = "console")]
    console_cors,
    #[cfg(feature = "console")]
    config.console_rate_limit,
    #[cfg(feature = "console")]
    config.console_rate_burst,
);
```
The admin server can start before the console gateway is registered.
Per src/admin/metrics_setup.rs:41-85, init_metrics() spawns the admin HTTP server immediately, but set_global_gateway(...) happens much later. Early /api/v1/* or /ws requests can race with an unset gateway during startup. Either register the gateway first or have the HTTP layer return a deterministic 503 until registration completes.
Also applies to: 306-321
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main.rs` around lines 177 - 188, The admin server is started by
moon::admin::metrics_setup::init_metrics before the global gateway is
registered, allowing /api/v1/* and /ws to race with an unset gateway; fix by
ensuring the gateway is registered before starting the HTTP server or making the
HTTP layer return a deterministic 503 until registration completes: either move
the call to set_global_gateway(...) to run prior to calling init_metrics
(referencing set_global_gateway and init_metrics), or change init_metrics to
accept or expose a readiness flag (the readiness_flag returned now) that is only
flipped true after set_global_gateway runs so all handlers check that flag and
return 503 until the gateway is set.
```rust
loop {
    if Instant::now() >= deadline {
        let _ = bin_path();
        break;
    }
    let url = format!("http://127.0.0.1:{}/healthz", admin);
    if let Ok(resp) = ureq::get(&url).timeout(Duration::from_millis(500)).call()
        && resp.status() == 200
    {
        return Some(Moon { child, port, admin });
    }
    thread::sleep(Duration::from_millis(100));
}
```
Dead code in timeout branch.
Line 79 calls bin_path() but discards the result. This appears to be leftover debug code or a copy-paste artifact.
🧹 Suggested fix

```diff
 if Instant::now() >= deadline {
-    let _ = bin_path();
     break;
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
loop {
    if Instant::now() >= deadline {
        break;
    }
    let url = format!("http://127.0.0.1:{}/healthz", admin);
    if let Ok(resp) = ureq::get(&url).timeout(Duration::from_millis(500)).call()
        && resp.status() == 200
    {
        return Some(Moon { child, port, admin });
    }
    thread::sleep(Duration::from_millis(100));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/admin_auth_cors_ratelimit.rs` around lines 77 - 89, Remove the dead
call to bin_path() in the timeout branch of the loop in
tests/admin_auth_cors_ratelimit.rs (the "let _ = bin_path();" on the path where
Instant::now() >= deadline) because it does nothing; either delete that
statement or replace it with a meaningful action (e.g., logging) if intended,
leaving the rest of the timeout handling to break out and return None (so the
Moon { child, port, admin } return path remains unchanged).
```rust
const RESP_PORT: u16 = 16499;
const ADMIN_PORT: u16 = 16500;
```
Make the spawned server lifecycle failure-safe.
A panic before Lines 137-139 leaves moon running on the hard-coded ports, and the next run can then pass wait_for_port() against the stale process instead of the child from this test. Reserve free ports up front and wrap the child in a drop guard so cleanup is unconditional.
Also applies to: 73-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/scan_fanout_multishard.rs` around lines 17 - 18, The test uses
hard-coded RESP_PORT and ADMIN_PORT and spawns a `moon` child without
guaranteeing cleanup, so reserve free ports up-front (e.g., bind to port 0 or
use a helper to claim available ports and assign them to RESP_PORT/ADMIN_PORT
variables) before calling `spawn` and replace the constants with those reserved
values, and wrap the spawned child process in a drop guard (a scoped RAII guard
around the child returned by the `spawn_moon`/spawn code) that unconditionally
kills and awaits the child on panic or normal exit; also ensure
`wait_for_port()` is called against the reserved ports tied to that child so
stale processes cannot be mistaken for the test child.
Archived phases 128-137 to .planning/milestones/v0.1.5-phases/. Full milestone artifacts at .planning/milestones/v0.1.5-*. See PR #73 for the implementation.
- CHANGELOG.md: add v0.1.5 Moon Console section (unblocks Lint gate)
- console-integration.yml: pin dtolnay action to 1.94.1 and install rustfmt/clippy up-front to avoid rustup 'rustfmt-preview' vs 'rustfmt' conflict when cargo syncs via rust-toolchain.toml
- Dockerfile: bump RUST_VERSION 1.88 → 1.94 (cargo-chef panicked because moon requires MSRV 1.94)
- release.yml: cargo-cyclonedx rejects --output-file; switch to --override-filename + mv to artifacts/ for deterministic paths
Review Response

Thanks for the thorough reviews. Below is a disposition for each finding.

Fixed in follow-up commits on this PR

Commits
Plus CI unblockers in commits
Obsolete on current HEAD

Qodo Bug #4 — unauthenticated admin command exec. This finding is against commit See

Deferred with justification

Qodo Rule #1 — flume channels vs

Flagging as v0.1.6 followup — appropriate scope is to migrate all admin-lane channels uniformly, not just console dispatch.

Qodo Rule #2/#3 — unit tests in leaf files instead of mod.rs. Codebase-wide audit:

Moving only the 9 flagged files creates asymmetry, not compliance. The CLAUDE.md rule (“Test code stays in mod.rs”) was authored for the narrower

Flagging as tooling followup — either (a) update the rule to reflect the observed convention, or (b) plan a sweep across all 36 affected files as a separate PR.

Low-priority CodeRabbit findings not addressed
Actionable comments posted: 8
♻️ Duplicate comments (2)
src/admin/auth.rs (1)
66-67: ⚠️ Potential issue | 🟠 Major

Remove lint-suppressed `unwrap()` from non-test auth code.

Line 66 and Line 93 suppress `clippy::unwrap_used` and call `.unwrap()` in library code. This bypasses the repo ratchet and keeps a panic path in the auth flow.

Proposed minimal fix

```diff
-#[allow(clippy::unwrap_used)] // invariant: secret verified non-empty in `enabled`
-let mut mac = HmacSha256::new_from_slice(&self.secret).unwrap();
+let mut mac = match HmacSha256::new_from_slice(&self.secret) {
+    Ok(mac) => mac,
+    Err(_) => panic!("AuthPolicy invariant violated: non-empty secret required"),
+};
@@
-#[allow(clippy::unwrap_used)] // invariant: secret non-empty
-let mut mac = HmacSha256::new_from_slice(&self.secret).unwrap();
+let mut mac = match HmacSha256::new_from_slice(&self.secret) {
+    Ok(mac) => mac,
+    Err(_) => return Err("invalid auth policy secret"),
+};
```

```shell
#!/bin/bash
# Verify unwrap/expect and suppressions are removed from this library auth path.
rg -n --type=rust -C2 '#\[allow\(clippy::unwrap_used\)\]|\.unwrap\(|\.expect\(' src/admin/auth.rs
```

As per coding guidelines: "No `unwrap()` or `expect()` in library code outside tests. Use pattern matching and `if let` instead of unwrap."

Also applies to: 93-94
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/admin/auth.rs` around lines 66 - 67, Replace the panic-prone unwraps and the #[allow(clippy::unwrap_used)] around HmacSha256::new_from_slice(&self.secret) (and the similar call later) with proper error handling: call HmacSha256::new_from_slice(&self.secret) and match on the Result, mapping the Err case into a suitable error (e.g., an AuthError or boxed error) and returning it from the enclosing function (or using ? to propagate) instead of unwrapping; remove the allow attribute; ensure the enclosing functions (where HmacSha256::new_from_slice and the other unwrap occur) return Result so you can propagate the failure with context rather than panicking.src/main.rs (1)
129-174: ⚠️ Potential issue | 🟠 Major

Hoist console policy validation above `--check-config`.

This block still runs after the early return at Lines 98-126, so `--check-config` can report success without ever validating `--console-auth-secret` or the console CORS/auth combination.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.rs` around lines 129 - 174, The console auth/CORS validation (creating console_auth and console_cors via config.console_auth_required, config.console_auth_secret, moon::admin::auth::AuthPolicy::enabled and moon::admin::cors::CorsPolicy::new) is executed after the early --check-config return so checks can be skipped; move/hoist this entire #[cfg(feature = "console")] block so it runs before the --check-config early-return (the code that handles --check-config around the earlier lines), ensuring --console-auth-secret and console CORS/auth combos are validated during config-only checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@console/src/components/browser/editors/HashEditor.tsx`:
- Around line 106-121: The clickable <span> used to trigger edit (where
setFields(...) sets editing: true) and the icon-only buttons (Trash2 and other
icon buttons around handleDelete/Loader2) are not keyboard- or
screen-reader-accessible; replace the span with a focusable element (preferably
a <button> or an element with role="button" and tabindex="0") and ensure it
handles onKeyDown (Enter/Space) to call the same setFields edit toggle, and add
clear aria-label attributes to the icon-only buttons (e.g., the button that
calls handleDelete and any other icon buttons) so screen readers announce their
purpose; also ensure visible focus styles are preserved when making these
elements focusable and update any tests/attributes that expect the previous
span.
- Around line 63-68: handleAdd currently always appends a new row after
execCommand("HSET", ...), causing duplicates when the field already exists;
change the setFields update so it upserts: after successful execCommand("HSET",
[keyName, newField, newValue]) use setFields(prev => { if an entry with
entry.field === newField exists, return prev with that entry replaced (update
value and editValue and editing=false), otherwise return [...prev, newEntry] });
keep clearing setNewField("") as before.
- Around line 27-31: HashEditor reuses local state when props change (useState
initializer parseHashArray(value) only runs once), causing stale fields; add a
useEffect in the HashEditor component that watches keyName and value and resets
local state by calling setFields(parseHashArray(value)) and clearing
newField/newValue (setNewField(""), setNewValue("")) and optionally
setSaving(null) so the editor resyncs whenever a different key is selected;
reference the HashEditor function and the state setters setFields, setNewField,
setNewValue, and setSaving to locate where to add the effect.
In `@src/admin/console_gateway.rs`:
- Around line 179-186: The current frame_to_json converts Frame::PreSerialized
to a placeholder {"raw": true}, dropping the payload; instead, extract the
PreSerialized payload bytes and serialize them like other binary/string frames
(attempt UTF-8 decode and otherwise base64), returning the actual value (or
object with {"raw": <string_or_base64>} if you need to preserve metadata).
Update frame_to_json's match arm for Frame::PreSerialized to read the contained
bytes and apply the same UTF-8 / base64 logic used for Frame::BulkString or
Frame::SimpleString, and make the same change where PreSerialized is handled
elsewhere in the codebase so hot-path replies (e.g., GET) render real values
rather than a placeholder.
- Around line 48-85: is_keyless_command currently treats many commands (e.g.,
DBSIZE, KEYS, SCAN, MEMORY, OBJECT, MGET, EVAL, etc.) as shard-0/keyless which
causes partial or incorrect shard-local execution in ShardMessage::Execute;
update is_keyless_command to only return true for truly global commands with no
key semantics (e.g., PING, INFO, COMMAND, CLUSTER, SAVE/BGSAVE) and remove
commands that are key- or subcommand-dependent, and/or implement
subcommand-aware routing: in the console gateway routing path call a new helper
that parses the command and its first subcommand/argument (use the same parsing
logic used by shard/spsc_handler.rs) to either (a) fan out the command to all
shards for global ops (DBSIZE, FLUSHALL), (b) route by the actual key argument
for keyed ops (MGET, MEMORY USAGE, OBJECT ENCODING, EVAL with KEYS), or (c) pin
to shard 0 only for truly console-only commands; reference is_keyless_command
and the ShardMessage::Execute path when making these changes.
In `@src/admin/scan_fanout.rs`:
- Around line 123-124: The code currently uses keys_frame matched as
Frame::Array(ks) and builds ks: Vec<Bytes> via ks.iter().filter_map(|k|
as_bytes(k).cloned()).collect(), which silently drops non-string entries; change
this to explicitly validate each element (using as_bytes on every k) and return
an error on the first malformed entry instead of skipping it. Locate the
Frame::Array arm that constructs ks and replace the filter_map/collect with an
explicit iteration or try-collect that maps each k -> as_bytes(k).cloned() and
propagates a malformed-reply error if as_bytes returns None, so malformed SCAN
elements fail like the other branches.
In `@src/admin/static_files.rs`:
- Around line 49-55: The current asset-detection uses path.contains('.') which
treats any dot anywhere as an asset; change it to inspect only the final path
segment: compute the last segment from the path (e.g. let last_segment =
path.rsplit('/').next().unwrap_or(path)) and then set has_extension based on
last_segment.contains('.') (still account for trailing '/' as before), replacing
the existing has_extension logic in the code around the variable `path` in
static_files.rs so dotted route segments don't incorrectly 404.
In `@src/command/vector_search/mod.rs`:
- Around line 476-481: The CSV vector parser currently builds parsed via
filter_map(parse().ok()) which silently drops invalid tokens; update the parsing
in the vector parsing code (the logic around `let parsed: Vec<f32> = ...` and
the subsequent `if parsed.len() != dim`) to parse tokens one-by-one: trim each
token, attempt `parse::<f32>()`, immediately return an error on the first parse
failure, push valid values into a SmallVec (or Vec::with_capacity(dim) if not
hot), and stop early with an error if the token count exceeds `dim`; ensure the
function that uses `parsed` still checks exact length == dim after successful
incremental parsing.
---
Duplicate comments:
In `@src/admin/auth.rs`:
- Around line 66-67: Replace the panic-prone unwraps and the
#[allow(clippy::unwrap_used)] around HmacSha256::new_from_slice(&self.secret)
(and the similar call later) with proper error handling: call
HmacSha256::new_from_slice(&self.secret) and match on the Result, mapping the
Err case into a suitable error (e.g., an AuthError or boxed error) and returning
it from the enclosing function (or using ? to propagate) instead of unwrapping;
remove the allow attribute; ensure the enclosing functions (where
HmacSha256::new_from_slice and the other unwrap occur) return Result so you can
propagate the failure with context rather than panicking.
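The pattern being asked for can be sketched std-only — `new_mac`, `AuthError`, and `sign` are illustrative stand-ins, not the actual auth.rs API; the real code would keep `HmacSha256::new_from_slice` and map its `Err` into a domain error instead of unwrapping:

```rust
#[derive(Debug)]
pub struct AuthError(pub String);

// Stand-in for HmacSha256::new_from_slice: fails on an invalid (empty) key
// instead of panicking, mirroring the crate's Result-returning constructor.
fn new_mac(secret: &[u8]) -> Result<Vec<u8>, &'static str> {
    if secret.is_empty() {
        Err("invalid key length")
    } else {
        Ok(secret.to_vec())
    }
}

pub fn sign(secret: &[u8], msg: &[u8]) -> Result<Vec<u8>, AuthError> {
    // Propagate constructor failure with context via `?` rather than unwrap().
    let mut mac = new_mac(secret)
        .map_err(|e| AuthError(format!("HMAC init failed: {e}")))?;
    mac.extend_from_slice(msg); // placeholder for mac.update(msg) + finalize()
    Ok(mac)
}
```

With the enclosing functions returning `Result`, the `#[allow(clippy::unwrap_used)]` attribute can simply be deleted.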
In `@src/main.rs`:
- Around line 129-174: The console auth/CORS validation (creating console_auth
and console_cors via config.console_auth_required, config.console_auth_secret,
moon::admin::auth::AuthPolicy::enabled and moon::admin::cors::CorsPolicy::new)
is executed after the early --check-config return so checks can be skipped;
move/hoist this entire #[cfg(feature = "console")] block so it runs before the
--check-config early-return (the code that handles --check-config around the
earlier lines), ensuring --console-auth-secret and console CORS/auth combos are
validated during config-only checks.
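A minimal std-only sketch of the intended ordering — the function and parameter names are illustrative, not Moon's actual main.rs:

```rust
/// Sketch: console auth validation hoisted above the --check-config early
/// return, so config-only runs still catch bad console flag combinations.
fn validate_and_maybe_run(
    check_config_only: bool,
    console_auth_required: bool,
    console_auth_secret: Option<&str>,
) -> Result<(), String> {
    // Hoisted console validation: runs unconditionally, before any early return.
    if console_auth_required && console_auth_secret.map_or(true, str::is_empty) {
        return Err("console auth enabled but --console-auth-secret is missing".into());
    }

    if check_config_only {
        // --check-config path: the validation above has already executed.
        return Ok(());
    }

    // ... start the server here ...
    Ok(())
}
```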
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 22e00cc7-06cb-4df4-9eda-dec11d86abd8
📒 Files selected for processing (22)
- console/src/components/browser/editors/HashEditor.tsx
- console/src/components/browser/editors/ListEditor.tsx
- console/src/components/dashboard/HitRatioCard.tsx
- console/src/components/vector/UmapProgress.tsx
- console/src/lib/api.ts
- console/src/lib/monarch-cypher.ts
- console/tests/e2e/benchmarks.spec.ts
- console/tests/e2e/dashboard.spec.ts
- console/tests/e2e/integration.spec.ts
- console/tests/e2e/memory.spec.ts
- console/tests/e2e/vectors.spec.ts
- scripts/gen-verification.sh
- scripts/test-integration.sh
- src/admin/auth.rs
- src/admin/console_gateway.rs
- src/admin/scan_fanout.rs
- src/admin/sse_stream.rs
- src/admin/static_files.rs
- src/command/server_admin.rs
- src/command/vector_search/mod.rs
- src/main.rs
- src/vector/hnsw/search.rs
✅ Files skipped from review due to trivial changes (4)
- console/tests/e2e/memory.spec.ts
- console/tests/e2e/dashboard.spec.ts
- console/tests/e2e/vectors.spec.ts
- scripts/test-integration.sh
🚧 Files skipped from review as they are similar to previous changes (8)
- console/src/components/vector/UmapProgress.tsx
- console/tests/e2e/benchmarks.spec.ts
- console/src/components/browser/editors/ListEditor.tsx
- console/src/lib/monarch-cypher.ts
- console/src/components/dashboard/HitRatioCard.tsx
- console/tests/e2e/integration.spec.ts
- scripts/gen-verification.sh
- console/src/lib/api.ts
```tsx
export function HashEditor({ keyName, value }: { keyName: string; value: unknown }) {
  const [fields, setFields] = useState<HashField[]>(() => parseHashArray(value));
  const [newField, setNewField] = useState("");
  const [newValue, setNewValue] = useState("");
  const [saving, setSaving] = useState<string | null>(null);
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify whether HashEditor is keyed/remounted by keyName in parent render paths.
rg -nP --type=tsx '<HashEditor\b' -C3
rg -nP --type=tsx '<HashEditor\b[^>]*\bkey=' -C2
```

Repository: pilotspace/moon
Length of output: 114

🏁 Script executed:

```bash
#!/bin/bash
# Find where HashEditor is imported and used
rg 'HashEditor' --type ts -A 2 -B 2
```

Repository: pilotspace/moon
Length of output: 1746

🏁 Script executed:

```bash
#!/bin/bash
# List the structure to understand the codebase
find console/src -name '*.tsx' -o -name '*.ts' | head -20
```

Repository: pilotspace/moon
Length of output: 1007

🏁 Script executed:

```bash
#!/bin/bash
# Search for HashEditor usage more broadly
rg 'HashEditor' -A 3 -B 3
```

Repository: pilotspace/moon
Length of output: 2351

🏁 Script executed:

```bash
#!/bin/bash
# Read ValuePanel to understand the component structure and key handling
cat -n console/src/components/browser/ValuePanel.tsx | head -100
```

Repository: pilotspace/moon
Length of output: 3333

🏁 Script executed:

```bash
#!/bin/bash
# Check if ValuePanel or its parents use key props for HashEditor or manage remounting
rg 'keyName|key=' console/src/components/browser/ValuePanel.tsx -B 3 -A 1
```

Repository: pilotspace/moon
Length of output: 1444

🏁 Script executed:

```bash
#!/bin/bash
# Read HashEditor to see handleAdd and handleSave logic
cat -n console/src/components/browser/editors/HashEditor.tsx | sed -n '30,80p'
```

Repository: pilotspace/moon
Length of output: 2423
Add useEffect to resync local state when keyName or value props change.
When switching between different hash keys in ValuePanel, HashEditor is reused without remounting. The useState initializer on line 28 only runs once, leaving stale hash fields displayed. This causes confusing UI where users see and edit fields from a previously viewed key.
Proposed fix
```diff
-import { useState, useCallback } from "react";
+import { useState, useCallback, useEffect } from "react";

 export function HashEditor({ keyName, value }: { keyName: string; value: unknown }) {
   const [fields, setFields] = useState<HashField[]>(() => parseHashArray(value));
   const [newField, setNewField] = useState("");
   const [newValue, setNewValue] = useState("");
   const [saving, setSaving] = useState<string | null>(null);
+
+  useEffect(() => {
+    setFields(parseHashArray(value));
+    setNewField("");
+    setNewValue("");
+    setSaving(null);
+  }, [keyName, value]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@console/src/components/browser/editors/HashEditor.tsx` around lines 27 - 31,
HashEditor reuses local state when props change (useState initializer
parseHashArray(value) only runs once), causing stale fields; add a useEffect in
the HashEditor component that watches keyName and value and resets local state
by calling setFields(parseHashArray(value)) and clearing newField/newValue
(setNewField(""), setNewValue("")) and optionally setSaving(null) so the editor
resyncs whenever a different key is selected; reference the HashEditor function
and the state setters setFields, setNewField, setNewValue, and setSaving to
locate where to add the effect.
```tsx
const handleAdd = async () => {
  if (!newField.trim()) return;
  try {
    await execCommand("HSET", [keyName, newField, newValue]);
    setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
    setNewField("");
```
handleAdd should upsert existing fields, not always append.
Line 67 always appends a new row after HSET. For an existing hash field, backend value is replaced, but UI becomes duplicated and inconsistent.
Proposed fix
```diff
 const handleAdd = async () => {
-  if (!newField.trim()) return;
+  const fieldName = newField.trim();
+  if (!fieldName) return;
   try {
-    await execCommand("HSET", [keyName, newField, newValue]);
-    setFields((prev) => [...prev, { field: newField, value: newValue, editing: false, editValue: newValue }]);
+    await execCommand("HSET", [keyName, fieldName, newValue]);
+    setFields((prev) => {
+      const idx = prev.findIndex((f) => f.field === fieldName);
+      if (idx >= 0) {
+        return prev.map((f) =>
+          f.field === fieldName
+            ? { ...f, value: newValue, editValue: newValue, editing: false }
+            : f
+        );
+      }
+      return [...prev, { field: fieldName, value: newValue, editing: false, editValue: newValue }];
+    });
     setNewField("");
     setNewValue("");
   } catch (err) {
-    toastError(`HSET ${keyName} ${newField} failed: ${err instanceof Error ? err.message : String(err)}`);
+    toastError(`HSET ${keyName} ${fieldName} failed: ${err instanceof Error ? err.message : String(err)}`);
   }
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@console/src/components/browser/editors/HashEditor.tsx` around lines 63 - 68,
handleAdd currently always appends a new row after execCommand("HSET", ...),
causing duplicates when the field already exists; change the setFields update so
it upserts: after successful execCommand("HSET", [keyName, newField, newValue])
use setFields(prev => { if an entry with entry.field === newField exists, return
prev with that entry replaced (update value and editValue and editing=false),
otherwise return [...prev, newEntry] }); keep clearing setNewField("") as
before.
```tsx
<span
  className="text-xs font-mono text-zinc-400 truncate cursor-pointer hover:text-zinc-100"
  onClick={() =>
    setFields((prev) =>
      prev.map((x) => (x.field === f.field ? { ...x, editing: true } : x))
    )
  }
>
  {f.value}
</span>
)}
<div className="flex items-center gap-1 justify-end">
  {saving === f.field && <Loader2 className="h-3 w-3 animate-spin text-muted-foreground" />}
  <button onClick={() => handleDelete(f.field)} className="text-muted-foreground/50 hover:text-destructive">
    <Trash2 className="h-3 w-3" />
  </button>
```
Editing and icon actions are not fully accessible.
Line 106 uses a clickable <span> (not keyboard-focusable), and Lines 119/142 define icon-only buttons without accessible names. Keyboard and screen-reader users can miss core actions.
Proposed fix
```diff
-            <span
-              className="text-xs font-mono text-zinc-400 truncate cursor-pointer hover:text-zinc-100"
+            <button
+              type="button"
+              aria-label={`Edit value for field ${f.field}`}
+              className="text-left text-xs font-mono text-zinc-400 truncate hover:text-zinc-100"
               onClick={() =>
                 setFields((prev) =>
                   prev.map((x) => (x.field === f.field ? { ...x, editing: true } : x))
                 )
               }
             >
               {f.value}
-            </span>
+            </button>
@@
-            <button onClick={() => handleDelete(f.field)} className="text-muted-foreground/50 hover:text-destructive">
+            <button
+              type="button"
+              aria-label={`Delete field ${f.field}`}
+              onClick={() => handleDelete(f.field)}
+              className="text-muted-foreground/50 hover:text-destructive"
+            >
               <Trash2 className="h-3 w-3" />
             </button>
@@
-          <button
+          <button
+            type="button"
+            aria-label="Add hash field"
             onClick={handleAdd}
             className="flex items-center justify-center gap-1 text-xs text-primary hover:text-primary/80"
           >
```

Also applies to: 142-147
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@console/src/components/browser/editors/HashEditor.tsx` around lines 106 -
121, The clickable <span> used to trigger edit (where setFields(...) sets
editing: true) and the icon-only buttons (Trash2 and other icon buttons around
handleDelete/Loader2) are not keyboard- or screen-reader-accessible; replace the
span with a focusable element (preferably a <button> or an element with
role="button" and tabindex="0") and ensure it handles onKeyDown (Enter/Space) to
call the same setFields edit toggle, and add clear aria-label attributes to the
icon-only buttons (e.g., the button that calls handleDelete and any other icon
buttons) so screen readers announce their purpose; also ensure visible focus
styles are preserved when making these elements focusable and update any
tests/attributes that expect the previous span.
```rust
/// Commands that do not operate on a specific key and should route to shard 0.
fn is_keyless_command(cmd: &str) -> bool {
    matches!(
        cmd,
        "PING"
            | "INFO"
            | "DBSIZE"
            | "COMMAND"
            | "CONFIG"
            | "CLIENT"
            | "FLUSHALL"
            | "FLUSHDB"
            | "SLOWLOG"
            | "SELECT"
            | "SUBSCRIBE"
            | "UNSUBSCRIBE"
            | "PSUBSCRIBE"
            | "PUNSUBSCRIBE"
            | "MONITOR"
            | "DEBUG"
            | "CLUSTER"
            | "MULTI"
            | "EXEC"
            | "DISCARD"
            | "WAIT"
            | "SAVE"
            | "BGSAVE"
            | "BGREWRITEAOF"
            | "LASTSAVE"
            | "TIME"
            | "MEMORY"
            | "LATENCY"
            | "RANDOMKEY"
            | "SCAN"
            | "KEYS"
            | "OBJECT"
    )
}
```
The command-routing heuristic will misroute real console commands.
This path either pins the command to shard 0 or hashes args[0]. src/shard/spsc_handler.rs:196-312 executes ShardMessage::Execute against one shard-local DB, so DBSIZE/KEYS/SCAN become partial shard-0 reads, while MEMORY USAGE key, OBJECT ENCODING key, MGET, EVAL, etc. can hit the wrong shard entirely. Please restrict this gateway to commands with known key specs, or add proper fanout/subcommand-aware routing.
Also applies to: 121-125
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/admin/console_gateway.rs` around lines 48 - 85, is_keyless_command
currently treats many commands (e.g., DBSIZE, KEYS, SCAN, MEMORY, OBJECT, MGET,
EVAL, etc.) as shard-0/keyless which causes partial or incorrect shard-local
execution in ShardMessage::Execute; update is_keyless_command to only return
true for truly global commands with no key semantics (e.g., PING, INFO, COMMAND,
CLUSTER, SAVE/BGSAVE) and remove commands that are key- or subcommand-dependent,
and/or implement subcommand-aware routing: in the console gateway routing path
call a new helper that parses the command and its first subcommand/argument (use
the same parsing logic used by shard/spsc_handler.rs) to either (a) fan out the
command to all shards for global ops (DBSIZE, FLUSHALL), (b) route by the actual
key argument for keyed ops (MGET, MEMORY USAGE, OBJECT ENCODING, EVAL with
KEYS), or (c) pin to shard 0 only for truly console-only commands; reference
is_keyless_command and the ShardMessage::Execute path when making these changes.
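One way to structure the suggested subcommand-aware routing, as a std-only sketch — the `Route` enum and `classify` helper are hypothetical names, not the gateway's actual API, and the command table below is illustrative rather than exhaustive:

```rust
/// Hypothetical routing decision for a console command (illustrative names).
#[derive(Debug, PartialEq)]
enum Route {
    Shard0,     // truly global / console-only: any shard answers correctly
    FanOut,     // must aggregate across all shards (DBSIZE, SCAN, FLUSHALL)
    Key(usize), // hash the argument at this index to pick a shard
}

/// Classify a command and its arguments instead of a flat keyless-set check.
fn classify(cmd: &str, args: &[&str]) -> Route {
    match cmd {
        // No key semantics at all: shard 0 is as good as any.
        "PING" | "INFO" | "COMMAND" | "CLUSTER" | "TIME" | "SAVE" | "BGSAVE" => Route::Shard0,
        // Keyspace-wide operations must touch every shard.
        "DBSIZE" | "FLUSHALL" | "FLUSHDB" | "SCAN" | "KEYS" | "RANDOMKEY" => Route::FanOut,
        // Subcommand-dependent: MEMORY USAGE <key> carries its key at args[1].
        "MEMORY" if args.first().is_some_and(|s| s.eq_ignore_ascii_case("USAGE")) => Route::Key(1),
        "MEMORY" => Route::Shard0, // MEMORY STATS / DOCTOR are global
        "OBJECT" => Route::Key(1), // OBJECT ENCODING|FREQ|IDLETIME <key>
        // Multi-key reads span shards; fan out (or split per shard).
        "MGET" => Route::FanOut,
        // Default keyed command: first argument is the key.
        _ => Route::Key(0),
    }
}
```

The console gateway's `ShardMessage::Execute` path would then dispatch on the returned `Route` rather than on `is_keyless_command` alone.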
```rust
/// Convert a RESP `Frame` into a `serde_json::Value`.
///
/// Handles all Frame variants recursively:
/// - Strings attempt UTF-8, fall back to base64
/// - Errors produce `{"error": "..."}`
/// - Maps produce JSON objects (keys stringified)
/// - PreSerialized produces `{"raw": true}` (should not appear in console)
pub fn frame_to_json(frame: &Frame) -> serde_json::Value {
```
PreSerialized replies lose the actual command result.
src/protocol/frame.rs:152-155 notes that Frame::PreSerialized is used for hot-path replies such as GET. Converting it to {"raw": true} drops the payload, so common console queries will render a placeholder instead of the real value.
Also applies to: 245-245
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/admin/console_gateway.rs` around lines 179 - 186, The current
frame_to_json converts Frame::PreSerialized to a placeholder {"raw": true},
dropping the payload; instead, extract the PreSerialized payload bytes and
serialize them like other binary/string frames (attempt UTF-8 decode and
otherwise base64), returning the actual value (or object with {"raw":
<string_or_base64>} if you need to preserve metadata). Update frame_to_json's
match arm for Frame::PreSerialized to read the contained bytes and apply the
same UTF-8 / base64 logic used for Frame::BulkString or Frame::SimpleString, and
make the same change where PreSerialized is handled elsewhere in the codebase so
hot-path replies (e.g., GET) render real values rather than a placeholder.
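The decode fallback can be sketched std-only — `preserialized_to_json_string` is a hypothetical helper, and hex stands in for the base64 fallback here; the real fix would reuse the same base64 path frame_to_json's `BulkString` arm already uses and wrap the result in a `serde_json::Value::String`:

```rust
/// Sketch: decode PreSerialized payload bytes like any other string frame.
/// Attempt UTF-8 first; on failure, fall back to an escaped byte form
/// (hex here as a std-only stand-in for the existing base64 helper).
fn preserialized_to_json_string(payload: &[u8]) -> String {
    match std::str::from_utf8(payload) {
        Ok(s) => s.to_string(),
        Err(_) => payload.iter().map(|b| format!("\\x{b:02x}")).collect(),
    }
}
```

Note that a PreSerialized payload may still carry RESP framing bytes, so the real implementation should strip or parse that framing before applying the string conversion.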
```rust
let ks: Vec<Bytes> = match keys_frame {
    Frame::Array(ks) => ks.iter().filter_map(|k| as_bytes(k).cloned()).collect(),
```
Don't silently drop malformed key entries.
filter_map(...).cloned() turns a bad SCAN element into missing data while still advancing the composite cursor. If any member of the keys array is not a string frame, this should fail like the other malformed-reply branches.
Suggested fix
```diff
-        Frame::Array(ks) => ks.iter().filter_map(|k| as_bytes(k).cloned()).collect(),
+        Frame::Array(ks) => ks
+            .iter()
+            .map(|k| {
+                as_bytes(k)
+                    .cloned()
+                    .ok_or_else(|| {
+                        "malformed SCAN reply: key element is not a string".to_string()
+                    })
+            })
+            .collect::<Result<Vec<_>, _>>()?,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/admin/scan_fanout.rs` around lines 123 - 124, The code currently uses
keys_frame matched as Frame::Array(ks) and builds ks: Vec<Bytes> via
ks.iter().filter_map(|k| as_bytes(k).cloned()).collect(), which silently drops
non-string entries; change this to explicitly validate each element (using
as_bytes on every k) and return an error on the first malformed entry instead of
skipping it. Locate the Frame::Array arm that constructs ks and replace the
filter_map/collect with an explicit iteration or try-collect that maps each k ->
as_bytes(k).cloned() and propagates a malformed-reply error if as_bytes returns
None, so malformed SCAN elements fail like the other branches.
```rust
let has_extension = path.contains('.') && !path.ends_with('/');
if has_extension {
    return Response::builder()
        .status(StatusCode::NOT_FOUND)
        .header("content-type", "text/plain")
        .body(Full::new(Bytes::from_static(b"Not Found")))
        .unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));
```
Use the last path segment for asset detection.
This heuristic treats any . anywhere in the URL as an asset miss, so route URLs with dotted params/segments will 404 instead of falling back to index.html. Restrict the check to the final path segment.
Suggested fix
```diff
-    let has_extension = path.contains('.') && !path.ends_with('/');
+    let has_extension = path
+        .rsplit('/')
+        .next()
+        .is_some_and(|segment| segment.contains('.'));
     if has_extension {
         return Response::builder()
             .status(StatusCode::NOT_FOUND)
             .header("content-type", "text/plain")
             .body(Full::new(Bytes::from_static(b"Not Found")))
             .unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));
     }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
let has_extension = path
    .rsplit('/')
    .next()
    .is_some_and(|segment| segment.contains('.'));
if has_extension {
    return Response::builder()
        .status(StatusCode::NOT_FOUND)
        .header("content-type", "text/plain")
        .body(Full::new(Bytes::from_static(b"Not Found")))
        .unwrap_or_else(|_| Response::new(Full::new(Bytes::from_static(b"Not Found"))));
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/admin/static_files.rs` around lines 49 - 55, The current asset-detection
uses path.contains('.') which treats any dot anywhere as an asset; change it to
inspect only the final path segment: compute the last segment from the path
(e.g. let last_segment = path.rsplit('/').next().unwrap_or(path)) and then set
has_extension based on last_segment.contains('.') (still account for trailing
'/' as before), replacing the existing has_extension logic in the code around
the variable `path` in static_files.rs so dotted route segments don't
incorrectly 404.
```rust
let parsed: Vec<f32> = text
    .split(',')
    .filter(|s| !s.is_empty())
    .filter_map(|s| s.trim().parse::<f32>().ok())
    .collect();
if parsed.len() != dim {
```
Reject malformed CSV query vectors; current parser silently drops invalid tokens.
filter_map(...parse().ok()) discards bad elements, so malformed input can still pass the dimension check (e.g., dim=3, "1,2,bad,3" becomes [1,2,3]). Parse incrementally and fail on the first invalid token; also stop early when element count exceeds dim.
Proposed fix
```diff
-    let parsed: Vec<f32> = text
-        .split(',')
-        .filter(|s| !s.is_empty())
-        .filter_map(|s| s.trim().parse::<f32>().ok())
-        .collect();
-    if parsed.len() != dim {
-        return Frame::Error(Bytes::from_static(
-            b"ERR query vector dimension mismatch",
-        ));
-    }
-    parsed
+    let mut parsed: SmallVec<[f32; 64]> = SmallVec::with_capacity(dim.min(64));
+    for token in text.split(',').map(str::trim).filter(|s| !s.is_empty()) {
+        let value = match token.parse::<f32>() {
+            Ok(v) => v,
+            Err(_) => {
+                return Frame::Error(Bytes::from_static(
+                    b"ERR query vector dimension mismatch",
+                ))
+            }
+        };
+        parsed.push(value);
+        if parsed.len() > dim {
+            return Frame::Error(Bytes::from_static(
+                b"ERR query vector dimension mismatch",
+            ));
+        }
+    }
+    if parsed.len() != dim {
+        return Frame::Error(Bytes::from_static(
+            b"ERR query vector dimension mismatch",
+        ));
+    }
+    parsed.into_vec()
```

As per coding guidelines (src/{command,protocol}/**/*.rs): "Use SmallVec and itoa instead of allocating Vec and format!() on hot paths. Vec::with_capacity() is acceptable for result building at the end of a command path."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| let parsed: Vec<f32> = text | |
| .split(',') | |
| .filter(|s| !s.is_empty()) | |
| .filter_map(|s| s.trim().parse::<f32>().ok()) | |
| .collect(); | |
| if parsed.len() != dim { | |
| let mut parsed: SmallVec<[f32; 64]> = SmallVec::with_capacity(dim.min(64)); | |
| for token in text.split(',').map(str::trim).filter(|s| !s.is_empty()) { | |
| let value = match token.parse::<f32>() { | |
| Ok(v) => v, | |
| Err(_) => { | |
| return Frame::Error(Bytes::from_static( | |
| b"ERR query vector dimension mismatch", | |
| )) | |
| } | |
| }; | |
| parsed.push(value); | |
| if parsed.len() > dim { | |
| return Frame::Error(Bytes::from_static( | |
| b"ERR query vector dimension mismatch", | |
| )); | |
| } | |
| } | |
| if parsed.len() != dim { | |
| return Frame::Error(Bytes::from_static( | |
| b"ERR query vector dimension mismatch", | |
| )); | |
| } | |
| parsed.into_vec() |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/command/vector_search/mod.rs` around lines 476 - 481, The CSV vector
parser currently builds parsed via filter_map(parse().ok()) which silently drops
invalid tokens; update the parsing in the vector parsing code (the logic around
`let parsed: Vec<f32> = ...` and the subsequent `if parsed.len() != dim`) to
parse tokens one-by-one: trim each token, attempt `parse::<f32>()`, immediately
return an error on the first parse failure, push valid values into a SmallVec
(or Vec::with_capacity(dim) if not hot), and stop early with an error if the
token count exceeds `dim`; ensure the function that uses `parsed` still checks
exact length == dim after successful incremental parsing.
489846a to 3d6b3f1 — Compare
HTTP/WebSocket gateway with REST API, WS-to-RESP3 bridge, SSE metrics streaming. React 19 console (7 views: Dashboard, Browser, Console, Vector Explorer, Graph Explorer, Memory, Help) served at /ui/ via rust-embed. 50.9 KB gzipped initial bundle.

- Real-time Dashboard: 7 widgets driven by SSE at 1 Hz
- KV Data Browser: namespace tree, virtual-scrolled keys, type editors
- Query Console: Monaco + RESP/Cypher syntax, 233-cmd autocomplete
- Vector 3D Explorer: UMAP projection, HNSW overlay, KNN search
- Graph 3D Explorer: force-directed layout, Cypher integration
- Memory: treemap, slowlog, command stats
- Built-in Help guide with seed examples

Server additions:
- FLUSHALL/FLUSHDB/DBSIZE/DEBUG/MEMORY USAGE commands
- Multi-shard SCAN fan-out with composite cursor
- Admin-port hardening: Bearer auth, CORS allowlist, rate limiting
- FT.SEARCH CSV float fallback for JSON bridge compatibility

CI/review fixes:
- CHANGELOG.md v0.1.5 entry
- Dockerfile RUST_VERSION 1.88→1.94
- console-integration.yml rustup fix
- release.yml cargo-cyclonedx flag fix
- Playwright E2E hardened for headless CI
- CodeRabbit/Qodo review findings addressed (fetchSlowlog, SSE CORS, auth token parsing, scan_fanout error, static_files SPA fallback, HNSW trace layer-0 cap, HashEditor/ListEditor error handling)

56 Vitest tests + 11 Playwright specs pass. Zero clippy warnings.
3d6b3f1 to 077667d — Compare
Summary
- Console served at `/ui/` from the Moon binary with zero additional deployment. `/gsd:audit-milestone` → passed.
- Feature-gated behind `--features console`: `/api/v1/*` (REST command dispatch via SPSC — no TCP self-connect), `/ws` (RESP3-over-JSON bridge), `/events` (SSE metrics at 1 Hz), `/ui/*` (rust-embed static serving), `/api/v1/memory/treemap` (server-side namespace aggregation), `/api/v1/hnsw/trace`.
- Fixed during integration: `execCommand` result unwrapping (type badges showed `[object Object]`), WebSocket protocol mismatch (Console hung on Cmd+Enter), server dropping `id` on error responses, multi-line paste → `ERR syntax error` (now Cmd+Enter runs current line, Cmd+Shift+Enter runs batch).

Test plan
- `cargo check` (default features) — clean
- `cargo check --features console` — clean on Linux (OrbStack `moon-dev`) and macOS
- `cargo test --features console` — 45 unit tests pass (26 gateway + 19 Phase 136 endpoints)
- `cargo clippy -- -D warnings` — zero warnings on default, console, and tokio+jemalloc feature sets
- `cd console && npx tsc --noEmit` — clean
- `cd console && pnpm test` — 56 Vitest tests pass
- `cd console && pnpm build` — produces `dist/` embedded by rust-embed
- Live run (`--shards 1 --admin-port 9100`): `server_stats` events at 1 Hz, `redis-benchmark -n 100000`, `EXPIRE k 3600` → UI shows `59m 43s` (ticking), `/api/v1/memory/treemap`

Known limitations (documented, not introduced by this PR)
- `FLUSHALL`/`DBSIZE`/`FLUSHDB`/`DEBUG`/`MEMORY USAGE` return `ERR unknown command` — a pre-existing Moon dispatch-table gap. Memory treemap shows `0 B` for byte sizes as a result. Deferred to a `v0.1.6 core-commands-gap-closure` milestone.
- Use `--shards 1` for a unified keyspace in the UI, or use hash tags to co-locate keys.
- Playwright `baseURL` path-resolution quirk (`page.goto("/vectors")` → `/vectors` instead of `/ui/vectors`). Ad-hoc tests with absolute URLs pass; trivial one-line fix deferred.

Planning artifacts
- `.planning/ROADMAP.md` — 9 phases with goals, requirements, and success criteria
- `.planning/REQUIREMENTS.md` — 63 requirements across 9 categories (GW/DB/KV/QC/VE/GE/MO/TEST/INT)
- `.planning/v0.1.5-MILESTONE-AUDIT.md` — final audit with `status: passed`
- `.planning/phases/{128..136}-*/` — per-phase CONTEXT / PLAN / SUMMARY / VERIFICATION / VALIDATION
- `.planning/research/{STACK,FEATURES,ARCHITECTURE,PITFALLS,SUMMARY}.md` — domain research from `/gsd:new-milestone`

Summary by CodeRabbit
Release Notes
New Features
- Interactive console at `/ui/` featuring a real-time dashboard, key-value browser with filtering and TTL management, multi-language query editor with history, vector/graph explorers with visualization, and memory analysis tools

Bug Fixes
Documentation
Tests