Conversation
@codex review
Benchmark Regression Report (threshold: 10.00%)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: bea5e33556
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```zig
const count = chunk_size + extra;
const chunk = entries.items[offset .. offset + count];
offset += count;
threads[i] = try std.Thread.spawn(.{}, initialScanWorker, .{ worker, root, chunk });
```
Join already-started scan threads when spawn fails
If std.Thread.spawn fails for a later worker (for example, due to process thread limits), this try returns immediately, but any previously started workers keep running while the deferred cleanup tears down their shared worker state. Those active threads still write into results.items, so freeing the worker arenas first creates a use-after-free race and can crash initial indexing under resource pressure.
```zig
const workers = try allocator.alloc(WorkerParsedResults, n_workers);
defer {
    for (workers) |*worker| worker.deinit(allocator);
```
Deinitialize only worker slots that were initialized
The deferred cleanup iterates every element in workers, but initialization happens inside the spawn loop. If the loop exits early on an error, trailing slots remain undefined and worker.deinit will run ArenaAllocator.deinit on garbage state. This can trigger invalid frees or memory corruption on the same spawn-failure path.
```zig
{
    var arena = std.heap.ArenaAllocator.init(allocator);
    defer arena.deinit();
    const parsed = try parseInitialScanEntry(root, entry, arena.allocator());
```
Keep single-worker scan resilient to per-file read errors
In single-worker mode, try parseInitialScanEntry aborts the whole initial scan on a single file-level I/O failure (e.g., a file removed after collectInitialScanEntries but before read). Previously, per-file indexing failures were ignored and scanning continued, so this is a behavior regression that can leave indexing incomplete when CODEDB_SCAN_WORKERS=1 is used.
Split initial scan into worker-local read/parse and deterministic main-thread merge. Add parity coverage for 1-worker vs 4-worker initial scan and keep warm search behavior unchanged. Refs #218
Force-pushed bea5e33 to c534341.
Benchmark Regression Report (threshold: 10.00%)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c534341a7e
```zig
errdefer owned_outline.deinit();
var persistent_outline = try cloneOutline(&owned_outline, self.allocator);
defer owned_outline.deinit();
```
Remove duplicate teardown of owned outline on error
commitParsedFileOwnedOutline registers both errdefer owned_outline.deinit() and defer owned_outline.deinit() for the same object; if any later try fails (for example allocation/indexing failure), both handlers run and deinitialize the same outline twice. That can double-free symbol/import buffers and corrupt allocator state on the error path, which makes failure handling unsafe instead of recoverable.
```zig
for (entries) |entry| {
    const parsed = parseInitialScanEntry(root, entry, arena_alloc) catch null;
    if (parsed) |file| {
        results.items.append(arena_alloc, file) catch return;
```
Propagate worker append failures instead of dropping chunks
In initialScanWorker, an append allocation failure causes an immediate return, which silently skips all remaining files in that worker’s chunk while the caller still treats the scan as successful. Under memory pressure this produces incomplete indexing without surfacing an error, so users can get partial search/tree results with no indication that files were dropped.
Closes #218
Summary
Verification
Perf
Cold openclaw tree, ReleaseFast, fresh HOME each run:
Warm openclaw MCP search regression check: