feat(core): implement download command handlers (task 11)#11
Conversation
Add 9 CQRS command handlers as methods on CommandBus, each in its own file under application/commands/:
- StartDownload: URL validation, HEAD metadata, entity creation, event-driven queue scheduling
- Pause/Resume: domain state machine transitions with engine control
- Cancel: engine stop, DB cleanup, .vortex-meta removal
- Retry: circuit breaker integration via domain retry() with MaxRetriesExceeded
- PauseAll/ResumeAll: batch operations on active/paused downloads
- SetPriority: priority update (1-10) for queue reordering
- RemoveDownload: full cleanup with optional file deletion

Also includes:
- QueueManager extended to react to DownloadCreated, DownloadResumed, DownloadRetrying events
- Tauri IPC driving adapter with AppState and 9 #[tauri::command] functions
- SetPriorityCommand and RemoveDownloadCommand added to command types
- 30 new tests (220 total), clippy clean
📝 Walkthrough

Adds 9 concrete CommandBus command handlers and 3 QueryBus query handlers, plus a Tauri IPC driving adapter exposing those operations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Frontend
    participant TauriIPC as Tauri IPC Adapter
    participant CommandBus
    participant Repo as Download Repository
    participant Engine as Download Engine
    participant EventBus
    participant FileStorage
    Frontend->>TauriIPC: download_start(url, destination)
    TauriIPC->>CommandBus: start_download(cmd)
    CommandBus->>Engine: HTTP HEAD probe / metadata
    Engine-->>CommandBus: metadata
    CommandBus->>Repo: save(new Download)
    Repo-->>CommandBus: ok
    CommandBus->>EventBus: publish(DownloadCreated)
    EventBus-->>CommandBus: ok
    CommandBus-->>TauriIPC: Result<u64,String>
    TauriIPC-->>Frontend: id
    Frontend->>TauriIPC: download_pause(id)
    TauriIPC->>CommandBus: pause_download(cmd)
    CommandBus->>Repo: find_by_id(id)
    Repo-->>CommandBus: download
    CommandBus->>Engine: pause(id)
    Engine-->>CommandBus: ok
    CommandBus->>Repo: save(paused)
    CommandBus->>EventBus: publish(DownloadPaused)
    CommandBus-->>TauriIPC: Result<(),String>
    Frontend->>TauriIPC: download_remove(id, delete_files)
    TauriIPC->>CommandBus: remove_download(cmd)
    CommandBus->>Repo: find_by_id(id)
    Repo-->>CommandBus: download
    alt active
        CommandBus->>Engine: cancel(id)
        Engine-->>CommandBus: ok
    end
    alt delete_files
        CommandBus->>FileStorage: delete(id.vortex-meta)
        FileStorage-->>CommandBus: ok
    end
    CommandBus->>Repo: delete(id)
    CommandBus->>EventBus: publish(DownloadCancelled)
    CommandBus-->>TauriIPC: Result<(),String>
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Add 3 CQRS query handlers as methods on QueryBus:
- GetDownloadsQuery: filtered/sorted/paginated list via DownloadReadRepository
- GetDownloadDetailQuery: full detail with segments, NotFound error handling
- CountDownloadsByStateQuery: state-grouped counts for UI filter badges

Tauri IPC queries:
- download_list: filter by state/search, sort by field/direction, pagination
- download_detail: single download with segment breakdown
- download_count_by_state: HashMap<state, count> for badges

String parsing in IPC layer for DownloadState, SortField, SortDirection. 8 new tests (228 total), clippy clean.
Greptile Summary

This PR implements 9 CQRS command handlers, extends the QueueManager with event-driven scheduling, and adds a Tauri IPC driving adapter.
Confidence Score: 4/5

Not safe to merge — the missing AppState registration will crash the app on any IPC call, and four additional P1 defects affect data integrity and correctness. Five P1 findings: (1) AppState never managed by Tauri — guaranteed runtime panic; (2) timestamp-based ID causes silent overwrite under concurrent starts; (3) save-before-engine in pause/resume leads to diverged state on engine failure; (4) delete_files only removes metadata, not content; (5) resume_all bypasses max_concurrent. All are present defects on the changed code paths.

Affected files: src-tauri/src/lib.rs (missing .manage), src-tauri/src/application/commands/start_download.rs (ID generation), src-tauri/src/application/commands/pause_download.rs and resume_download.rs (operation ordering), src-tauri/src/application/commands/remove_download.rs (file deletion), src-tauri/src/application/services/queue_manager.rs (resume concurrency cap)
| Filename | Overview |
|---|---|
| src-tauri/src/lib.rs | All nine IPC commands registered in invoke_handler but AppState is never passed to .manage(), causing a runtime panic on any frontend invocation. |
| src-tauri/src/application/commands/start_download.rs | Download handler implemented correctly except for collision-prone millisecond-timestamp DownloadId generation. |
| src-tauri/src/application/commands/pause_download.rs | Saves Paused state to DB before confirming engine.pause() succeeds; a failed engine call leaves DB and engine state diverged. |
| src-tauri/src/application/commands/resume_download.rs | Same save-before-engine ordering issue as pause_download; engine failure leaves DB in Downloading state while engine is still paused. |
| src-tauri/src/application/commands/remove_download.rs | delete_files=true only removes .vortex-meta sidecar via delete_meta; actual downloaded content file is never deleted despite the misleading flag name. |
| src-tauri/src/application/services/queue_manager.rs | Extended with DownloadCreated/DownloadResumed/DownloadRetrying event handling; DownloadResumed increments active_count without checking max_concurrent, enabling resume_all to exceed the concurrency limit. |
| src-tauri/src/application/commands/resume_all.rs | Resumes all paused downloads without checking max_concurrent; combined with the QueueManager's uncapped DownloadResumed handler, this bypasses the concurrency limit. |
| src-tauri/src/adapters/driving/tauri_ipc.rs | Clean IPC adapter with correct command mapping; all commands correctly delegate to CommandBus and convert errors to strings. |
| src-tauri/src/application/commands/pause_all.rs | Batch pause implementation is correct; silently ignores individual pause errors via if let Ok which may mask partial failures. |
| src-tauri/src/application/commands/cancel_download.rs | Cancel handler correctly checks active state before engine.cancel(), cleans up metadata, deletes from repo, and emits DownloadCancelled. |
| src-tauri/src/application/commands/retry_download.rs | Retry handler correctly delegates to domain model's retry() method, handles MaxRetriesExceeded, and persists state. |
| src-tauri/src/application/commands/set_priority.rs | Validates priority range through Priority::new(), persists update. No event emitted for queue reordering notification. |
| src-tauri/src/application/commands/mod.rs | Clean command type definitions; SetPriorityCommand and RemoveDownloadCommand added correctly. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant FE as Frontend
    participant IPC as tauri_ipc
    participant CB as CommandBus
    participant Repo as DownloadRepository
    participant Eng as DownloadEngine
    participant EB as EventBus
    participant QM as QueueManager
    FE->>IPC: download_start(url, dest)
    IPC->>CB: handle_start_download(cmd)
    CB->>Repo: save(download)
    CB->>EB: publish(DownloadCreated)
    EB-->>QM: DownloadCreated - on_slot_freed()
    QM->>Repo: find_by_state(Queued)
    QM->>Eng: start(download)
    QM->>EB: publish(DownloadStarted)
    CB-->>IPC: Ok(id)
    IPC-->>FE: u64 id
    FE->>IPC: download_pause(id)
    IPC->>CB: handle_pause_download(cmd)
    CB->>Repo: save(Paused)
    CB->>Eng: pause(id)
    CB->>EB: publish(DownloadPaused)
    EB-->>QM: DownloadPaused - decrement_and_schedule()
    FE->>IPC: download_resume_all()
    IPC->>CB: handle_resume_all(cmd)
    CB->>Repo: find_by_state(Paused)
    loop each paused download
        CB->>Repo: save(Downloading)
        CB->>Eng: resume(id)
        CB->>EB: publish(DownloadResumed)
        EB-->>QM: DownloadResumed - active_count++ no cap check
    end
```
Reviews (1): Last reviewed commit: "feat(core): implement download command h..."
```rust
pub fn run() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![
            download_start,
            download_pause,
            download_resume,
            download_cancel,
            download_retry,
            download_pause_all,
            download_resume_all,
            download_set_priority,
            download_remove,
        ])
        .run(tauri::generate_context!())
        // Tauri's run() has no meaningful recovery path — panic is intentional here
        .expect("fatal: failed to start Vortex");
```
AppState never registered with Tauri
run() wires the invoke_handler with all nine commands but never calls .manage(AppState { ... }). Every handler parameter is state: State<'_, AppState>, so Tauri will panic at runtime the moment any frontend call arrives — the managed state simply doesn't exist.
```rust
// After .invoke_handler(...)
.manage(AppState {
    command_bus: Arc::new(/* build CommandBus */),
    query_bus: Arc::new(/* build QueryBus */),
})
```

The full wiring (building CommandBus, QueueManager, etc.) needs to happen here — or in a setup closure — before .run() is called.
```rust
let id = DownloadId(
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64,
);
```
Timestamp-based DownloadId collides under concurrent starts
Two StartDownload calls arriving within the same millisecond produce the same DownloadId. The second repo.save() silently overwrites the first record; the first caller receives a valid-looking ID that now points to the second download's data.
A UUID, a database-generated sequence, or at minimum a monotonic counter (AtomicU64) would be collision-free.
```rust
// Example: atomic counter in CommandBus
let id = DownloadId(self.next_id.fetch_add(1, Ordering::SeqCst));
```

```rust
let event = download.pause()?;
self.download_repo().save(&download)?;
self.download_engine().pause(cmd.id)?;
self.event_bus().publish(event);
```
Repository saved before engine confirmation — state diverges on engine failure
If download_engine().pause() fails, the repository already holds DownloadState::Paused but the engine is still actively downloading. The event is never published, so the QueueManager does not decrement its slot count either, leaving active_count overstated.
The engine call should come before repo.save, or the save should be rolled back on engine failure:
```rust
let event = download.pause()?;
self.download_engine().pause(cmd.id)?; // confirm first
self.download_repo().save(&download)?;
self.event_bus().publish(event);
```

The same ordering issue exists in resume_download.rs (lines 20–23).
```rust
if cmd.delete_files {
    let meta_path = format!("{}.vortex-meta", download.destination_path());
    let _ = self.file_storage().delete_meta(Path::new(&meta_path));
}
```
delete_files=true only removes the sidecar, not the downloaded content
FileStorage exposes only delete_meta, so when cmd.delete_files is true the handler deletes the .vortex-meta file but leaves the actual downloaded file untouched. The parameter name delete_files strongly implies the content file is also cleaned up.
Either the FileStorage port needs a delete_file(&self, path: &Path) method (so the handler can also remove download.destination_path()), or the flag should be renamed to delete_meta to accurately describe what it does and avoid surprising callers.
```rust
}
DomainEvent::DownloadResumed { .. } => {
    self.active_count.fetch_add(1, Ordering::SeqCst);
    Ok(())
```
DownloadResumed increments active_count without checking max_concurrent
handle_resume_all resumes every paused download unconditionally and publishes one DownloadResumed event per download. The QueueManager handler increments active_count on each event with no cap check. If max_concurrent=2 and 5 downloads are paused, resuming all 5 sets active_count to 5 and starts all 5 in the engine simultaneously, bypassing the concurrency limit entirely.
The handler should guard the increment, or handle_resume_all should respect the available slots (e.g. only resume up to max - active downloads, or delegate scheduling to on_slot_freed).
Actionable comments posted: 6
🧹 Nitpick comments (5)
src-tauri/src/application/commands/remove_download.rs (2)
34-37: Behavioral inconsistency with cancel_download: event emission is conditional here but unconditional there.

remove_download only publishes DownloadCancelled when is_active is true (Lines 34-37), while cancel_download always publishes the event regardless of state (see cancel_download.rs Lines 35-36 in context snippet 4). This is likely correct from a slot-accounting perspective (non-active downloads don't occupy slots), but creates a semantic inconsistency. Consider documenting this distinction or unifying the behavior if the event serves other purposes (e.g., UI notifications).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/remove_download.rs` around lines 34 - 37, The remove_download handler currently only publishes DomainEvent::DownloadCancelled when is_active is true, causing a semantic inconsistency with cancel_download which always emits DownloadCancelled; either make behavior consistent or document the distinction. To fix: decide whether DownloadCancelled should represent a logical cancellation (emit unconditionally) or only an active-slot release (emit conditionally), then update remove_download (function remove_download) to match cancel_download by always publishing DomainEvent::DownloadCancelled { id: cmd.id } if you choose unconditional emission, or add a clear comment/docstring in remove_download and cancel_download explaining that remove_download emits only for active downloads for slot-accounting reasons and leave code as-is so callers/UIs understand the intended semantics; reference DomainEvent::DownloadCancelled, remove_download, cancel_download, and the is_active check to locate the change.
316-385: Tests cover key scenarios; consider adding a test for non-active removal without event emission.

Current tests verify active-download cancellation emits an event, but there's no explicit test confirming that removing a non-active (e.g., Queued) download does NOT emit DownloadCancelled. This would document the intended conditional behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/remove_download.rs` around lines 316 - 385, Add a test that removes a non-active download and asserts no cancellation event or engine cancel call occurs: create a non-active download via make_download(), push it into MockDownloadRepo with make_harness(), call harness.bus.handle_remove_download(RemoveDownloadCommand { id: DownloadId(1), delete_files: false }).await.unwrap(), then assert that harness.engine.cancelled is empty and harness.event_bus.events does NOT contain DomainEvent::DownloadCancelled { id: DownloadId(1) } (name the test e.g. test_remove_non_active_no_cancel) to document the intended conditional behavior.

src-tauri/src/application/commands/start_download.rs (1)

22-31: Probed metadata (_file_size, _resume_supported) is not utilized.

The HEAD response extracts content-length and accept-ranges but these values are discarded. If the Download entity or engine can leverage file size for pre-allocation or resume capability, consider passing these through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/start_download.rs` around lines 22 - 31, The HEAD probe currently computes FileSize and resume support but discards them; update the start_download flow to pass the discovered values into the Download creation/path that will use them (so pre-allocation or resume can be enabled). Specifically, in the block around self.http_client().head(...) where you call extract_filename, map resp.content_length().map(FileSize) and compute resume from resp.header("accept-ranges"), thread those values through instead of binding to _file_size and _resume_supported — e.g., propagate the FileSize and resume boolean into whatever constructs or function calls create or initialize a Download (or into the download engine's start function) so the Download type (or download start function) can use them for pre-allocation and resume logic.

src-tauri/src/application/commands/pause_all.rs (2)

15-22: Inconsistent error handling between save() and pause().

Line 17 propagates save() errors with ?, aborting the entire batch and returning the error. Line 18 silently ignores download_engine().pause() errors with let _ =. This asymmetry means:

- A save failure mid-batch aborts, but already-processed downloads remain paused.
- An engine pause failure is silently ignored, yet the download is still counted as successfully paused.

If the intent is best-effort batch processing, consider using let _ = for both or logging engine failures. If strict consistency is needed, propagate both errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/pause_all.rs` around lines 15 - 22, The loop mixes propagated errors (self.download_repo().save(&dl)? ) with ignored ones (let _ = self.download_engine().pause(dl.id())), so make handling consistent: change to best-effort batch semantics by catching and logging errors from both save and download_engine().pause instead of using ?, only increment count and publish the event when both save and engine pause succeed, and continue to the next download on any failure; locate the calls to dl.pause(), download_repo().save(&dl), download_engine().pause(dl.id()), and event_bus().publish(event) to implement this (use your existing logger/error reporting method to log failures).
283-326: Tests verify core functionality; consider adding a partial-failure test.

The current tests cover successful batch pause and empty-set scenarios. A test simulating save() failure mid-batch would help document the expected partial-success behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/commands/pause_all.rs` around lines 283 - 326, Add a new async test that seeds the repo with multiple active downloads (use make_downloading and DownloadId), then configure the MockDownloadRepo to fail on save() for one of them (simulate partial failure), call CommandBus.handle_pause_all(PauseAllDownloadsCommand), and assert the returned count equals the number of successful pauses; also verify that successfully saved downloads have state DownloadState::Paused, the engine (MockDownloadEngine) received pause calls only for successes, and the event bus (MockEventBus) emitted events only for successful pauses to document expected partial-success behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/application/commands/pause_download.rs`:
- Around line 20-23: The code currently saves the Paused state before ensuring
the engine actually paused; change the ordering so the engine pause is attempted
first and only on success update the domain, persist and publish. Concretely:
call download_engine().pause(cmd.id) and handle any error (return it) before
calling download.pause(), then call download_repo().save(&download)? and
event_bus().publish(event). Alternatively, if you must derive the event from
download.pause(), perform the state transition in-memory into a temporary
variable but only persist/publish after download_engine().pause(cmd.id)
succeeds.
In `@src-tauri/src/application/commands/remove_download.rs`:
- Around line 27-30: The flag cmd.delete_files currently only removes the
.vortex-meta via self.file_storage().delete_meta but the name implies the
downloaded file should also be removed; update the remove_download handling to
also delete the actual download at download.destination_path() when
cmd.delete_files is true (call the appropriate file removal method on
self.file_storage(), e.g., delete_file or equivalent) and still delete the
.vortex-meta, or if the original intent was to only remove metadata rename the
parameter (cmd.delete_files -> cmd.delete_metadata or cmd.cleanup_meta_files)
and adjust call sites and docs accordingly; refer to cmd.delete_files,
download.destination_path(), and self.file_storage().delete_meta to locate the
logic to change.
In `@src-tauri/src/application/commands/resume_all.rs`:
- Around line 14-18: The code persists the download as resumed (dl.resume() +
download_repo().save) and publishes success before the engine actually resumes
it, and it also unconditionally calls download_engine().resume for every item
which can bypass concurrency limits; change the flow so you first attempt to
resume via download_engine().resume(dl.id()) and check its Result, and only if
that returns Ok then call dl.resume(), download_repo().save(&dl) and
event_bus().publish(event) and increment count; if download_engine().resume
returns Err, do not persist the resumed state or increment count—log or publish
a failure event instead. If your engine API supports enqueueing resumes to
respect concurrency, use that enqueue method (instead of calling resume for
every item) so the bulk path does not bypass the queue cap.
In `@src-tauri/src/application/commands/start_download.rs`:
- Around line 38-43: The current DownloadId creation using
SystemTime::now().as_millis() in start_download.rs can collide under concurrent
requests; replace it with a collision-safe generator such as a process-wide
AtomicU64 counter (e.g., add a static NEXT_ID: AtomicU64 and return
DownloadId(NEXT_ID.fetch_add(1, Ordering::Relaxed)) in the start_download ID
generation) or switch to a UUID/randomized approach (use the uuid crate or XOR
the timestamp with rand::random::<u64>()) and update the code path that
constructs DownloadId (the block creating DownloadId in start_download.rs) to
call the new generator function instead.
In `@src-tauri/src/application/services/queue_manager.rs`:
- Around line 253-255: The DownloadRetrying event can be handled before
schedule_retry() registers its cancellation token, allowing on_slot_freed() to
restart a download early; fix by ensuring the cancellation token is
inserted/registered before emitting DomainEvent::DownloadRetrying (or call
schedule_retry() prior to publishing), or alternatively make the
DomainEvent::DownloadRetrying handling check for an existing cancellation token
in the retry registry and ignore the event if no token is present; locate
schedule_retry(), the code that publishes DomainEvent::DownloadRetrying, and
on_slot_freed() to implement the change.
In `@src-tauri/src/lib.rs`:
- Around line 35-45: The AppState type is not registered with Tauri, so all IPC
handlers that request State<'_, AppState> (download_start, download_pause,
download_resume, download_cancel, download_retry, download_pause_all,
download_resume_all, download_set_priority, download_remove) will fail; fix this
by calling Builder::manage(...) with a properly constructed AppState instance
inside your run()/main Tauri builder before invoke_handler (i.e., add
.manage(AppState { /* initialize fields used by those handlers */ }) to the
tauri::Builder chain so State<'_, AppState> can be extracted by the listed
handlers).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 90b18947-6619-45dc-956f-30b11d5b07a9
📒 Files selected for processing (15)
- CHANGELOG.md
- src-tauri/src/adapters/driving/mod.rs
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/application/commands/cancel_download.rs
- src-tauri/src/application/commands/mod.rs
- src-tauri/src/application/commands/pause_all.rs
- src-tauri/src/application/commands/pause_download.rs
- src-tauri/src/application/commands/remove_download.rs
- src-tauri/src/application/commands/resume_all.rs
- src-tauri/src/application/commands/resume_download.rs
- src-tauri/src/application/commands/retry_download.rs
- src-tauri/src/application/commands/set_priority.rs
- src-tauri/src/application/commands/start_download.rs
- src-tauri/src/application/services/queue_manager.rs
- src-tauri/src/lib.rs
```rust
let event = download.pause()?;
self.download_repo().save(&download)?;
self.download_engine().pause(cmd.id)?;
self.event_bus().publish(event);
```
Don’t persist Paused before the engine has actually stopped.
If download_engine().pause(cmd.id) fails, Line 21 has already saved Paused, so the repo/UI diverge from a still-running transfer. Roll back on engine error, or only persist/publish after the pause succeeds.
```rust
if cmd.delete_files {
    let meta_path = format!("{}.vortex-meta", download.destination_path());
    let _ = self.file_storage().delete_meta(Path::new(&meta_path));
}
```
delete_files only removes metadata, not the actual downloaded file.
The parameter name delete_files suggests it would delete the downloaded content, but the implementation only removes the .vortex-meta file. If this is intentional (e.g., user should manually delete files), consider renaming to delete_metadata or cleanup_meta_files for clarity. If actual file deletion is intended, the downloaded file at destination_path() should also be removed.
```rust
if let Ok(event) = dl.resume() {
    self.download_repo().save(&dl)?;
    let _ = self.download_engine().resume(dl.id());
    self.event_bus().publish(event);
    count += 1;
```
resume_all should not report success before the engine actually resumes the download.
Line 15 persists Downloading, Line 16 drops any engine error, and Lines 17-18 still publish/count a successful resume. That leaves failed resumes looking active, and this bulk path also bypasses the queue’s concurrency cap by resuming every paused item immediately.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/application/commands/resume_all.rs` around lines 14 - 18, The
code persists the download as resumed (dl.resume() + download_repo().save) and
publishes success before the engine actually resumes it, and it also
unconditionally calls download_engine().resume for every item which can bypass
concurrency limits; change the flow so you first attempt to resume via
download_engine().resume(dl.id()) and check its Result, and only if that returns
Ok then call dl.resume(), download_repo().save(&dl) and
event_bus().publish(event) and increment count; if download_engine().resume
returns Err, do not persist the resumed state or increment count—log or publish
a failure event instead. If your engine API supports enqueueing resumes to
respect concurrency, use that enqueue method (instead of calling resume for
every item) so the bulk path does not bypass the queue cap.
```rust
let id = DownloadId(
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64,
);
```
Potential ID collision under concurrent requests.
Using SystemTime::now().as_millis() for DownloadId generation can produce duplicate IDs if multiple downloads are started within the same millisecond. Consider using a monotonic counter, UUID, or combining timestamp with a random component.
Example fix using an atomic counter:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static NEXT_ID: AtomicU64 = AtomicU64::new(0);

fn generate_id() -> DownloadId {
    DownloadId(NEXT_ID.fetch_add(1, Ordering::Relaxed))
}
```

Or combine the timestamp with randomness:

```rust
let id = DownloadId(
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64
        ^ rand::random::<u64>(),
);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/application/commands/start_download.rs` around lines 38 - 43,
The current DownloadId creation using SystemTime::now().as_millis() in
start_download.rs can collide under concurrent requests; replace it with a
collision-safe generator such as a process-wide AtomicU64 counter (e.g., add a
static NEXT_ID: AtomicU64 and return DownloadId(NEXT_ID.fetch_add(1,
Ordering::Relaxed)) in the start_download ID generation) or switch to a
UUID/randomized approach (use the uuid crate or XOR the timestamp with
rand::random::<u64>()) and update the code path that constructs DownloadId (the
block creating DownloadId in start_download.rs) to call the new generator
function instead.
```rust
DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
    self.on_slot_freed().await
}
```
DownloadRetrying can race past the backoff window.
In the same file, Lines 171-173 publish DownloadRetrying before schedule_retry() inserts the cancellation token. With this new arm, the listener can win that race and call on_slot_freed() immediately, restarting the download before the delay expires.
Minimal fix:

```diff
- self.event_bus.publish(event);
- self.schedule_retry(id, download.retry_count());
+ self.schedule_retry(id, download.retry_count());
+ self.event_bus.publish(event);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/application/services/queue_manager.rs` around lines 253 - 255,
The DownloadRetrying event can be handled before schedule_retry() registers its
cancellation token, allowing on_slot_freed() to restart a download early; fix by
ensuring the cancellation token is inserted/registered before emitting
DomainEvent::DownloadRetrying (or call schedule_retry() prior to publishing), or
alternatively make the DomainEvent::DownloadRetrying handling check for an
existing cancellation token in the retry registry and ignore the event if no
token is present; locate schedule_retry(), the code that publishes
DomainEvent::DownloadRetrying, and on_slot_freed() to implement the change.
```rust
.invoke_handler(tauri::generate_handler![
    download_start,
    download_pause,
    download_resume,
    download_cancel,
    download_retry,
    download_pause_all,
    download_resume_all,
    download_set_priority,
    download_remove,
])
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
rg -n -C2 'pub struct AppState|State<.*AppState>|\.manage\(' \
  src-tauri/src/adapters/driving/tauri_ipc.rs \
  src-tauri/src/lib.rs
```

Repository: mpiton/vortex

🏁 Script executed:

```bash
# Check the size and read the lib.rs file to see the full Builder setup
wc -l src-tauri/src/lib.rs && cat -n src-tauri/src/lib.rs
```

🏁 Script executed:

```bash
# Find AppState initialization and construction
rg -n 'AppState\s*\{|AppState::new' src-tauri/src/adapters/driving/tauri_ipc.rs
```

🏁 Script executed:

```bash
sed -n '21,25p' src-tauri/src/adapters/driving/tauri_ipc.rs
```
AppState is never registered with Tauri.
The IPC commands in tauri_ipc.rs extract State<'_, AppState> (9 commands: download_start, download_pause, download_resume, download_cancel, download_retry, download_pause_all, download_resume_all, download_set_priority, download_remove), but run() does not call Builder::manage(AppState { ... }). Every invocation of these commands will fail at runtime with a state extraction error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/lib.rs` around lines 35 - 45, The AppState type is not
registered with Tauri, so all IPC handlers that request State<'_, AppState>
(download_start, download_pause, download_resume, download_cancel,
download_retry, download_pause_all, download_resume_all, download_set_priority,
download_remove) will fail; fix this by calling Builder::manage(...) with a
properly constructed AppState instance inside your run()/main Tauri builder
before invoke_handler (i.e., add .manage(AppState { /* initialize fields used by
those handlers */ }) to the tauri::Builder chain so State<'_, AppState> can be
extracted by the listed handlers).
There was a problem hiding this comment.
11 issues found across 15 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/application/commands/cancel_download.rs">
<violation number="1" location="src-tauri/src/application/commands/cancel_download.rs:35">
P2: DownloadCancelled is published even for non-active downloads, but QueueManager always decrements active_count for this event. Canceling a queued/paused download can drop the active count and allow extra downloads beyond the concurrency limit. Emit DownloadCancelled only when an active slot is actually freed (or carry state in the event).</violation>
</file>
<file name="src-tauri/src/application/commands/pause_all.rs">
<violation number="1" location="src-tauri/src/application/commands/pause_all.rs:18">
P1: `handle_pause_all` ignores `download_engine().pause` errors, so failed pauses are still counted and emitted as successful.</violation>
</file>
<file name="src-tauri/src/application/commands/pause_download.rs">
<violation number="1" location="src-tauri/src/application/commands/pause_download.rs:21">
P1: Pause is persisted before the engine pause call, so an engine error can leave the DB in `Paused` while the download is still running.</violation>
<violation number="2" location="src-tauri/src/application/commands/pause_download.rs:21">
P1: Resume in the engine before saving Downloading state so a failed engine resume cannot leave persisted state inconsistent.</violation>
</file>
<file name="src-tauri/src/application/commands/resume_all.rs">
<violation number="1" location="src-tauri/src/application/commands/resume_all.rs:15">
P1: `resume_all` ignores engine resume failures, causing false success (state/event/count) even when resume fails.</violation>
</file>
<file name="src-tauri/src/application/commands/start_download.rs">
<violation number="1" location="src-tauri/src/application/commands/start_download.rs:42">
P1: Do not generate `DownloadId` from millisecond timestamps; concurrent starts can produce duplicate IDs and corrupt download identity.</violation>
</file>
<file name="src-tauri/src/application/commands/set_priority.rs">
<violation number="1" location="src-tauri/src/application/commands/set_priority.rs:45">
P3: These test mocks duplicate the same `MockDownloadRepo`/`MockDownloadEngine` scaffolding already defined in other command tests (e.g., `start_download.rs`). Consider extracting the shared mocks into a common test helper module so future trait changes don’t require editing every handler test file.</violation>
</file>
<file name="src-tauri/src/application/services/queue_manager.rs">
<violation number="1" location="src-tauri/src/application/services/queue_manager.rs:256">
P2: `DownloadResumed` is emitted both by the resume command handler and the download engine. With this new handler incrementing `active_count` for every `DownloadResumed`, a single resume will increment twice and the queue manager can stop scheduling new downloads prematurely. Deduplicate the event (emit once) or guard the increment so it only happens once per resume.</violation>
<violation number="2" location="src-tauri/src/application/services/queue_manager.rs:257">
P1: Enforce `max_concurrent` when handling `DownloadResumed`; unconditionally incrementing `active_count` can oversubscribe slots and bypass queue limits.</violation>
</file>
<file name="src-tauri/src/lib.rs">
<violation number="1" location="src-tauri/src/lib.rs:35">
P0: Register `AppState` with the Tauri builder before `run`; commands taking `State<'_, AppState>` will fail at runtime if the state is not managed.</violation>
</file>
<file name="src-tauri/src/application/commands/remove_download.rs">
<violation number="1" location="src-tauri/src/application/commands/remove_download.rs:29">
P2: `delete_files` should also remove the downloaded content file, not just the `.vortex-meta` sidecar; current behavior leaves user data behind unexpectedly.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```rust
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![
```
P0: Register AppState with the Tauri builder before run; commands taking State<'_, AppState> will fail at runtime if the state is not managed.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/lib.rs, line 35:
<comment>Register `AppState` with the Tauri builder before `run`; commands taking `State<'_, AppState>` will fail at runtime if the state is not managed.</comment>
<file context>
@@ -24,9 +24,25 @@ pub use application::read_models::{
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
tauri::Builder::default()
+ .invoke_handler(tauri::generate_handler![
+ download_start,
+ download_pause,
</file context>
```rust
for mut dl in downloads {
    if let Ok(event) = dl.pause() {
        self.download_repo().save(&dl)?;
        let _ = self.download_engine().pause(dl.id());
```
P1: handle_pause_all ignores download_engine().pause errors, so failed pauses are still counted and emitted as successful.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/pause_all.rs, line 18:
<comment>`handle_pause_all` ignores `download_engine().pause` errors, so failed pauses are still counted and emitted as successful.</comment>
<file context>
@@ -0,0 +1,327 @@
+ for mut dl in downloads {
+ if let Ok(event) = dl.pause() {
+ self.download_repo().save(&dl)?;
+ let _ = self.download_engine().pause(dl.id());
+ self.event_bus().publish(event);
+ count += 1;
</file context>
```rust
.ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;

let event = download.pause()?;
self.download_repo().save(&download)?;
```
P1: Pause is persisted before the engine pause call, so an engine error can leave the DB in Paused while the download is still running.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/pause_download.rs, line 21:
<comment>Pause is persisted before the engine pause call, so an engine error can leave the DB in `Paused` while the download is still running.</comment>
<file context>
@@ -0,0 +1,321 @@
+ .ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;
+
+ let event = download.pause()?;
+ self.download_repo().save(&download)?;
+ self.download_engine().pause(cmd.id)?;
+ self.event_bus().publish(event);
</file context>
```rust
let mut count = 0u32;
for mut dl in downloads {
    if let Ok(event) = dl.resume() {
        self.download_repo().save(&dl)?;
```
P1: resume_all ignores engine resume failures, causing false success (state/event/count) even when resume fails.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/resume_all.rs, line 15:
<comment>`resume_all` ignores engine resume failures, causing false success (state/event/count) even when resume fails.</comment>
<file context>
@@ -0,0 +1,326 @@
+ let mut count = 0u32;
+ for mut dl in downloads {
+ if let Ok(event) = dl.resume() {
+ self.download_repo().save(&dl)?;
+ let _ = self.download_engine().resume(dl.id());
+ self.event_bus().publish(event);
</file context>
```rust
std::time::SystemTime::now()
    .duration_since(std::time::UNIX_EPOCH)
    .unwrap_or_default()
    .as_millis() as u64,
```
P1: Do not generate DownloadId from millisecond timestamps; concurrent starts can produce duplicate IDs and corrupt download identity.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/start_download.rs, line 42:
<comment>Do not generate `DownloadId` from millisecond timestamps; concurrent starts can produce duplicate IDs and corrupt download identity.</comment>
<file context>
@@ -0,0 +1,409 @@
+ std::time::SystemTime::now()
+ .duration_since(std::time::UNIX_EPOCH)
+ .unwrap_or_default()
+ .as_millis() as u64,
+ );
+
</file context>
```rust
.ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;

let event = download.pause()?;
self.download_repo().save(&download)?;
```
P1: Resume in the engine before saving Downloading state so a failed engine resume cannot leave persisted state inconsistent.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/pause_download.rs, line 21:
<comment>Resume in the engine before saving Downloading state so a failed engine resume cannot leave persisted state inconsistent.</comment>
<file context>
@@ -0,0 +1,321 @@
+ .ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;
+
+ let event = download.pause()?;
+ self.download_repo().save(&download)?;
+ self.download_engine().pause(cmd.id)?;
+ self.event_bus().publish(event);
</file context>
```rust
self.download_repo().delete(cmd.id)?;

// Emit event (QueueManager decrements slot if was active)
self.event_bus()
```
P2: DownloadCancelled is published even for non-active downloads, but QueueManager always decrements active_count for this event. Canceling a queued/paused download can drop the active count and allow extra downloads beyond the concurrency limit. Emit DownloadCancelled only when an active slot is actually freed (or carry state in the event).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/cancel_download.rs, line 35:
<comment>DownloadCancelled is published even for non-active downloads, but QueueManager always decrements active_count for this event. Canceling a queued/paused download can drop the active count and allow extra downloads beyond the concurrency limit. Emit DownloadCancelled only when an active slot is actually freed (or carry state in the event).</comment>
<file context>
@@ -0,0 +1,366 @@
+ self.download_repo().delete(cmd.id)?;
+
+ // Emit event (QueueManager decrements slot if was active)
+ self.event_bus()
+ .publish(DomainEvent::DownloadCancelled { id: cmd.id });
+
</file context>
```rust
DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
    self.on_slot_freed().await
}
DomainEvent::DownloadResumed { .. } => {
```
P2: DownloadResumed is emitted both by the resume command handler and the download engine. With this new handler incrementing active_count for every DownloadResumed, a single resume will increment twice and the queue manager can stop scheduling new downloads prematurely. Deduplicate the event (emit once) or guard the increment so it only happens once per resume.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/services/queue_manager.rs, line 256:
<comment>`DownloadResumed` is emitted both by the resume command handler and the download engine. With this new handler incrementing `active_count` for every `DownloadResumed`, a single resume will increment twice and the queue manager can stop scheduling new downloads prematurely. Deduplicate the event (emit once) or guard the increment so it only happens once per resume.</comment>
<file context>
@@ -247,6 +250,13 @@ impl QueueManager {
+ DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
+ self.on_slot_freed().await
+ }
+ DomainEvent::DownloadResumed { .. } => {
+ self.active_count.fetch_add(1, Ordering::SeqCst);
+ Ok(())
</file context>
```rust
if cmd.delete_files {
    let meta_path = format!("{}.vortex-meta", download.destination_path());
    let _ = self.file_storage().delete_meta(Path::new(&meta_path));
```
P2: delete_files should also remove the downloaded content file, not just the .vortex-meta sidecar; current behavior leaves user data behind unexpectedly.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/remove_download.rs, line 29:
<comment>`delete_files` should also remove the downloaded content file, not just the `.vortex-meta` sidecar; current behavior leaves user data behind unexpectedly.</comment>
<file context>
@@ -0,0 +1,386 @@
+
+ if cmd.delete_files {
+ let meta_path = format!("{}.vortex-meta", download.destination_path());
+ let _ = self.file_storage().delete_meta(Path::new(&meta_path));
+ }
+
</file context>
```rust
    EventBus, FileStorage, HttpClient, PluginLoader,
};

struct MockDownloadRepo {
```
P3: These test mocks duplicate the same MockDownloadRepo/MockDownloadEngine scaffolding already defined in other command tests (e.g., start_download.rs). Consider extracting the shared mocks into a common test helper module so future trait changes don’t require editing every handler test file.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/set_priority.rs, line 45:
<comment>These test mocks duplicate the same `MockDownloadRepo`/`MockDownloadEngine` scaffolding already defined in other command tests (e.g., `start_download.rs`). Consider extracting the shared mocks into a common test helper module so future trait changes don’t require editing every handler test file.</comment>
<file context>
@@ -0,0 +1,286 @@
+ EventBus, FileStorage, HttpClient, PluginLoader,
+ };
+
+ struct MockDownloadRepo {
+ store: Mutex<HashMap<u64, Download>>,
+ }
</file context>
Actionable comments posted: 1
🧹 Nitpick comments (1)
src-tauri/src/application/queries/get_download_detail.rs (1)
144-158: Consider asserting the specific error variant.

The test verifies an error is returned but doesn't confirm it's specifically a `NotFound` error. This could mask incorrect error types in future refactors.

🔧 Suggested improvement

```diff
     let result = bus
         .handle_get_download_detail(GetDownloadDetailQuery {
             id: DownloadId(999),
         })
         .await;
-    assert!(result.is_err());
+    assert!(matches!(
+        result,
+        Err(crate::application::error::AppError::NotFound(_))
+    ));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/queries/get_download_detail.rs` around lines 144 - 158, Update the test_get_download_detail_not_found to assert the specific NotFound error instead of only checking is_err(); after calling bus.handle_get_download_detail(...) unwrap the error (e.g., let err = result.unwrap_err()) and pattern-match or use assert_matches! to verify it is the NotFound variant (referencing the test function name test_get_download_detail_not_found, the QueryBus::handle_get_download_detail call, GetDownloadDetailQuery and DownloadId). Ensure the assertion checks the exact error enum variant (e.g., MyError::NotFound or RepoError::NotFound) used by the handler.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driving/tauri_ipc.rs`:
- Around line 26-30: AppState is never registered with Tauri so commands that
request State<'_, AppState> will panic; fix by constructing an AppState instance
(wrap CommandBus and QueryBus in Arc and set AppState { command_bus, query_bus
}) before building the app and call .manage(app_state) on the
tauri::Builder::default() chain (the same builder that calls .invoke_handler!
for download_start/download_pause/... handlers) so Tauri can inject State<'_,
AppState> into your command handlers.
---
Nitpick comments:
In `@src-tauri/src/application/queries/get_download_detail.rs`:
- Around line 144-158: Update the test_get_download_detail_not_found to assert
the specific NotFound error instead of only checking is_err(); after calling
bus.handle_get_download_detail(...) unwrap the error (e.g., let err =
result.unwrap_err()) and pattern-match or use assert_matches! to verify it is
the NotFound variant (referencing the test function name
test_get_download_detail_not_found, the QueryBus::handle_get_download_detail
call, GetDownloadDetailQuery and DownloadId). Ensure the assertion checks the
exact error enum variant (e.g., MyError::NotFound or RepoError::NotFound) used
by the handler.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0acf2f08-97d3-4579-9e15-8001ada8d5ad
📒 Files selected for processing (7)
- CHANGELOG.md
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/application/queries/count_by_state.rs
- src-tauri/src/application/queries/get_download_detail.rs
- src-tauri/src/application/queries/get_downloads.rs
- src-tauri/src/application/queries/mod.rs
- src-tauri/src/lib.rs
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
- src-tauri/src/lib.rs
```rust
/// Shared application state managed by Tauri.
pub struct AppState {
    pub command_bus: Arc<CommandBus>,
    pub query_bus: Arc<QueryBus>,
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify that .manage() is called with AppState in lib.rs
echo "Checking for .manage() call in lib.rs..."
rg -n '\.manage\(' src-tauri/src/lib.rs
echo ""
echo "Checking AppState usage in lib.rs..."
rg -n 'AppState' src-tauri/src/lib.rs
```

🏁 Script executed:

```bash
cat -n src-tauri/src/lib.rs | head -100
```

🏁 Script executed:

```bash
grep -A 30 'pub struct AppState' src-tauri/src/adapters/driving/tauri_ipc.rs
```

🏁 Script executed:

```bash
grep -B 3 -A 5 'async fn download_' src-tauri/src/adapters/driving/tauri_ipc.rs | head -50
```

🏁 Script executed:

```bash
# Verify that no AppState is constructed anywhere in lib.rs
rg -n 'AppState\s*{' src-tauri/src/lib.rs
# Check if there's any managed state setup
rg -n '\.manage' src-tauri/src/lib.rs
# Verify that main.rs or other entry point doesn't handle this
find src-tauri/src -name 'main.rs' -type f
```

🏁 Script executed:

```bash
# Simpler pattern search for AppState construction
rg -n 'AppState' src-tauri/src/lib.rs
# Check main.rs to see if AppState is managed there
cat src-tauri/src/main.rs
# Verify .manage is not in lib.rs (already confirmed from earlier search)
echo "Checking for ANY state management in lib.rs..."
rg -n 'manage\|state' src-tauri/src/lib.rs
```
Critical: AppState is not managed by Tauri — all commands will panic at runtime.
The AppState struct is defined in tauri_ipc.rs and imported in lib.rs (line 28), but the Tauri builder in lib.rs (lines 34-52) never calls .manage(AppState { ... }). Every command handler (download_start, download_pause, etc.) requires State<'_, AppState> injection, which Tauri cannot provide without explicitly registering the managed state. This will cause a runtime panic on the first command invocation.
Fix in src-tauri/src/lib.rs:
Apply .manage() call
```rust
pub fn run() {
    // Construct dependencies and create AppState
    let command_bus = Arc::new(CommandBus::new(/* dependencies */));
    let query_bus = Arc::new(QueryBus::new(/* dependencies */));
    let app_state = AppState { command_bus, query_bus };

    tauri::Builder::default()
        .manage(app_state) // <-- Add this line
        .invoke_handler(tauri::generate_handler![
            download_start,
            download_pause,
            download_resume,
            download_cancel,
            download_retry,
            download_pause_all,
            download_resume_all,
            download_set_priority,
            download_remove,
            download_list,
            download_detail,
            download_count_by_state,
        ])
        .run(tauri::generate_context!())
        .expect("fatal: failed to start Vortex");
}
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/adapters/driving/tauri_ipc.rs` around lines 26 - 30, AppState
is never registered with Tauri so commands that request State<'_, AppState> will
panic; fix by constructing an AppState instance (wrap CommandBus and QueryBus in
Arc and set AppState { command_bus, query_bus }) before building the app and
call .manage(app_state) on the tauri::Builder::default() chain (the same builder
that calls .invoke_handler! for download_start/download_pause/... handlers) so
Tauri can inject State<'_, AppState> into your command handlers.
Summary
- `impl CommandBus` methods in separate files
- `DownloadCreated`, `DownloadResumed`, `DownloadRetrying` events for slot-aware scheduling
- `AppState` struct and 9 `#[tauri::command]` functions wired into `lib.rs`
- `SetPriorityCommand`, `RemoveDownloadCommand`

Handlers

| Command | Handler |
| --- | --- |
| StartDownload | handle_start_download |
| PauseDownload | handle_pause_download |
| ResumeDownload | handle_resume_download |
| CancelDownload | handle_cancel_download |
| RetryDownload | handle_retry_download |
| PauseAll | handle_pause_all |
| ResumeAll | handle_resume_all |
| SetPriority | handle_set_priority |
| RemoveDownload | handle_remove_download |

IPC Convention

- `download_{action}` naming: `download_start`, `download_pause`, `download_resume`, `download_cancel`, `download_retry`, `download_pause_all`, `download_resume_all`, `download_set_priority`, `download_remove`

Test plan

- `cargo clippy -- -D warnings` clean
- `cargo fmt --check` clean

Summary by cubic

Implements Linear Task 11 by adding 9 download command handlers and 3 query handlers, exposed via Tauri IPC. Also updates the queue manager to schedule on DownloadCreated/Resumed/Retrying events.

New Features

- `CommandBus` handlers: start, pause, resume, cancel, retry, pause all, resume all, set priority (1–10), remove (optional file delete).
- `QueryBus` handlers: list downloads (filter/sort/paginate), download detail, count by state.
- New commands: `SetPriorityCommand` and `RemoveDownloadCommand`.
- Tauri IPC adapter exposing `AppState`; queue manager now reacts to `DownloadCreated`, `DownloadResumed`, and `DownloadRetrying` for slot-aware scheduling.

Migration

- New IPC commands: `download_start`, `download_pause`, `download_resume`, `download_cancel`, `download_retry`, `download_pause_all`, `download_resume_all`, `download_set_priority`, `download_remove`, `download_list`, `download_detail`, `download_count_by_state`.

Written for commit fecb4a9. Summary will update on new commits.