
feat(core): implement download command handlers (task 11)#11

Open
mpiton wants to merge 2 commits into `main` from `feat/11-download-commands`

Conversation


@mpiton mpiton commented Apr 8, 2026

Summary

  • 9 CQRS command handlers implemented as impl CommandBus methods in separate files
  • QueueManager extended to react to DownloadCreated, DownloadResumed, DownloadRetrying events for slot-aware scheduling
  • Tauri IPC driving adapter with AppState struct and 9 #[tauri::command] functions wired into lib.rs
  • 2 new command types added: SetPriorityCommand, RemoveDownloadCommand

Handlers

| Command | Handler | Behavior |
|---|---|---|
| StartDownload | `handle_start_download` | URL validation, HEAD metadata, entity creation, `DownloadCreated` event |
| PauseDownload | `handle_pause_download` | State: Downloading → Paused, `engine.pause()`, `DownloadPaused` event |
| ResumeDownload | `handle_resume_download` | State: Paused → Downloading, `engine.resume()`, `DownloadResumed` event |
| CancelDownload | `handle_cancel_download` | Engine cancel, DB delete, `.vortex-meta` cleanup, `DownloadCancelled` event |
| RetryDownload | `handle_retry_download` | State: Error → Retry, circuit breaker via `MaxRetriesExceeded` |
| PauseAll | `handle_pause_all` | Batch pause of all Downloading, returns count |
| ResumeAll | `handle_resume_all` | Batch resume of all Paused, returns count |
| SetPriority | `handle_set_priority` | Priority validation (1-10), queue reordering |
| RemoveDownload | `handle_remove_download` | Cancel if active, DB delete, optional file cleanup |
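The pause row in the table above can be sketched as a plain state-machine transition that yields a domain event. This is an illustrative sketch only: `Download`, `DownloadState`, and `DomainEvent` here are simplified stand-ins for the real domain types, which are not shown in this PR description.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum DownloadState {
    Downloading,
    Paused,
}

#[derive(Debug, PartialEq)]
enum DomainEvent {
    DownloadPaused { id: u64 },
}

struct Download {
    id: u64,
    state: DownloadState,
}

impl Download {
    // Domain state machine: only Downloading -> Paused is a legal pause.
    fn pause(&mut self) -> Result<DomainEvent, String> {
        match self.state {
            DownloadState::Downloading => {
                self.state = DownloadState::Paused;
                Ok(DomainEvent::DownloadPaused { id: self.id })
            }
            other => Err(format!("cannot pause from {other:?}")),
        }
    }
}

fn main() {
    let mut dl = Download { id: 1, state: DownloadState::Downloading };
    let event = dl.pause().expect("Downloading -> Paused is legal");
    assert_eq!(event, DomainEvent::DownloadPaused { id: 1 });
    assert_eq!(dl.state, DownloadState::Paused);
    // A second pause is rejected by the domain model, mirroring the table's state checks.
    assert!(dl.pause().is_err());
    println!("ok");
}
```

Putting the transition on the entity keeps the handler itself thin: it only loads, transitions, persists, and publishes.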

IPC Convention

download_{action} naming: download_start, download_pause, download_resume, download_cancel, download_retry, download_pause_all, download_resume_all, download_set_priority, download_remove

Test plan

  • 30 new handler unit tests (220 total, all pass)
  • cargo clippy -- -D warnings clean
  • cargo fmt --check clean
  • Pre-commit hooks pass (lefthook: no-secrets, rust-fmt, rust-clippy)
  • Integration test with actual Tauri invoke (requires frontend, deferred to task 18)

Summary by cubic

Implements Linear Task 11 by adding 9 download command handlers and 3 query handlers, exposed via Tauri IPC. Also updates the queue manager to schedule on DownloadCreated/Resumed/Retrying events.

  • New Features

    • Added 9 CommandBus handlers: start, pause, resume, cancel, retry, pause all, resume all, set priority (1–10), remove (optional file delete).
    • Added 3 QueryBus handlers: list downloads (filter/sort/paginate), download detail, count by state.
    • Introduced SetPriorityCommand and RemoveDownloadCommand.
    • Added Tauri IPC driving adapter with AppState; queue manager now reacts to DownloadCreated, DownloadResumed, and DownloadRetrying for slot-aware scheduling.
  • Migration

    • Frontend should call new IPC commands: download_start, download_pause, download_resume, download_cancel, download_retry, download_pause_all, download_resume_all, download_set_priority, download_remove, download_list, download_detail, download_count_by_state.

Written for commit fecb4a9. Summary will update on new commits.

Summary by CodeRabbit

  • New Features
    • Desktop IPC endpoints for full download lifecycle (start/pause/resume/cancel/retry), batch pause/resume, set priority, remove, and list/detail/count queries
    • New command and query handlers powering the above operations
  • Performance / Behavior
    • Queue manager now reacts to create/resume/retry events for improved scheduling
  • Documentation
    • Changelog updated with runtime integration and IPC/command/query surface details

Add 9 CQRS command handlers as methods on CommandBus, each in its own
file under application/commands/:

- StartDownload: URL validation, HEAD metadata, entity creation, event-driven queue scheduling
- Pause/Resume: domain state machine transitions with engine control
- Cancel: engine stop, DB cleanup, .vortex-meta removal
- Retry: circuit breaker integration via domain retry() with MaxRetriesExceeded
- PauseAll/ResumeAll: batch operations on active/paused downloads
- SetPriority: priority update (1-10) for queue reordering
- RemoveDownload: full cleanup with optional file deletion

Also includes:
- QueueManager extended to react to DownloadCreated, DownloadResumed, DownloadRetrying events
- Tauri IPC driving adapter with AppState and 9 #[tauri::command] functions
- SetPriorityCommand and RemoveDownloadCommand added to command types
- 30 new tests (220 total), clippy clean
@github-actions bot added the `documentation` and `rust` labels on Apr 8, 2026

coderabbitai bot commented Apr 8, 2026

📝 Walkthrough

Adds 9 concrete CommandBus command handlers and 3 QueryBus query handlers, a Tauri IPC driving adapter exposing those operations via #[tauri::command] functions with shared AppState, updates QueueManager event scheduling to include DownloadCreated/DownloadResumed/DownloadRetrying, and wires handlers into the Tauri app builder.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Changelog**<br>`CHANGELOG.md` | Documented event-driven scheduling for DownloadCreated/DownloadResumed/DownloadRetrying, new CQRS handlers, and the Tauri IPC surface. |
| **Driving adapter / IPC**<br>`src-tauri/src/adapters/driving/mod.rs`, `src-tauri/src/adapters/driving/tauri_ipc.rs` | Added `tauri_ipc` module; new `AppState { Arc<CommandBus>, Arc<QueryBus> }` and many `#[tauri::command]` async handlers mapping IPC inputs to domain commands/queries and returning DTOs/errors as strings. |
| **Public exports / Tauri wiring**<br>`src-tauri/src/lib.rs` | Re-exported `tauri_ipc` items and registered IPC handlers with `tauri::generate_handler!`. |
| **Command declarations**<br>`src-tauri/src/application/commands/mod.rs` | Added submodules for new handlers and new command types `SetPriorityCommand` and `RemoveDownloadCommand`; adjusted dead-code cfg attrs. |
| **Command handlers**<br>`src-tauri/src/application/commands/...` | Implemented handlers: start_download, pause_download, resume_download, cancel_download, retry_download, pause_all, resume_all, set_priority, remove_download. Each coordinates domain transitions, persistence, engine calls, and event publishing; includes extensive unit tests and mocks. |
| **Query handlers**<br>`src-tauri/src/application/queries/mod.rs`, `.../get_downloads.rs`, `.../get_download_detail.rs`, `.../count_by_state.rs` | Added query handler methods on QueryBus for listing, detail, and counting by state; parsing/filtering/sorting logic exercised in tests; updated module attrs. |
| **Queue manager**<br>`src-tauri/src/application/services/queue_manager.rs` | Extended event filtering to forward DownloadCreated, DownloadResumed, and DownloadRetrying into the scheduling flow; Created/Retrying trigger slot-filling, Resumed increments the active count. |
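The extended event filtering in the queue manager can be pictured roughly as follows. This is a sketch, not the actual `queue_manager.rs`: the event names come from the PR, while `try_fill_slot` and the struct layout are invented for illustration.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Event names from the PR; payloads and scheduling body are simplified.
enum DomainEvent {
    DownloadCreated { id: u64 },
    DownloadResumed { id: u64 },
    DownloadRetrying { id: u64 },
    DownloadPaused { id: u64 },
}

struct QueueManager {
    active_count: AtomicUsize,
    max_concurrent: usize,
}

impl QueueManager {
    fn on_event(&self, event: &DomainEvent) {
        match event {
            // Created/Retrying: a queued download may now be startable.
            DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
                self.try_fill_slot();
            }
            // Resumed: the engine is already running this download, so count it.
            DomainEvent::DownloadResumed { .. } => {
                self.active_count.fetch_add(1, Ordering::SeqCst);
            }
            // Paused frees a slot and may let a queued download start.
            DomainEvent::DownloadPaused { .. } => {
                self.active_count.fetch_sub(1, Ordering::SeqCst);
                self.try_fill_slot();
            }
        }
    }

    fn try_fill_slot(&self) {
        if self.active_count.load(Ordering::SeqCst) < self.max_concurrent {
            // placeholder: fetch the next Queued download and hand it to the engine
        }
    }
}

fn main() {
    let qm = QueueManager { active_count: AtomicUsize::new(0), max_concurrent: 2 };
    qm.on_event(&DomainEvent::DownloadResumed { id: 1 });
    assert_eq!(qm.active_count.load(Ordering::SeqCst), 1);
    qm.on_event(&DomainEvent::DownloadPaused { id: 1 });
    assert_eq!(qm.active_count.load(Ordering::SeqCst), 0);
    println!("ok");
}
```

Note that the uncapped `fetch_add` on `DownloadResumed` in this sketch mirrors the concurrency-cap concern raised in the review comments further down.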

Sequence Diagram(s)

sequenceDiagram
    participant Frontend
    participant TauriIPC as Tauri IPC Adapter
    participant CommandBus
    participant Repo as Download Repository
    participant Engine as Download Engine
    participant EventBus
    participant FileStorage

    Frontend->>TauriIPC: download_start(url, destination)
    TauriIPC->>CommandBus: start_download(cmd)
    CommandBus->>Engine: HTTP HEAD probe / metadata
    Engine-->>CommandBus: metadata
    CommandBus->>Repo: save(new Download)
    Repo-->>CommandBus: ok
    CommandBus->>EventBus: publish(DownloadCreated)
    EventBus-->>CommandBus: ok
    CommandBus-->>TauriIPC: Result<u64,String>
    TauriIPC-->>Frontend: id

    Frontend->>TauriIPC: download_pause(id)
    TauriIPC->>CommandBus: pause_download(cmd)
    CommandBus->>Repo: find_by_id(id)
    Repo-->>CommandBus: download
    CommandBus->>Engine: pause(id)
    Engine-->>CommandBus: ok
    CommandBus->>Repo: save(paused)
    CommandBus->>EventBus: publish(DownloadPaused)
    CommandBus-->>TauriIPC: Result<(),String>

    Frontend->>TauriIPC: download_remove(id, delete_files)
    TauriIPC->>CommandBus: remove_download(cmd)
    CommandBus->>Repo: find_by_id(id)
    Repo-->>CommandBus: download
    alt active
        CommandBus->>Engine: cancel(id)
        Engine-->>CommandBus: ok
    end
    alt delete_files
        CommandBus->>FileStorage: delete(id.vortex-meta)
        FileStorage-->>CommandBus: ok
    end
    CommandBus->>Repo: delete(id)
    CommandBus->>EventBus: publish(DownloadCancelled)
    CommandBus-->>TauriIPC: Result<(),String>

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Poem

🐰 Hop, I stitched nine handlers tonight,

IPC bells ringing soft and bright,
Start, pause, resume—each tiny chore,
Events hop in, queue asks for more,
A rabbit's cheer for code that soars.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 11.41%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly and concisely summarizes the primary change: implementation of download command handlers with the CQRS pattern, referenced as task 11. |




Add 3 CQRS query handlers as methods on QueryBus:

- GetDownloadsQuery: filtered/sorted/paginated list via DownloadReadRepository
- GetDownloadDetailQuery: full detail with segments, NotFound error handling
- CountDownloadsByStateQuery: state-grouped counts for UI filter badges

Tauri IPC queries:
- download_list: filter by state/search, sort by field/direction, pagination
- download_detail: single download with segment breakdown
- download_count_by_state: HashMap<state, count> for badges

String parsing in IPC layer for DownloadState, SortField, SortDirection.
8 new tests (228 total), clippy clean.
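The string parsing mentioned above typically lands on a standard `FromStr` implementation. The sketch below is illustrative: the exact state names and the `String` error type are assumptions, not the PR's actual code.

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum DownloadState {
    Queued,
    Downloading,
    Paused,
    Completed,
    Error,
}

impl FromStr for DownloadState {
    type Err = String;

    // Case-insensitive parse so the frontend can send "Paused" or "paused".
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "queued" => Ok(Self::Queued),
            "downloading" => Ok(Self::Downloading),
            "paused" => Ok(Self::Paused),
            "completed" => Ok(Self::Completed),
            "error" => Ok(Self::Error),
            other => Err(format!("unknown download state: {other}")),
        }
    }
}

fn main() {
    assert_eq!("Paused".parse::<DownloadState>(), Ok(DownloadState::Paused));
    assert!("bogus".parse::<DownloadState>().is_err());
    println!("ok");
}
```

Keeping the parse in the IPC layer (rather than the domain) means the domain types never see raw frontend strings, and a bad filter value surfaces as an IPC-level error string.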
greptile-apps bot commented Apr 8, 2026

Greptile Summary

This PR implements 9 CQRS command handlers, extends QueueManager for slot-aware scheduling, and wires a full Tauri IPC adapter — a substantial and well-structured addition. However, five P1 defects need to be addressed before merge:

  • lib.rs: AppState is never passed to .manage(), so every IPC command will panic at runtime.
  • start_download.rs: Millisecond-timestamp DownloadId collides under concurrent starts, silently overwriting earlier records.
  • pause_download.rs / resume_download.rs: Repository state is persisted before the engine confirms the operation; an engine failure leaves DB and engine out of sync.
  • remove_download.rs: delete_files=true only removes the .vortex-meta sidecar, not the actual downloaded content.
  • queue_manager.rs / resume_all.rs: DownloadResumed increments active_count without checking max_concurrent, allowing resume_all to exceed the concurrency limit.

Confidence Score: 4/5

Not safe to merge — the missing AppState registration will crash the app on any IPC call, and four additional P1 defects affect data integrity and correctness.

Five P1 findings: (1) AppState never managed by Tauri — guaranteed runtime panic; (2) timestamp-based ID causes silent overwrite under concurrent starts; (3) save-before-engine in pause/resume leads to diverged state on engine failure; (4) delete_files only removes metadata, not content; (5) resume_all bypasses max_concurrent. All are present defects on the changed code paths.

src-tauri/src/lib.rs (missing .manage), src-tauri/src/application/commands/start_download.rs (ID generation), src-tauri/src/application/commands/pause_download.rs and resume_download.rs (operation ordering), src-tauri/src/application/commands/remove_download.rs (file deletion), src-tauri/src/application/services/queue_manager.rs (resume concurrency cap)

Vulnerabilities

No security concerns identified. URL validation is delegated to the Url domain type before any network I/O. File paths are constructed from user-provided destination but go through PathBuf without direct shell execution. IPC error messages surface via e.to_string() which may leak internal detail to the frontend, but this is low risk in a local desktop app context.

Important Files Changed

| Filename | Overview |
|---|---|
| `src-tauri/src/lib.rs` | All nine IPC commands are registered in `invoke_handler`, but `AppState` is never passed to `.manage()`, causing a runtime panic on any frontend invocation. |
| `src-tauri/src/application/commands/start_download.rs` | Handler implemented correctly except for collision-prone millisecond-timestamp `DownloadId` generation. |
| `src-tauri/src/application/commands/pause_download.rs` | Saves the Paused state to the DB before confirming `engine.pause()` succeeds; a failed engine call leaves DB and engine state diverged. |
| `src-tauri/src/application/commands/resume_download.rs` | Same save-before-engine ordering issue as pause_download; an engine failure leaves the DB in Downloading state while the engine is still paused. |
| `src-tauri/src/application/commands/remove_download.rs` | `delete_files=true` only removes the `.vortex-meta` sidecar via `delete_meta`; the actual downloaded content file is never deleted despite the misleading flag name. |
| `src-tauri/src/application/services/queue_manager.rs` | Extended with DownloadCreated/DownloadResumed/DownloadRetrying event handling; DownloadResumed increments `active_count` without checking `max_concurrent`, enabling resume_all to exceed the concurrency limit. |
| `src-tauri/src/application/commands/resume_all.rs` | Resumes all paused downloads without checking `max_concurrent`; combined with the QueueManager's uncapped DownloadResumed handler, this bypasses the concurrency limit. |
| `src-tauri/src/adapters/driving/tauri_ipc.rs` | Clean IPC adapter with correct command mapping; all commands delegate to CommandBus and convert errors to strings. |
| `src-tauri/src/application/commands/pause_all.rs` | Batch pause implementation is correct; individual pause errors are silently ignored via `if let Ok`, which may mask partial failures. |
| `src-tauri/src/application/commands/cancel_download.rs` | Cancel handler correctly checks the active state before `engine.cancel()`, cleans up metadata, deletes from the repo, and emits DownloadCancelled. |
| `src-tauri/src/application/commands/retry_download.rs` | Retry handler correctly delegates to the domain model's `retry()`, handles MaxRetriesExceeded, and persists state. |
| `src-tauri/src/application/commands/set_priority.rs` | Validates the priority range through `Priority::new()` and persists the update. No event is emitted for queue-reordering notification. |
| `src-tauri/src/application/commands/mod.rs` | Clean command type definitions; SetPriorityCommand and RemoveDownloadCommand added correctly. |

Sequence Diagram

sequenceDiagram
    participant FE as Frontend
    participant IPC as tauri_ipc
    participant CB as CommandBus
    participant Repo as DownloadRepository
    participant Eng as DownloadEngine
    participant EB as EventBus
    participant QM as QueueManager

    FE->>IPC: download_start(url, dest)
    IPC->>CB: handle_start_download(cmd)
    CB->>Repo: save(download)
    CB->>EB: publish(DownloadCreated)
    EB-->>QM: DownloadCreated - on_slot_freed()
    QM->>Repo: find_by_state(Queued)
    QM->>Eng: start(download)
    QM->>EB: publish(DownloadStarted)
    CB-->>IPC: Ok(id)
    IPC-->>FE: u64 id

    FE->>IPC: download_pause(id)
    IPC->>CB: handle_pause_download(cmd)
    CB->>Repo: save(Paused)
    CB->>Eng: pause(id)
    CB->>EB: publish(DownloadPaused)
    EB-->>QM: DownloadPaused - decrement_and_schedule()

    FE->>IPC: download_resume_all()
    IPC->>CB: handle_resume_all(cmd)
    CB->>Repo: find_by_state(Paused)
    loop each paused download
        CB->>Repo: save(Downloading)
        CB->>Eng: resume(id)
        CB->>EB: publish(DownloadResumed)
        EB-->>QM: DownloadResumed - active_count++ no cap check
    end


Comment on lines 33 to 48
```rust
pub fn run() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![
            download_start,
            download_pause,
            download_resume,
            download_cancel,
            download_retry,
            download_pause_all,
            download_resume_all,
            download_set_priority,
            download_remove,
        ])
        .run(tauri::generate_context!())
        // Tauri's run() has no meaningful recovery path — panic is intentional here
        .expect("fatal: failed to start Vortex");
```

P1 AppState never registered with Tauri

run() wires the invoke_handler with all nine commands but never calls .manage(AppState { ... }). Every handler parameter is state: State<'_, AppState>, so Tauri will panic at runtime the moment any frontend call arrives — the managed state simply doesn't exist.

```rust
// After .invoke_handler(...)
.manage(AppState {
    command_bus: Arc::new(/* build CommandBus */),
    query_bus: Arc::new(/* build QueryBus */),
})
```

The full wiring (building CommandBus, QueueManager, etc.) needs to happen here — or in a setup closure — before .run() is called.


Comment on lines +38 to +43
```rust
let id = DownloadId(
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64,
);
```

P1 Timestamp-based DownloadId collides under concurrent starts

Two StartDownload calls arriving within the same millisecond produce the same DownloadId. The second repo.save() silently overwrites the first record; the first caller receives a valid-looking ID that now points to the second download's data.

A UUID, a database-generated sequence, or at minimum a monotonic counter (AtomicU64) would be collision-free.

```rust
// Example: atomic counter in CommandBus
let id = DownloadId(self.next_id.fetch_add(1, Ordering::SeqCst));
```
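Expanded into a runnable sketch, the atomic-counter approach guarantees unique IDs even when many starts race. The `IdGenerator` type below is illustrative, not code from this PR.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Process-wide monotonic ID source: unique even under concurrent StartDownload calls.
struct IdGenerator {
    next: AtomicU64,
}

impl IdGenerator {
    fn new(start: u64) -> Self {
        Self { next: AtomicU64::new(start) }
    }

    fn next_id(&self) -> u64 {
        // fetch_add returns the previous value, so every caller gets a distinct ID.
        self.next.fetch_add(1, Ordering::Relaxed)
    }
}

fn main() {
    let source = Arc::new(IdGenerator::new(1));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let src = Arc::clone(&source);
            thread::spawn(move || (0..100).map(|_| src.next_id()).collect::<Vec<u64>>())
        })
        .collect();
    let mut ids: Vec<u64> = handles.into_iter().flat_map(|h| h.join().unwrap()).collect();
    ids.sort_unstable();
    ids.dedup();
    assert_eq!(ids.len(), 800); // no collisions across 8 threads x 100 starts each
    println!("ok");
}
```

A counter does not survive restarts, so in practice it would need to be seeded from the highest persisted ID (or replaced by a UUID or DB sequence, as the comment suggests).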


Comment on lines +20 to +23
```rust
let event = download.pause()?;
self.download_repo().save(&download)?;
self.download_engine().pause(cmd.id)?;
self.event_bus().publish(event);
```

P1 Repository saved before engine confirmation — state diverges on engine failure

If download_engine().pause() fails, the repository already holds DownloadState::Paused but the engine is still actively downloading. The event is never published, so the QueueManager does not decrement its slot count either, leaving active_count overstated.

The engine call should come before repo.save, or the save should be rolled back on engine failure:

```rust
let event = download.pause()?;
self.download_engine().pause(cmd.id)?;   // confirm first
self.download_repo().save(&download)?;
self.event_bus().publish(event);
```

The same ordering issue exists in resume_download.rs (lines 20–23).


Comment on lines +27 to +30
```rust
if cmd.delete_files {
    let meta_path = format!("{}.vortex-meta", download.destination_path());
    let _ = self.file_storage().delete_meta(Path::new(&meta_path));
}
```

P1 delete_files=true only removes the sidecar, not the downloaded content

FileStorage exposes only delete_meta, so when cmd.delete_files is true the handler deletes the .vortex-meta file but leaves the actual downloaded file untouched. The parameter name delete_files strongly implies the content file is also cleaned up.

Either the FileStorage port needs a delete_file(&self, path: &Path) method (so the handler can also remove download.destination_path()), or the flag should be renamed to delete_meta to accurately describe what it does and avoid surprising callers.
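If the port does grow a content-deletion method, the handler-side cleanup could look like the sketch below. `delete_download_files` is a hypothetical helper (not part of this PR); it removes both the content file and the sidecar, and treats a missing file as already deleted rather than as an error.

```rust
use std::fs::{self, File};
use std::io;
use std::path::Path;

// Hypothetical cleanup: remove the downloaded content AND its .vortex-meta sidecar.
fn delete_download_files(destination: &str) -> io::Result<()> {
    let meta = format!("{destination}.vortex-meta");
    for path in [destination, meta.as_str()] {
        match fs::remove_file(path) {
            Ok(()) => {}
            // Already gone: cleanup is idempotent, not a failure.
            Err(e) if e.kind() == io::ErrorKind::NotFound => {}
            Err(e) => return Err(e),
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let dest = std::env::temp_dir().join("vortex-demo.bin");
    let dest_str = dest.to_string_lossy().into_owned();
    File::create(&dest)?;
    File::create(format!("{dest_str}.vortex-meta"))?;

    delete_download_files(&dest_str)?;
    assert!(!dest.exists());
    assert!(!Path::new(&format!("{dest_str}.vortex-meta")).exists());

    // Idempotent: deleting again is a no-op.
    delete_download_files(&dest_str)?;
    println!("ok");
    Ok(())
}
```

In the real codebase this logic would sit behind the FileStorage port rather than call `std::fs` directly, so the handler stays testable with mocks.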


Comment on lines +255 to +258
```rust
}
DomainEvent::DownloadResumed { .. } => {
    self.active_count.fetch_add(1, Ordering::SeqCst);
    Ok(())
```
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 DownloadResumed increments active_count without checking max_concurrent

handle_resume_all resumes every paused download unconditionally and publishes one DownloadResumed event per download. The QueueManager handler increments active_count on each event with no cap check. If max_concurrent=2 and 5 downloads are paused, resuming all 5 sets active_count to 5 and starts all 5 in the engine simultaneously, bypassing the concurrency limit entirely.

The handler should guard the increment, or handle_resume_all should respect the available slots (e.g. only resume up to max - active downloads, or delegate scheduling to on_slot_freed).
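A guarded increment can be done lock-free with a compare-exchange loop. The sketch below is illustrative; `try_claim_slot` is a hypothetical helper, not code from this PR.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Atomically claim a slot only while active < max; returns false when at capacity.
fn try_claim_slot(active: &AtomicUsize, max: usize) -> bool {
    let mut current = active.load(Ordering::SeqCst);
    loop {
        if current >= max {
            return false;
        }
        match active.compare_exchange(current, current + 1, Ordering::SeqCst, Ordering::SeqCst) {
            Ok(_) => return true,
            // Lost the race to another thread; retry with the freshly observed value.
            Err(observed) => current = observed,
        }
    }
}

fn main() {
    let active = AtomicUsize::new(0);
    let max = 2;
    // Five paused downloads try to resume; only two slots are granted.
    let granted = (0..5).filter(|_| try_claim_slot(&active, max)).count();
    assert_eq!(granted, 2);
    assert_eq!(active.load(Ordering::SeqCst), 2);
    println!("ok");
}
```

With this guard in the `DownloadResumed` path, `resume_all` could leave the downloads that lose the race in the Queued state and let `on_slot_freed` pick them up as slots open.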


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (5)
src-tauri/src/application/commands/remove_download.rs (2)

34-37: Behavioral inconsistency with cancel_download: event emission is conditional here but unconditional there.

remove_download only publishes DownloadCancelled when is_active is true (Lines 34-37), while cancel_download always publishes the event regardless of state (see cancel_download.rs Lines 35-36 in context snippet 4).

This is likely correct from a slot-accounting perspective (non-active downloads don't occupy slots), but creates a semantic inconsistency. Consider documenting this distinction or unifying the behavior if the event serves other purposes (e.g., UI notifications).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/remove_download.rs` around lines 34 - 37,
The remove_download handler currently only publishes
DomainEvent::DownloadCancelled when is_active is true, causing a semantic
inconsistency with cancel_download which always emits DownloadCancelled; either
make behavior consistent or document the distinction. To fix: decide whether
DownloadCancelled should represent a logical cancellation (emit unconditionally)
or only an active-slot release (emit conditionally), then update remove_download
(function remove_download) to match cancel_download by always publishing
DomainEvent::DownloadCancelled { id: cmd.id } if you choose unconditional
emission, or add a clear comment/docstring in remove_download and
cancel_download explaining that remove_download emits only for active downloads
for slot-accounting reasons and leave code as-is so callers/UIs understand the
intended semantics; reference DomainEvent::DownloadCancelled, remove_download,
cancel_download, and the is_active check to locate the change.

316-385: Tests cover key scenarios; consider adding a test for non-active removal without event emission.

Current tests verify active-download cancellation emits an event, but there's no explicit test confirming that removing a non-active (e.g., Queued) download does NOT emit DownloadCancelled. This would document the intended conditional behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/remove_download.rs` around lines 316 -
385, Add a test that removes a non-active download and asserts no cancellation
event or engine cancel call occurs: create a non-active download via
make_download(), push it into MockDownloadRepo with make_harness(), call
harness.bus.handle_remove_download(RemoveDownloadCommand { id: DownloadId(1),
delete_files: false }).await.unwrap(), then assert that harness.engine.cancelled
is empty and harness.event_bus.events does NOT contain
DomainEvent::DownloadCancelled { id: DownloadId(1) } (name the test e.g.
test_remove_non_active_no_cancel) to document the intended conditional behavior.
src-tauri/src/application/commands/start_download.rs (1)

22-31: Probed metadata (_file_size, _resume_supported) is not utilized.

The HEAD response extracts content-length and accept-ranges but these values are discarded. If the Download entity or engine can leverage file size for pre-allocation or resume capability, consider passing these through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/start_download.rs` around lines 22 - 31,
The HEAD probe currently computes FileSize and resume support but discards them;
update the start_download flow to pass the discovered values into the Download
creation/path that will use them (so pre-allocation or resume can be enabled).
Specifically, in the block around self.http_client().head(...) where you call
extract_filename, map resp.content_length().map(FileSize) and compute resume
from resp.header("accept-ranges"), thread those values through instead of
binding to _file_size and _resume_supported — e.g., propagate the FileSize and
resume boolean into whatever constructs or function calls create or initialize a
Download (or into the download engine's start function) so the Download type (or
download start function) can use them for pre-allocation and resume logic.
src-tauri/src/application/commands/pause_all.rs (2)

15-22: Inconsistent error handling between save() and pause().

Line 17 propagates save() errors with ?, aborting the entire batch and returning the error. Line 18 silently ignores download_engine().pause() errors with let _ =. This asymmetry means:

  • A save failure mid-batch aborts, but already-processed downloads remain paused.
  • An engine pause failure is silently ignored, yet the download is still counted as successfully paused.

If the intent is best-effort batch processing, consider using let _ = for both or logging engine failures. If strict consistency is needed, propagate both errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/pause_all.rs` around lines 15 - 22, The
loop mixes propagated errors (self.download_repo().save(&dl)? ) with ignored
ones (let _ = self.download_engine().pause(dl.id())), so make handling
consistent: change to best-effort batch semantics by catching and logging errors
from both save and download_engine().pause instead of using ?, only increment
count and publish the event when both save and engine pause succeed, and
continue to the next download on any failure; locate the calls to dl.pause(),
download_repo().save(&dl), download_engine().pause(dl.id()), and
event_bus().publish(event) to implement this (use your existing logger/error
reporting method to log failures).

283-326: Tests verify core functionality; consider adding a partial-failure test.

The current tests cover successful batch pause and empty-set scenarios. A test simulating save() failure mid-batch would help document the expected partial-success behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/pause_all.rs` around lines 283 - 326, Add
a new async test that seeds the repo with multiple active downloads (use
make_downloading and DownloadId), then configure the MockDownloadRepo to fail on
save() for one of them (simulate partial failure), call
CommandBus.handle_pause_all(PauseAllDownloadsCommand), and assert the returned
count equals the number of successful pauses; also verify that successfully
saved downloads have state DownloadState::Paused, the engine
(MockDownloadEngine) received pause calls only for successes, and the event bus
(MockEventBus) emitted events only for successful pauses to document expected
partial-success behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/application/commands/pause_download.rs`:
- Around line 20-23: The code currently saves the Paused state before ensuring
the engine actually paused; change the ordering so the engine pause is attempted
first and only on success update the domain, persist and publish. Concretely:
call download_engine().pause(cmd.id) and handle any error (return it) before
calling download.pause(), then call download_repo().save(&download)? and
event_bus().publish(event). Alternatively, if you must derive the event from
download.pause(), perform the state transition in-memory into a temporary
variable but only persist/publish after download_engine().pause(cmd.id)
succeeds.

In `@src-tauri/src/application/commands/remove_download.rs`:
- Around line 27-30: The flag cmd.delete_files currently only removes the
.vortex-meta via self.file_storage().delete_meta but the name implies the
downloaded file should also be removed; update the remove_download handling to
also delete the actual download at download.destination_path() when
cmd.delete_files is true (call the appropriate file removal method on
self.file_storage(), e.g., delete_file or equivalent) and still delete the
.vortex-meta, or if the original intent was to only remove metadata rename the
parameter (cmd.delete_files -> cmd.delete_metadata or cmd.cleanup_meta_files)
and adjust call sites and docs accordingly; refer to cmd.delete_files,
download.destination_path(), and self.file_storage().delete_meta to locate the
logic to change.

In `@src-tauri/src/application/commands/resume_all.rs`:
- Around line 14-18: The code persists the download as resumed (dl.resume() +
download_repo().save) and publishes success before the engine actually resumes
it, and it also unconditionally calls download_engine().resume for every item
which can bypass concurrency limits; change the flow so you first attempt to
resume via download_engine().resume(dl.id()) and check its Result, and only if
that returns Ok then call dl.resume(), download_repo().save(&dl) and
event_bus().publish(event) and increment count; if download_engine().resume
returns Err, do not persist the resumed state or increment count—log or publish
a failure event instead. If your engine API supports enqueueing resumes to
respect concurrency, use that enqueue method (instead of calling resume for
every item) so the bulk path does not bypass the queue cap.

In `@src-tauri/src/application/commands/start_download.rs`:
- Around line 38-43: The current DownloadId creation using
SystemTime::now().as_millis() in start_download.rs can collide under concurrent
requests; replace it with a collision-safe generator such as a process-wide
AtomicU64 counter (e.g., add a static NEXT_ID: AtomicU64 and return
DownloadId(NEXT_ID.fetch_add(1, Ordering::Relaxed)) in the start_download ID
generation) or switch to a UUID/randomized approach (use the uuid crate or XOR
the timestamp with rand::random::<u64>()) and update the code path that
constructs DownloadId (the block creating DownloadId in start_download.rs) to
call the new generator function instead.

In `@src-tauri/src/application/services/queue_manager.rs`:
- Around line 253-255: The DownloadRetrying event can be handled before
schedule_retry() registers its cancellation token, allowing on_slot_freed() to
restart a download early; fix by ensuring the cancellation token is
inserted/registered before emitting DomainEvent::DownloadRetrying (or call
schedule_retry() prior to publishing), or alternatively make the
DomainEvent::DownloadRetrying handling check for an existing cancellation token
in the retry registry and ignore the event if no token is present; locate
schedule_retry(), the code that publishes DomainEvent::DownloadRetrying, and
on_slot_freed() to implement the change.

In `@src-tauri/src/lib.rs`:
- Around line 35-45: The AppState type is not registered with Tauri, so all IPC
handlers that request State<'_, AppState> (download_start, download_pause,
download_resume, download_cancel, download_retry, download_pause_all,
download_resume_all, download_set_priority, download_remove) will fail; fix this
by calling Builder::manage(...) with a properly constructed AppState instance
inside your run()/main Tauri builder before invoke_handler (i.e., add
.manage(AppState { /* initialize fields used by those handlers */ }) to the
tauri::Builder chain so State<'_, AppState> can be extracted by the listed
handlers).

---

Nitpick comments:
In `@src-tauri/src/application/commands/pause_all.rs`:
- Around line 15-22: The loop mixes propagated errors
(self.download_repo().save(&dl)? ) with ignored ones (let _ =
self.download_engine().pause(dl.id())), so make handling consistent: change to
best-effort batch semantics by catching and logging errors from both save and
download_engine().pause instead of using ?, only increment count and publish the
event when both save and engine pause succeed, and continue to the next download
on any failure; locate the calls to dl.pause(), download_repo().save(&dl),
download_engine().pause(dl.id()), and event_bus().publish(event) to implement
this (use your existing logger/error reporting method to log failures).
- Around line 283-326: Add a new async test that seeds the repo with multiple
active downloads (use make_downloading and DownloadId), then configure the
MockDownloadRepo to fail on save() for one of them (simulate partial failure),
call CommandBus.handle_pause_all(PauseAllDownloadsCommand), and assert the
returned count equals the number of successful pauses; also verify that
successfully saved downloads have state DownloadState::Paused, the engine
(MockDownloadEngine) received pause calls only for successes, and the event bus
(MockEventBus) emitted events only for successful pauses to document expected
partial-success behavior.
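The best-effort batch semantics suggested above can be sketched as follows; `Repo`, `Engine`, and the injectable `fail_on` failure are hypothetical stand-ins for the real ports:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Downloading,
    Paused,
}

// Repo whose save() can be made to fail for one id, simulating partial failure.
struct Repo {
    fail_on: Option<u64>,
}

impl Repo {
    fn save(&self, id: u64, _state: State) -> Result<(), String> {
        if self.fail_on == Some(id) {
            Err(format!("save failed for {id}"))
        } else {
            Ok(())
        }
    }
}

struct Engine;

impl Engine {
    fn pause(&self, _id: u64) -> Result<(), String> {
        Ok(())
    }
}

// Best-effort batch: log any failure and continue to the next download;
// count (and publish) only when both engine pause and save succeeded.
fn pause_all(repo: &Repo, engine: &Engine, active: &[u64]) -> u32 {
    let mut count = 0;
    for &id in active {
        if let Err(e) = engine.pause(id) {
            eprintln!("engine pause {id}: {e}");
            continue;
        }
        if let Err(e) = repo.save(id, State::Paused) {
            eprintln!("save {id}: {e}");
            continue;
        }
        // event_bus().publish(event) would go here
        count += 1;
    }
    count
}
```

A repo that fails to save one of three downloads yields a count of 2, documenting the partial-success contract the proposed test would assert.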

In `@src-tauri/src/application/commands/remove_download.rs`:
- Around line 34-37: The remove_download handler currently only publishes
DomainEvent::DownloadCancelled when is_active is true, causing a semantic
inconsistency with cancel_download which always emits DownloadCancelled; either
make behavior consistent or document the distinction. To fix: decide whether
DownloadCancelled should represent a logical cancellation (emit unconditionally)
or only an active-slot release (emit conditionally), then update remove_download
(function remove_download) to match cancel_download by always publishing
DomainEvent::DownloadCancelled { id: cmd.id } if you choose unconditional
emission, or add a clear comment/docstring in remove_download and
cancel_download explaining that remove_download emits only for active downloads
for slot-accounting reasons and leave code as-is so callers/UIs understand the
intended semantics; reference DomainEvent::DownloadCancelled, remove_download,
cancel_download, and the is_active check to locate the change.
- Around line 316-385: Add a test that removes a non-active download and asserts
no cancellation event or engine cancel call occurs: create a non-active download
via make_download(), push it into MockDownloadRepo with make_harness(), call
harness.bus.handle_remove_download(RemoveDownloadCommand { id: DownloadId(1),
delete_files: false }).await.unwrap(), then assert that harness.engine.cancelled
is empty and harness.event_bus.events does NOT contain
DomainEvent::DownloadCancelled { id: DownloadId(1) } (name the test e.g.
test_remove_non_active_no_cancel) to document the intended conditional behavior.

In `@src-tauri/src/application/commands/start_download.rs`:
- Around line 22-31: The HEAD probe currently computes FileSize and resume
support but discards them; update the start_download flow to pass the discovered
values into the Download creation/path that will use them (so pre-allocation or
resume can be enabled). Specifically, in the block around
self.http_client().head(...) where you call extract_filename, map
resp.content_length().map(FileSize) and compute resume from
resp.header("accept-ranges"), thread those values through instead of binding to
_file_size and _resume_supported — e.g., propagate the FileSize and resume
boolean into whatever constructs or function calls create or initialize a
Download (or into the download engine's start function) so the Download type (or
download start function) can use them for pre-allocation and resume logic.
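One way to thread the probe results through, using hypothetical shapes that mirror the names in the comment (`FileSize`, a `Download` with size/resume fields — the real types may differ):

```rust
// Hypothetical stand-ins for the real FileSize / Download types.
#[derive(Debug, Clone, Copy, PartialEq)]
struct FileSize(u64);

#[derive(Debug)]
struct Download {
    url: String,
    total_size: Option<FileSize>, // known size enables pre-allocation
    resume_supported: bool,       // Accept-Ranges: bytes enables resume
}

// Thread the HEAD-probe results into construction instead of binding them
// to `_file_size` / `_resume_supported` and dropping them.
fn create_download(url: &str, content_length: Option<u64>, accept_ranges: Option<&str>) -> Download {
    Download {
        url: url.to_string(),
        total_size: content_length.map(FileSize),
        resume_supported: matches!(accept_ranges, Some(v) if v.eq_ignore_ascii_case("bytes")),
    }
}
```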

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 90b18947-6619-45dc-956f-30b11d5b07a9

📥 Commits

Reviewing files that changed from the base of the PR and between 70bf444 and 0473fe8.

📒 Files selected for processing (15)
  • CHANGELOG.md
  • src-tauri/src/adapters/driving/mod.rs
  • src-tauri/src/adapters/driving/tauri_ipc.rs
  • src-tauri/src/application/commands/cancel_download.rs
  • src-tauri/src/application/commands/mod.rs
  • src-tauri/src/application/commands/pause_all.rs
  • src-tauri/src/application/commands/pause_download.rs
  • src-tauri/src/application/commands/remove_download.rs
  • src-tauri/src/application/commands/resume_all.rs
  • src-tauri/src/application/commands/resume_download.rs
  • src-tauri/src/application/commands/retry_download.rs
  • src-tauri/src/application/commands/set_priority.rs
  • src-tauri/src/application/commands/start_download.rs
  • src-tauri/src/application/services/queue_manager.rs
  • src-tauri/src/lib.rs

Comment on lines +20 to +23
let event = download.pause()?;
self.download_repo().save(&download)?;
self.download_engine().pause(cmd.id)?;
self.event_bus().publish(event);

⚠️ Potential issue | 🔴 Critical

Don’t persist Paused before the engine has actually stopped.

If download_engine().pause(cmd.id) fails, Line 21 has already saved Paused, so the repo/UI diverge from a still-running transfer. Roll back on engine error, or only persist/publish after the pause succeeds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/pause_download.rs` around lines 20 - 23,
The code currently saves the Paused state before ensuring the engine actually
paused; change the ordering so the engine pause is attempted first and only on
success update the domain, persist and publish. Concretely: call
download_engine().pause(cmd.id) and handle any error (return it) before calling
download.pause(), then call download_repo().save(&download)? and
event_bus().publish(event). Alternatively, if you must derive the event from
download.pause(), perform the state transition in-memory into a temporary
variable but only persist/publish after download_engine().pause(cmd.id)
succeeds.
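A minimal sketch of the engine-first ordering, with a stand-in `Engine` and `Download` (the real handler would still go through `download_repo().save` and `event_bus().publish` after the transition):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Downloading,
    Paused,
}

struct Engine {
    should_fail: bool,
}

impl Engine {
    fn pause(&self, _id: u64) -> Result<(), String> {
        if self.should_fail {
            Err("engine pause failed".into())
        } else {
            Ok(())
        }
    }
}

struct Download {
    id: u64,
    state: State,
}

impl Download {
    fn pause(&mut self) -> Result<&'static str, String> {
        match self.state {
            State::Downloading => {
                self.state = State::Paused;
                Ok("DownloadPaused") // stands in for the domain event
            }
            State::Paused => Err("not downloading".into()),
        }
    }
}

// Engine first: if engine.pause() errors we return before any state change,
// so the stored state cannot diverge from a still-running transfer.
fn handle_pause(engine: &Engine, dl: &mut Download) -> Result<&'static str, String> {
    engine.pause(dl.id)?;
    let event = dl.pause()?;
    // download_repo().save(dl)? and event_bus().publish(event) would go here
    Ok(event)
}
```

When the engine fails, the download stays `Downloading`; only a successful engine pause produces the state transition and event.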

Comment on lines +27 to +30
if cmd.delete_files {
let meta_path = format!("{}.vortex-meta", download.destination_path());
let _ = self.file_storage().delete_meta(Path::new(&meta_path));
}

⚠️ Potential issue | 🟡 Minor

delete_files only removes metadata, not the actual downloaded file.

The parameter name delete_files suggests it would delete the downloaded content, but the implementation only removes the .vortex-meta file. If this is intentional (e.g., user should manually delete files), consider renaming to delete_metadata or cleanup_meta_files for clarity. If actual file deletion is intended, the downloaded file at destination_path() should also be removed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/remove_download.rs` around lines 27 - 30,
The flag cmd.delete_files currently only removes the .vortex-meta via
self.file_storage().delete_meta but the name implies the downloaded file should
also be removed; update the remove_download handling to also delete the actual
download at download.destination_path() when cmd.delete_files is true (call the
appropriate file removal method on self.file_storage(), e.g., delete_file or
equivalent) and still delete the .vortex-meta, or if the original intent was to
only remove metadata rename the parameter (cmd.delete_files ->
cmd.delete_metadata or cmd.cleanup_meta_files) and adjust call sites and docs
accordingly; refer to cmd.delete_files, download.destination_path(), and
self.file_storage().delete_meta to locate the logic to change.
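If actual file deletion is the intent, a std-only sketch of removing both the payload and the sidecar (the real handler would go through `self.file_storage()` rather than `std::fs` directly):

```rust
use std::fs;
use std::path::Path;

// Remove both the downloaded payload and its `.vortex-meta` sidecar.
// Failures are best-effort here; a real handler would log them.
fn delete_download_files(destination: &Path) {
    let _ = fs::remove_file(destination); // the payload itself
    let meta = format!("{}.vortex-meta", destination.display());
    let _ = fs::remove_file(Path::new(&meta)); // the sidecar
}
```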

Comment on lines +14 to +18
if let Ok(event) = dl.resume() {
self.download_repo().save(&dl)?;
let _ = self.download_engine().resume(dl.id());
self.event_bus().publish(event);
count += 1;

⚠️ Potential issue | 🔴 Critical

resume_all should not report success before the engine actually resumes the download.

Line 15 persists Downloading, Line 16 drops any engine error, and Lines 17-18 still publish/count a successful resume. That leaves failed resumes looking active, and this bulk path also bypasses the queue’s concurrency cap by resuming every paused item immediately.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/resume_all.rs` around lines 14 - 18, The
code persists the download as resumed (dl.resume() + download_repo().save) and
publishes success before the engine actually resumes it, and it also
unconditionally calls download_engine().resume for every item which can bypass
concurrency limits; change the flow so you first attempt to resume via
download_engine().resume(dl.id()) and check its Result, and only if that returns
Ok then call dl.resume(), download_repo().save(&dl) and
event_bus().publish(event) and increment count; if download_engine().resume
returns Err, do not persist the resumed state or increment count—log or publish
a failure event instead. If your engine API supports enqueueing resumes to
respect concurrency, use that enqueue method (instead of calling resume for
every item) so the bulk path does not bypass the queue cap.

Comment on lines +38 to +43
let id = DownloadId(
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u64,
);

⚠️ Potential issue | 🟡 Minor

Potential ID collision under concurrent requests.

Using SystemTime::now().as_millis() for DownloadId generation can produce duplicate IDs if multiple downloads are started within the same millisecond. Consider using a monotonic counter, UUID, or combining timestamp with a random component.

Example fix using atomic counter
use std::sync::atomic::{AtomicU64, Ordering};

static NEXT_ID: AtomicU64 = AtomicU64::new(0);

fn generate_id() -> DownloadId {
    DownloadId(NEXT_ID.fetch_add(1, Ordering::Relaxed))
}

Or combine timestamp with randomness:

let id = DownloadId(
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap_or_default()
        .as_millis() as u64
        ^ rand::random::<u64>()
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/start_download.rs` around lines 38 - 43,
The current DownloadId creation using SystemTime::now().as_millis() in
start_download.rs can collide under concurrent requests; replace it with a
collision-safe generator such as a process-wide AtomicU64 counter (e.g., add a
static NEXT_ID: AtomicU64 and return DownloadId(NEXT_ID.fetch_add(1,
Ordering::Relaxed)) in the start_download ID generation) or switch to a
UUID/randomized approach (use the uuid crate or XOR the timestamp with
rand::random::<u64>()) and update the code path that constructs DownloadId (the
block creating DownloadId in start_download.rs) to call the new generator
function instead.
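The atomic-counter option above can be exercised under real concurrency to confirm it never collides (the `DownloadId` newtype here is a local stand-in for the project's type):

```rust
use std::collections::HashSet;
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct DownloadId(u64);

static NEXT_ID: AtomicU64 = AtomicU64::new(0);

// fetch_add is atomic, so every caller gets a distinct value even when
// many downloads start within the same millisecond.
fn generate_id() -> DownloadId {
    DownloadId(NEXT_ID.fetch_add(1, Ordering::Relaxed))
}

fn ids_from_threads(threads: usize, per_thread: usize) -> HashSet<DownloadId> {
    let handles: Vec<_> = (0..threads)
        .map(|_| thread::spawn(move || (0..per_thread).map(|_| generate_id()).collect::<Vec<_>>()))
        .collect();
    let mut all = HashSet::new();
    for h in handles {
        all.extend(h.join().unwrap());
    }
    all
}
```

Eight threads generating 100 IDs each produce 800 distinct values, whereas the timestamp scheme could collapse many of them onto the same millisecond.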

Comment on lines +253 to +255
DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
self.on_slot_freed().await
}

⚠️ Potential issue | 🔴 Critical

DownloadRetrying can race past the backoff window.

In the same file, Lines 171-173 publish DownloadRetrying before schedule_retry() inserts the cancellation token. With this new arm, the listener can win that race and call on_slot_freed() immediately, restarting the download before the delay expires.

Minimal fix
-                self.event_bus.publish(event);
-                self.schedule_retry(id, download.retry_count());
+                self.schedule_retry(id, download.retry_count());
+                self.event_bus.publish(event);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/services/queue_manager.rs` around lines 253 - 255,
The DownloadRetrying event can be handled before schedule_retry() registers its
cancellation token, allowing on_slot_freed() to restart a download early; fix by
ensuring the cancellation token is inserted/registered before emitting
DomainEvent::DownloadRetrying (or call schedule_retry() prior to publishing), or
alternatively make the DomainEvent::DownloadRetrying handling check for an
existing cancellation token in the retry registry and ignore the event if no
token is present; locate schedule_retry(), the code that publishes
DomainEvent::DownloadRetrying, and on_slot_freed() to implement the change.
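The "guard on token presence" alternative can be sketched with a hypothetical retry registry (names and shapes are assumptions, not the real `QueueManager` API):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Hypothetical retry registry: DownloadRetrying handling is ignored unless
// schedule_retry() already registered a cancellation token, which closes
// the publish-ordering race even if event ordering regresses later.
struct RetryRegistry {
    tokens: Mutex<HashSet<u64>>,
}

impl RetryRegistry {
    fn new() -> Self {
        Self { tokens: Mutex::new(HashSet::new()) }
    }

    // Called before (or instead of racing with) publishing the event.
    fn schedule_retry(&self, id: u64) {
        self.tokens.lock().unwrap().insert(id);
    }

    // Event handler: act only when a token exists, else drop the event.
    fn on_download_retrying(&self, id: u64) -> bool {
        self.tokens.lock().unwrap().contains(&id)
    }
}
```

An event arriving before `schedule_retry()` is simply ignored; once the token is registered, handling proceeds, so the backoff window can no longer be skipped.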

Comment on lines +35 to +45
.invoke_handler(tauri::generate_handler![
download_start,
download_pause,
download_resume,
download_cancel,
download_retry,
download_pause_all,
download_resume_all,
download_set_priority,
download_remove,
])

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
rg -n -C2 'pub struct AppState|State<.*AppState>|\.manage\(' \
  src-tauri/src/adapters/driving/tauri_ipc.rs \
  src-tauri/src/lib.rs

Repository: mpiton/vortex

Length of output: 3931


🏁 Script executed:

# Check the size and read the lib.rs file to see the full Builder setup
wc -l src-tauri/src/lib.rs && cat -n src-tauri/src/lib.rs

Repository: mpiton/vortex

Length of output: 2368


🏁 Script executed:

# Find AppState initialization and construction
rg -n 'AppState\s*\{|AppState::new' src-tauri/src/adapters/driving/tauri_ipc.rs

Repository: mpiton/vortex

Length of output: 81


🏁 Script executed:

sed -n '21,25p' src-tauri/src/adapters/driving/tauri_ipc.rs

Repository: mpiton/vortex

Length of output: 153


AppState is never registered with Tauri.

The IPC commands in tauri_ipc.rs extract State<'_, AppState> (9 commands: download_start, download_pause, download_resume, download_cancel, download_retry, download_pause_all, download_resume_all, download_set_priority, download_remove), but run() does not call Builder::manage(AppState { ... }). Every invocation of these commands will fail at runtime with a state extraction error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/lib.rs` around lines 35 - 45, The AppState type is not
registered with Tauri, so all IPC handlers that request State<'_, AppState>
(download_start, download_pause, download_resume, download_cancel,
download_retry, download_pause_all, download_resume_all, download_set_priority,
download_remove) will fail; fix this by calling Builder::manage(...) with a
properly constructed AppState instance inside your run()/main Tauri builder
before invoke_handler (i.e., add .manage(AppState { /* initialize fields used by
those handlers */ }) to the tauri::Builder chain so State<'_, AppState> can be
extracted by the listed handlers).


@cubic-dev-ai cubic-dev-ai bot left a comment


11 issues found across 15 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/application/commands/cancel_download.rs">

<violation number="1" location="src-tauri/src/application/commands/cancel_download.rs:35">
P2: DownloadCancelled is published even for non-active downloads, but QueueManager always decrements active_count for this event. Canceling a queued/paused download can drop the active count and allow extra downloads beyond the concurrency limit. Emit DownloadCancelled only when an active slot is actually freed (or carry state in the event).</violation>
</file>

<file name="src-tauri/src/application/commands/pause_all.rs">

<violation number="1" location="src-tauri/src/application/commands/pause_all.rs:18">
P1: `handle_pause_all` ignores `download_engine().pause` errors, so failed pauses are still counted and emitted as successful.</violation>
</file>

<file name="src-tauri/src/application/commands/pause_download.rs">

<violation number="1" location="src-tauri/src/application/commands/pause_download.rs:21">
P1: Pause is persisted before the engine pause call, so an engine error can leave the DB in `Paused` while the download is still running.</violation>

<violation number="2" location="src-tauri/src/application/commands/pause_download.rs:21">
P1: Resume in the engine before saving Downloading state so a failed engine resume cannot leave persisted state inconsistent.</violation>
</file>

<file name="src-tauri/src/application/commands/resume_all.rs">

<violation number="1" location="src-tauri/src/application/commands/resume_all.rs:15">
P1: `resume_all` ignores engine resume failures, causing false success (state/event/count) even when resume fails.</violation>
</file>

<file name="src-tauri/src/application/commands/start_download.rs">

<violation number="1" location="src-tauri/src/application/commands/start_download.rs:42">
P1: Do not generate `DownloadId` from millisecond timestamps; concurrent starts can produce duplicate IDs and corrupt download identity.</violation>
</file>

<file name="src-tauri/src/application/commands/set_priority.rs">

<violation number="1" location="src-tauri/src/application/commands/set_priority.rs:45">
P3: These test mocks duplicate the same `MockDownloadRepo`/`MockDownloadEngine` scaffolding already defined in other command tests (e.g., `start_download.rs`). Consider extracting the shared mocks into a common test helper module so future trait changes don’t require editing every handler test file.</violation>
</file>

<file name="src-tauri/src/application/services/queue_manager.rs">

<violation number="1" location="src-tauri/src/application/services/queue_manager.rs:256">
P2: `DownloadResumed` is emitted both by the resume command handler and the download engine. With this new handler incrementing `active_count` for every `DownloadResumed`, a single resume will increment twice and the queue manager can stop scheduling new downloads prematurely. Deduplicate the event (emit once) or guard the increment so it only happens once per resume.</violation>

<violation number="2" location="src-tauri/src/application/services/queue_manager.rs:257">
P1: Enforce `max_concurrent` when handling `DownloadResumed`; unconditionally incrementing `active_count` can oversubscribe slots and bypass queue limits.</violation>
</file>

<file name="src-tauri/src/lib.rs">

<violation number="1" location="src-tauri/src/lib.rs:35">
P0: Register `AppState` with the Tauri builder before `run`; commands taking `State<'_, AppState>` will fail at runtime if the state is not managed.</violation>
</file>

<file name="src-tauri/src/application/commands/remove_download.rs">

<violation number="1" location="src-tauri/src/application/commands/remove_download.rs:29">
P2: `delete_files` should also remove the downloaded content file, not just the `.vortex-meta` sidecar; current behavior leaves user data behind unexpectedly.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
tauri::Builder::default()
.invoke_handler(tauri::generate_handler![

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P0: Register AppState with the Tauri builder before run; commands taking State<'_, AppState> will fail at runtime if the state is not managed.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/lib.rs, line 35:

<comment>Register `AppState` with the Tauri builder before `run`; commands taking `State<'_, AppState>` will fail at runtime if the state is not managed.</comment>

<file context>
@@ -24,9 +24,25 @@ pub use application::read_models::{
 #[cfg_attr(mobile, tauri::mobile_entry_point)]
 pub fn run() {
     tauri::Builder::default()
+        .invoke_handler(tauri::generate_handler![
+            download_start,
+            download_pause,
</file context>

for mut dl in downloads {
if let Ok(event) = dl.pause() {
self.download_repo().save(&dl)?;
let _ = self.download_engine().pause(dl.id());

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P1: handle_pause_all ignores download_engine().pause errors, so failed pauses are still counted and emitted as successful.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/pause_all.rs, line 18:

<comment>`handle_pause_all` ignores `download_engine().pause` errors, so failed pauses are still counted and emitted as successful.</comment>

<file context>
@@ -0,0 +1,327 @@
+        for mut dl in downloads {
+            if let Ok(event) = dl.pause() {
+                self.download_repo().save(&dl)?;
+                let _ = self.download_engine().pause(dl.id());
+                self.event_bus().publish(event);
+                count += 1;
</file context>

.ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;

let event = download.pause()?;
self.download_repo().save(&download)?;

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P1: Pause is persisted before the engine pause call, so an engine error can leave the DB in Paused while the download is still running.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/pause_download.rs, line 21:

<comment>Pause is persisted before the engine pause call, so an engine error can leave the DB in `Paused` while the download is still running.</comment>

<file context>
@@ -0,0 +1,321 @@
+            .ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;
+
+        let event = download.pause()?;
+        self.download_repo().save(&download)?;
+        self.download_engine().pause(cmd.id)?;
+        self.event_bus().publish(event);
</file context>

let mut count = 0u32;
for mut dl in downloads {
if let Ok(event) = dl.resume() {
self.download_repo().save(&dl)?;

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P1: resume_all ignores engine resume failures, causing false success (state/event/count) even when resume fails.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/resume_all.rs, line 15:

<comment>`resume_all` ignores engine resume failures, causing false success (state/event/count) even when resume fails.</comment>

<file context>
@@ -0,0 +1,326 @@
+        let mut count = 0u32;
+        for mut dl in downloads {
+            if let Ok(event) = dl.resume() {
+                self.download_repo().save(&dl)?;
+                let _ = self.download_engine().resume(dl.id());
+                self.event_bus().publish(event);
</file context>

std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u64,

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P1: Do not generate DownloadId from millisecond timestamps; concurrent starts can produce duplicate IDs and corrupt download identity.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/start_download.rs, line 42:

<comment>Do not generate `DownloadId` from millisecond timestamps; concurrent starts can produce duplicate IDs and corrupt download identity.</comment>

<file context>
@@ -0,0 +1,409 @@
+            std::time::SystemTime::now()
+                .duration_since(std::time::UNIX_EPOCH)
+                .unwrap_or_default()
+                .as_millis() as u64,
+        );
+
</file context>

.ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;

let event = download.pause()?;
self.download_repo().save(&download)?;

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P1: Resume in the engine before saving Downloading state so a failed engine resume cannot leave persisted state inconsistent.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/pause_download.rs, line 21:

<comment>Resume in the engine before saving Downloading state so a failed engine resume cannot leave persisted state inconsistent.</comment>

<file context>
@@ -0,0 +1,321 @@
+            .ok_or_else(|| AppError::NotFound(format!("Download {} not found", cmd.id.0)))?;
+
+        let event = download.pause()?;
+        self.download_repo().save(&download)?;
+        self.download_engine().pause(cmd.id)?;
+        self.event_bus().publish(event);
</file context>

self.download_repo().delete(cmd.id)?;

// Emit event (QueueManager decrements slot if was active)
self.event_bus()

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P2: DownloadCancelled is published even for non-active downloads, but QueueManager always decrements active_count for this event. Canceling a queued/paused download can drop the active count and allow extra downloads beyond the concurrency limit. Emit DownloadCancelled only when an active slot is actually freed (or carry state in the event).

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/cancel_download.rs, line 35:

<comment>DownloadCancelled is published even for non-active downloads, but QueueManager always decrements active_count for this event. Canceling a queued/paused download can drop the active count and allow extra downloads beyond the concurrency limit. Emit DownloadCancelled only when an active slot is actually freed (or carry state in the event).</comment>

<file context>
@@ -0,0 +1,366 @@
+        self.download_repo().delete(cmd.id)?;
+
+        // Emit event (QueueManager decrements slot if was active)
+        self.event_bus()
+            .publish(DomainEvent::DownloadCancelled { id: cmd.id });
+
</file context>

DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
self.on_slot_freed().await
}
DomainEvent::DownloadResumed { .. } => {

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P2: DownloadResumed is emitted both by the resume command handler and the download engine. With this new handler incrementing active_count for every DownloadResumed, a single resume will increment twice and the queue manager can stop scheduling new downloads prematurely. Deduplicate the event (emit once) or guard the increment so it only happens once per resume.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/services/queue_manager.rs, line 256:

<comment>`DownloadResumed` is emitted both by the resume command handler and the download engine. With this new handler incrementing `active_count` for every `DownloadResumed`, a single resume will increment twice and the queue manager can stop scheduling new downloads prematurely. Deduplicate the event (emit once) or guard the increment so it only happens once per resume.</comment>

<file context>
@@ -247,6 +250,13 @@ impl QueueManager {
+                    DomainEvent::DownloadCreated { .. } | DomainEvent::DownloadRetrying { .. } => {
+                        self.on_slot_freed().await
+                    }
+                    DomainEvent::DownloadResumed { .. } => {
+                        self.active_count.fetch_add(1, Ordering::SeqCst);
+                        Ok(())
</file context>


if cmd.delete_files {
let meta_path = format!("{}.vortex-meta", download.destination_path());
let _ = self.file_storage().delete_meta(Path::new(&meta_path));

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P2: delete_files should also remove the downloaded content file, not just the .vortex-meta sidecar; current behavior leaves user data behind unexpectedly.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/application/commands/remove_download.rs, line 29:

<comment>`delete_files` should also remove the downloaded content file, not just the `.vortex-meta` sidecar; current behavior leaves user data behind unexpectedly.</comment>

<file context>
@@ -0,0 +1,386 @@
+
+        if cmd.delete_files {
+            let meta_path = format!("{}.vortex-meta", download.destination_path());
+            let _ = self.file_storage().delete_meta(Path::new(&meta_path));
+        }
+
</file context>

EventBus, FileStorage, HttpClient, PluginLoader,
};

struct MockDownloadRepo {

@cubic-dev-ai cubic-dev-ai bot Apr 8, 2026


P3: These test mocks duplicate the same MockDownloadRepo/MockDownloadEngine scaffolding already defined in other command tests (e.g., start_download.rs). Consider extracting the shared mocks into a common test helper module so future trait changes don’t require editing every handler test file.



coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
src-tauri/src/application/queries/get_download_detail.rs (1)

144-158: Consider asserting the specific error variant.

The test verifies an error is returned but doesn't confirm it's specifically a NotFound error. This could mask incorrect error types in future refactors.

🔧 Suggested improvement
     let result = bus
         .handle_get_download_detail(GetDownloadDetailQuery {
             id: DownloadId(999),
         })
         .await;
-    assert!(result.is_err());
+    assert!(matches!(
+        result,
+        Err(crate::application::error::AppError::NotFound(_))
+    ));

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0acf2f08-97d3-4579-9e15-8001ada8d5ad

📥 Commits

Reviewing files that changed from the base of the PR and between 0473fe8 and fecb4a9.

📒 Files selected for processing (7)
  • CHANGELOG.md
  • src-tauri/src/adapters/driving/tauri_ipc.rs
  • src-tauri/src/application/queries/count_by_state.rs
  • src-tauri/src/application/queries/get_download_detail.rs
  • src-tauri/src/application/queries/get_downloads.rs
  • src-tauri/src/application/queries/mod.rs
  • src-tauri/src/lib.rs
✅ Files skipped from review due to trivial changes (1)
  • CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • src-tauri/src/lib.rs

Comment on lines +26 to +30
/// Shared application state managed by Tauri.
pub struct AppState {
    pub command_bus: Arc<CommandBus>,
    pub query_bus: Arc<QueryBus>,
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that .manage() is called with AppState in lib.rs
echo "Checking for .manage() call in lib.rs..."
rg -n '\.manage\(' src-tauri/src/lib.rs

echo ""
echo "Checking AppState usage in lib.rs..."
rg -n 'AppState' src-tauri/src/lib.rs

Repository: mpiton/vortex

Length of output: 232


🏁 Script executed:

cat -n src-tauri/src/lib.rs | head -100

Repository: mpiton/vortex

Length of output: 2526


🏁 Script executed:

grep -A 30 'pub struct AppState' src-tauri/src/adapters/driving/tauri_ipc.rs

Repository: mpiton/vortex

Length of output: 848


🏁 Script executed:

grep -B 3 -A 5 'async fn download_' src-tauri/src/adapters/driving/tauri_ipc.rs | head -50

Repository: mpiton/vortex

Length of output: 1277


🏁 Script executed:

# Verify that no AppState is constructed anywhere in lib.rs
rg -n 'AppState\s*\{' src-tauri/src/lib.rs

# Check if there's any managed state setup
rg -n '\.manage' src-tauri/src/lib.rs

# Verify that main.rs or other entry point doesn't handle this
find src-tauri/src -name 'main.rs' -type f

Repository: mpiton/vortex

Length of output: 183


🏁 Script executed:

# Simpler pattern search for AppState construction
rg -n 'AppState' src-tauri/src/lib.rs

# Check main.rs to see if AppState is managed there
cat src-tauri/src/main.rs

# Verify .manage is not in lib.rs (already confirmed from earlier search)
echo "Checking for ANY state management in lib.rs..."
rg -n 'manage|state' src-tauri/src/lib.rs

Repository: mpiton/vortex

Length of output: 364


Critical: AppState is not managed by Tauri — all commands will panic at runtime.

The AppState struct is defined in tauri_ipc.rs and imported in lib.rs (line 28), but the Tauri builder in lib.rs (lines 34-52) never calls .manage(AppState { ... }). Every command handler (download_start, download_pause, etc.) requires State<'_, AppState> injection, which Tauri cannot provide without explicitly registering the managed state. This will cause a runtime panic on the first command invocation.

Fix in src-tauri/src/lib.rs:

Apply .manage() call
pub fn run() {
    // Construct dependencies and create AppState
    let command_bus = Arc::new(CommandBus::new(/* dependencies */));
    let query_bus = Arc::new(QueryBus::new(/* dependencies */));
    let app_state = AppState { command_bus, query_bus };

    tauri::Builder::default()
        .manage(app_state)  // <-- Add this line
        .invoke_handler(tauri::generate_handler![
            download_start,
            download_pause,
            download_resume,
            download_cancel,
            download_retry,
            download_pause_all,
            download_resume_all,
            download_set_priority,
            download_remove,
            download_list,
            download_detail,
            download_count_by_state,
        ])
        .run(tauri::generate_context!())
        .expect("fatal: failed to start Vortex");
}


Labels

documentation (Improvements or additions to documentation), rust
