feat(download): hoster wait-time flow with countdown UI (task 39)#151
Conversation
Adds a full wait-time pipeline so hoster cooldowns surface as a parked
download with a live countdown instead of a stalled `Waiting` row.
Backend:
* `adapters/driven/network/wait_manager.rs` owns one `tokio::time::sleep`
per parked download, drives the `Waiting -> Downloading` transitions
through `Download::wait` / `resume_from_wait`, and handles
schedule / cancel / skip / natural-expiry as four distinct paths.
* `DomainEvent::DownloadWaitingStarted { id, until_unix_ms,
total_seconds, reason }` and `::DownloadWaitingEnded { id,
expired_naturally }` published alongside the existing transition
signals; the Tauri bridge forwards them as `download-waiting-started`
/ `download-waiting-ended`.
* IPC `download_skip_wait(id)` calls `WaitManager::skip_wait`;
`download_cancel` now invokes `wait_manager.cancel_wait(id)` first so
the timer is aborted before the cancel handler runs.
* `Clock` port grew a default `now_unix_ms()`; `SystemClock` overrides
with millisecond precision.
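The bridged events imply frontend payload shapes like the following sketch (type names match the ones the PR adds in `src/types/events.ts`; the runtime guard `isWaitingStarted` is purely illustrative and not part of the PR):

```typescript
// Sketch of the bridged payloads as the frontend receives them.
// Field names follow the camelCase mapping described above.
type DownloadWaitingStartedPayload = {
  id: number;
  untilUnixMs: number;   // absolute deadline, millisecond precision
  totalSeconds: number;  // original wait length
  reason: string;        // e.g. a hoster cooldown message
};

type DownloadWaitingEndedPayload = {
  id: number;
  expiredNaturally: boolean; // false for skip/cancel, true for timer expiry
};

// Illustrative runtime guard for payloads crossing the IPC boundary untyped.
function isWaitingStarted(p: unknown): p is DownloadWaitingStartedPayload {
  if (typeof p !== "object" || p === null) return false;
  const o = p as Record<string, unknown>;
  return (
    typeof o.id === "number" &&
    typeof o.untilUnixMs === "number" &&
    typeof o.totalSeconds === "number" &&
    typeof o.reason === "string"
  );
}
```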
Frontend:
* `useCountdown(untilUnixMs)` ticks once per second and yields
`{ remainingSeconds, label, expired }` (`MM:SS`, or `HH:MM:SS` past an
hour) — `null` deadline becomes a no-op.
* `downloadStore.waitMap` populated from the new events via
`useDownloadEvents`, exposing per-id wait tickets.
* `WaitCountdownCell` replaces `EtaCell` while a row is `Waiting`,
rendering the live label plus a `SkipForward` icon button wired to
`download_skip_wait`.
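The countdown behaviour above boils down to two pure helpers, sketched here with hypothetical names (`remainingSeconds`, `formatLabel`); the real hook wraps equivalent logic in React state plus a 1 Hz interval:

```typescript
// Clamp at zero so an elapsed deadline reads 00:00, never negative.
function remainingSeconds(untilUnixMs: number, nowMs: number): number {
  return Math.max(0, Math.ceil((untilUnixMs - nowMs) / 1000));
}

// MM:SS below one hour, HH:MM:SS from one hour upward.
function formatLabel(totalSeconds: number): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const h = Math.floor(totalSeconds / 3600);
  const m = Math.floor((totalSeconds % 3600) / 60);
  const s = totalSeconds % 60;
  return h > 0 ? `${pad(h)}:${pad(m)}:${pad(s)}` : `${pad(m)}:${pad(s)}`;
}
```

A render would then compose them, e.g. `formatLabel(remainingSeconds(untilUnixMs, Date.now()))`.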
Tests:
* 7 `wait_manager` async tests under `#[tokio::test(start_paused =
true)]` cover schedule + Started payload, natural expiry, cancel,
skip, unknown-id skip, three concurrent waits expiring
independently, and silent cancel-on-unknown.
* 6 `useCountdown` tests + 5 `downloadStore` wait-map tests + 3
`WaitCountdownCell` tests + 2 new `DomainEvent` payload tests.
* `tokio` dev-deps gain the `test-util` feature for
`tokio::time::advance` determinism (lock bumped to tokio 1.52.2 by
`cargo update -p tokio`).
Aggregated findings from the three /simplify review agents.
Backend:
* Drop the redundant `.map_err(AppError::from)` on `Download::wait()` and `resume_from_wait()`; `AppError` already derives `#[from] DomainError`.
* Centralise `Mutex::lock()` poison handling in a private `handles()` helper so a panic inside any timer task no longer crashes every subsequent `cancel_wait` / `skip_wait` call.
* Move the `tokio::spawn` for the timer under the same mutex guard that inserts the resulting `JoinHandle`, closing the spawn-before-insert race that left an orphan entry when the timer fired (or `tokio::time::advance` ran in tests) before the parent inserted the handle.
* Replace the silent `let _ = me.expire_wait(id)` with a `tracing::debug!` on the dropped error.
* `download_remove` IPC now invokes `wait_manager.cancel_wait(id)` so a user deleting a Waiting download no longer leaves a spawned timer alive for the full hoster cooldown.
* Gate `active_count` behind `#[cfg(test)]`; it has no production caller.
* Trim WHAT-only doc comments on `WaitManager` and `new`, fix a confusing parenthetical on `cancel_wait`, and dedup the `drain_scheduler` / `settle_spawns` test helpers into a single `pump_runtime`.
Frontend:
* `useDownloadEvents` no longer invalidates downloads queries on `download-waiting-started` / `download-waiting-ended`; the accompanying `download-waiting` and `download-resumed-from-wait` events already trigger invalidation.
* `useCountdown` clears its `setInterval` once the deadline is reached, so a parked row that has expired stops firing 1 Hz no-op re-renders.
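The spawn-under-lock and cancel fixes follow one discipline: a handle map where rescheduling an id aborts the previous timer before inserting the new one, and cancel on an unknown id is a silent no-op. A minimal TypeScript analogue (hypothetical `MiniWaitScheduler`; the real `WaitManager` keeps tokio `JoinHandle`s under a mutex):

```typescript
type TimerHandle = ReturnType<typeof setTimeout>;

// Hypothetical miniature of the WaitManager bookkeeping: one timer per
// download id; rescheduling aborts the old timer before storing the new one.
class MiniWaitScheduler {
  private timers = new Map<number, TimerHandle>();

  schedule(id: number, ms: number, onExpire: (id: number) => void): void {
    const prev = this.timers.get(id);
    if (prev !== undefined) clearTimeout(prev); // abort the stale timer
    const handle = setTimeout(() => {
      this.timers.delete(id); // natural expiry removes its own entry
      onExpire(id);
    }, ms);
    this.timers.set(id, handle);
  }

  // Returns false (silently) when the id has no pending timer.
  cancel(id: number): boolean {
    const handle = this.timers.get(id);
    if (handle === undefined) return false;
    clearTimeout(handle);
    this.timers.delete(id);
    return true;
  }

  activeCount(): number {
    return this.timers.size;
  }
}
```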
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (3)
✅ Files skipped from review due to trivial changes (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
Adds a WaitManager-driven hoster wait-time flow: new DomainEvents (DownloadWaitingStarted/Ended), per-download tokio timers with generation tokens, Tauri IPC to skip/cancel waits, Clock ms API, backend→frontend event bridge, frontend countdown hook/store/UI, and tests across backend and UI.
Changes
Hoster wait-time flow
Sequence Diagram

```mermaid
sequenceDiagram
    participant FE as Frontend
    participant Store as DownloadStore
    participant Hook as useDownloadEvents
    participant Tauri as Tauri IPC
    participant WM as WaitManager
    participant Repo as DownloadRepo
    participant Bus as EventBus
    FE->>Tauri: invoke download_skip_wait(id)
    Tauri->>WM: skip_wait(id)
    WM->>Repo: load aggregate(id) and persist resume
    WM->>Bus: publish DownloadWaitingEnded{expired_naturally:false}
    WM->>Bus: publish DownloadResumedFromWait
    WM-->>Tauri: Ok
    Bus->>Hook: emit download-waiting-ended
    Hook->>Store: clearWait(id)
    par natural expiry
        WM->>WM: sleep(until)
        WM->>Repo: load aggregate(id) and persist resume
        WM->>Bus: publish DownloadWaitingEnded{expired_naturally:true}
        WM->>Bus: publish DownloadResumedFromWait
        Bus->>Hook: emit download-waiting-ended
        Hook->>Store: clearWait(id)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ad6bb3c0c1
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src-tauri/src/adapters/driving/tauri_ipc.rs (1)
132-141: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Don't clear the wait lifecycle before cancel/remove actually succeeds.
`cancel_wait()` already aborts the timer and publishes `DownloadWaitingEnded` in src-tauri/src/adapters/driven/network/wait_manager.rs, Lines 123-128. If `handle_cancel_download(...)` or `handle_remove_download(...)` then fails, the download stays persisted in `Waiting` but its timer is gone, so it will never resume naturally and the frontend has already dropped the countdown state. Please split "abort timer" from "publish wait ended", or restore the wait on failure. This needs a regression test for a failing cancel/remove path.
Also applies to: 472-489
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src-tauri/src/adapters/driving/tauri_ipc.rs` around lines 132 - 141, The current download_cancel flow calls state.wait_manager.cancel_wait(DownloadId) which both aborts the timer and publishes DownloadWaitingEnded before calling state.command_bus.handle_cancel_download/handle_remove_download; if the command fails the download remains in Waiting but the timer and frontend countdown are lost. Fix by changing the logic in the download cancel/remove handlers (e.g., download_cancel and the similar block at the other location) to separate aborting the underlying timer from publishing the DownloadWaitingEnded event: either (a) call a method that only aborts the timer without emitting the event, then call handle_cancel_download/handle_remove_download and only publish DownloadWaitingEnded after the command succeeds; or (b) if cancel_wait already aborts and publishes, then on command failure restore the wait state (re-arm the timer and re-publish the waiting status). Update state.wait_manager to expose a non-publishing abort API or an undo/restore API as needed, adjust download_cancel to use it with CancelDownloadCommand and handle failures by restoring the wait, and add a regression test that simulates a failing cancel/remove path to ensure the wait timer and frontend countdown are preserved or restored on failure.
🧹 Nitpick comments (2)
src/hooks/__tests__/useDownloadEvents.test.ts (1)
100-103: ⚡ Quick win
Prefer explicit event-name assertions over a raw call count.
Line 102 is brittle: it can pass even if one expected event is missing and an unrelated one is added. Please assert the newly introduced names (`download-waiting-started`, `download-waiting-ended`) in `subscribedEvents` directly.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/hooks/__tests__/useDownloadEvents.test.ts` around lines 100 - 103, Replace the brittle call-count assertion in the test for useDownloadEvents: instead of expect(useTauriEvent).toHaveBeenCalledTimes(15), assert that the hook subscribed to the specific event names by checking that the subscribedEvents includes "download-waiting-started" and "download-waiting-ended" (and any other expected event names) via the useTauriEvent mock or the subscribedEvents array returned by useDownloadEvents; update the test around renderHook(() => useDownloadEvents()) to verify those explicit event names are registered (and optionally assert total length if you still want a count).
src-tauri/src/adapters/driven/event/tauri_bridge.rs (1)
45-46: ⚡ Quick win
Add bridge contract tests for the new waiting events.
These two mappings are now part of the frontend IPC contract, but this file’s tests don’t pin their kebab-case event names or camel-cased payload fields yet. A focused pair of assertions here would catch accidental renames/regressions early.
🧪 Example test shape
```diff
+ #[test]
+ fn test_waiting_event_bridge_mapping() {
+     let (name, payload) = to_tauri_event(&DomainEvent::DownloadWaitingStarted {
+         id: DownloadId(42),
+         until_unix_ms: 1_700_000_000_000,
+         total_seconds: 60,
+         reason: "hoster cooldown".into(),
+     });
+     assert_eq!(name, "download-waiting-started");
+     assert_eq!(payload["id"], 42);
+     assert_eq!(payload["untilUnixMs"], 1_700_000_000_000_u64);
+     assert_eq!(payload["totalSeconds"], 60);
+     assert_eq!(payload["reason"], "hoster cooldown");
+
+     let (name, payload) = to_tauri_event(&DomainEvent::DownloadWaitingEnded {
+         id: DownloadId(42),
+         expired_naturally: true,
+     });
+     assert_eq!(name, "download-waiting-ended");
+     assert_eq!(payload["id"], 42);
+     assert_eq!(payload["expiredNaturally"], true);
+ }
```

Also applies to: 252-273
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src-tauri/src/adapters/driven/event/tauri_bridge.rs` around lines 45 - 46, Add unit tests that assert the bridge mappings for DomainEvent::DownloadWaitingStarted and DomainEvent::DownloadWaitingEnded produce the exact kebab-case event names ("download-waiting-started" and "download-waiting-ended") and that their serialized payloads use the expected camelCase field names; update the existing bridge contract test module (the tests covering the event-to-string mappings around the same area as DomainEvent::DownloadStarted/Finished) to include two focused assertions: one that checks the mapping function returns the exact kebab-case event name for each of the symbols DownloadWaitingStarted and DownloadWaitingEnded, and one that serializes a representative payload and asserts the JSON keys match the expected camelCased field names so renames/regressions are caught.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src-tauri/src/adapters/driven/logging/download_log_bridge.rs`:
- Around line 54-64: The log message for DomainEvent::DownloadWaitingEnded uses
"skipped" when expired_naturally is false, which is misleading for cancel-driven
endings; update the suffix selection in the DomainEvent::DownloadWaitingEnded
branch to use a neutral phrase (e.g., "ended early") instead of "skipped" and
keep the existing store.push(id.0, format!("[INFO] Wait {suffix}")) call so the
change only affects the suffix variable used in that format!.
In `@src-tauri/src/adapters/driven/network/wait_manager.rs`:
- Around line 95-107: The code currently inserts a new JoinHandle into the
handles map with guard.insert(id, handle) without aborting any previously stored
handle; update the logic in the block that creates the timer (the tokio::spawn
closure that calls me.expire_wait) to check guard.remove/guard.get for an
existing JoinHandle for the same DownloadId and call .abort() on it before
inserting the new handle so the old timer cannot wake and race with the new
deadline; ensure you still insert the new handle into the handles map afterwards
and keep the existing error handling around me.expire_wait. Also add a
regression test for schedule_wait (or the public entrypoint that triggers this
timer) that schedules the same DownloadId twice in quick succession and asserts
only the latest timer's expiry action (e.g., DownloadWaitingEnded or the
aggregate state change observable via expire_wait) occurs.
In `@src/hooks/useCountdown.ts`:
- Around line 19-33: The effect in useCountdown creates an interval even when
untilUnixMs is already expired; update the useEffect (the effect that reads
untilUnixMs and calls setNow/setInterval) to check if Date.now() >= untilUnixMs
and return early instead of calling setInterval, so no timer is scheduled for
expired deadlines; keep the existing logic that still sets now initially via
setNow when appropriate and retain clearInterval(interval) in the cleanup for
active intervals.
---
Outside diff comments:
In `@src-tauri/src/adapters/driving/tauri_ipc.rs`:
- Around line 132-141: The current download_cancel flow calls
state.wait_manager.cancel_wait(DownloadId) which both aborts the timer and
publishes DownloadWaitingEnded before calling
state.command_bus.handle_cancel_download/handle_remove_download; if the command
fails the download remains in Waiting but the timer and frontend countdown are
lost. Fix by changing the logic in the download cancel/remove handlers (e.g.,
download_cancel and the similar block at the other location) to separate
aborting the underlying timer from publishing the DownloadWaitingEnded event:
either (a) call a method that only aborts the timer without emitting the event,
then call handle_cancel_download/handle_remove_download and only publish
DownloadWaitingEnded after the command succeeds; or (b) if cancel_wait already
aborts and publishes, then on command failure restore the wait state (re-arm the
timer and re-publish the waiting status). Update state.wait_manager to expose a
non-publishing abort API or an undo/restore API as needed, adjust
download_cancel to use it with CancelDownloadCommand and handle failures by
restoring the wait, and add a regression test that simulates a failing
cancel/remove path to ensure the wait timer and frontend countdown are preserved
or restored on failure.
---
Nitpick comments:
In `@src-tauri/src/adapters/driven/event/tauri_bridge.rs`:
- Around line 45-46: Add unit tests that assert the bridge mappings for
DomainEvent::DownloadWaitingStarted and DomainEvent::DownloadWaitingEnded
produce the exact kebab-case event names ("download-waiting-started" and
"download-waiting-ended") and that their serialized payloads use the expected
camelCase field names; update the existing bridge contract test module (the
tests covering the event-to-string mappings around the same area as
DomainEvent::DownloadStarted/Finished) to include two focused assertions: one
that checks the mapping function returns the exact kebab-case event name for
each of the symbols DownloadWaitingStarted and DownloadWaitingEnded, and one
that serializes a representative payload and asserts the JSON keys match the
expected camelCased field names so renames/regressions are caught.
In `@src/hooks/__tests__/useDownloadEvents.test.ts`:
- Around line 100-103: Replace the brittle call-count assertion in the test for
useDownloadEvents: instead of expect(useTauriEvent).toHaveBeenCalledTimes(15),
assert that the hook subscribed to the specific event names by checking that the
subscribedEvents includes "download-waiting-started" and
"download-waiting-ended" (and any other expected event names) via the
useTauriEvent mock or the subscribedEvents array returned by useDownloadEvents;
update the test around renderHook(() => useDownloadEvents()) to verify those
explicit event names are registered (and optionally assert total length if you
still want a count).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9f0fecc2-c771-4377-bcf6-2b02994fa084
⛔ Files ignored due to path filters (1)
`src-tauri/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (22)
* CHANGELOG.md
* src-tauri/Cargo.toml
* src-tauri/src/adapters/driven/event/tauri_bridge.rs
* src-tauri/src/adapters/driven/logging/download_log_bridge.rs
* src-tauri/src/adapters/driven/network/mod.rs
* src-tauri/src/adapters/driven/network/wait_manager.rs
* src-tauri/src/adapters/driven/scheduler/system_clock.rs
* src-tauri/src/adapters/driving/tauri_ipc.rs
* src-tauri/src/domain/event.rs
* src-tauri/src/domain/ports/driven/clock.rs
* src-tauri/src/lib.rs
* src/hooks/__tests__/useCountdown.test.ts
* src/hooks/__tests__/useDownloadEvents.test.ts
* src/hooks/__tests__/useDownloadProgress.test.ts
* src/hooks/useCountdown.ts
* src/hooks/useDownloadEvents.ts
* src/stores/__tests__/downloadStore.test.ts
* src/stores/downloadStore.ts
* src/types/events.ts
* src/views/DownloadsView/DownloadsTable.tsx
* src/views/DownloadsView/WaitCountdownCell.tsx
* src/views/DownloadsView/__tests__/WaitCountdownCell.test.tsx
3 issues found across 23 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/logging/download_log_bridge.rs">
<violation number="1" location="src-tauri/src/adapters/driven/logging/download_log_bridge.rs:61">
P3: `expired_naturally: false` includes both skip and cancel, but this branch logs it as "skipped" only. Use a neutral label so cancelled waits aren’t misreported.</violation>
</file>
<file name="src-tauri/src/adapters/driven/network/wait_manager.rs">
<violation number="1" location="src-tauri/src/adapters/driven/network/wait_manager.rs:147">
P1: `expire_wait` resumes even when its timer handle was already removed by cancel/skip, which can race with `abort()` and incorrectly resume a cancelled wait.</violation>
</file>
<file name="src-tauri/src/adapters/driving/tauri_ipc.rs">
<violation number="1" location="src-tauri/src/adapters/driving/tauri_ipc.rs:135">
P1: `download_cancel` cancels the wait timer before running a fallible cancel command, so on cancel failure the download can be left in `Waiting` with no timer to resume it.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Merging this PR will degrade performance by 23.7%
Performance Changes
- Abort previous JoinHandle when rescheduling same DownloadId
(insert() drops without cancelling — abort() is required).
- expire_wait short-circuits when the handle was already removed
by cancel/skip (abort() is cooperative; tasks past sleep().await
run to completion).
- Run CancelDownloadCommand before cancel_wait so a fallible cancel
no longer strands the download in Waiting with no timer.
- Log DownloadWaitingEnded { expired_naturally: false } as
"ended early" instead of "skipped" (covers cancel + skip).
- useCountdown skips setInterval when untilUnixMs is already past.
- Add regression test rescheduling_same_id_aborts_previous_timer
+ useCountdown past-deadline test.
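The `useCountdown` fix reduces to a guard evaluated before `setInterval` is ever scheduled; a sketch with a hypothetical `shouldStartTicker` helper (the real hook applies the equivalent check inside its effect):

```typescript
// Hypothetical extraction of the guard the effect applies before calling
// setInterval: no deadline, or an already-elapsed one, schedules nothing.
function shouldStartTicker(untilUnixMs: number | null, nowMs: number): boolean {
  if (untilUnixMs === null) return false; // null deadline: hook is a no-op
  return nowMs < untilUnixMs;            // past deadline: skip the interval
}
```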
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3999aa08ea
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
1 issue found across 6 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/network/wait_manager.rs">
<violation number="1" location="src-tauri/src/adapters/driven/network/wait_manager.rs:111">
P1: Reschedule-by-replace is still race-prone: a stale timer can remove the new handle and expire the wait early because `expire_wait` matches only by `DownloadId`.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Address two unresolved bot findings on PR #151:
- `download_remove` now runs the fallible remove command first and only cancels the wait timer on success, mirroring `download_cancel`. A failing remove no longer strands the download in `Waiting` with no active timer to expire it.
- `WaitManager` tags each scheduled timer with a generation token. A stale task that already passed its `.await` (so `abort()` is a no-op) is identified on its way into `expire_wait` by a mismatched generation and bails out, instead of evicting the fresh handle and resuming the aggregate against the elapsed deadline.
Adds a `stale_timer_does_not_evict_replaced_handle` regression test that reproduces the race deterministically by advancing past the first deadline before rescheduling.
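The generation-token fix can be sketched independently of tokio; a minimal TypeScript analogue (hypothetical `GenerationTokens` class; in the real code the check happens on the way into `expire_wait`):

```typescript
// Each (re)schedule for a download id bumps a counter; an expiring task only
// proceeds when the generation it captured at spawn time is still current.
class GenerationTokens {
  private current = new Map<number, number>();

  schedule(id: number): number {
    const gen = (this.current.get(id) ?? 0) + 1;
    this.current.set(id, gen);
    return gen; // the spawned timer captures this value
  }

  // Stale generations bail out instead of evicting the fresh handle.
  isCurrent(id: number, gen: number): boolean {
    return this.current.get(id) === gen;
  }
}
```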
Summary
Implements hoster wait-time scheduler with live countdown UI. Hosters like 1fichier and MediaFire enforce cooldown delays before allowing the next download. This PR adds a tokio-driven timer service, domain events for lifecycle tracking, and a React countdown component with skip/cancel controls. Downloads automatically resume after the wait expires, preventing queue stalls. Closes task 39 sprint requirement.
Why
Free hosters impose enforced delays (typically 15–60 min) to manage bandwidth. Without explicit UI feedback and automatic resume, users either stall the queue indefinitely or resort to manual intervention every wait cycle. This scheduler provides:
* `skip_wait` / `cancel_wait` commands that abort the timer and transition state correctly
Changes
* Backend wait scheduler (`src-tauri/src/adapters/driven/network/wait_manager.rs`, 526 lines): Tokio-driven timer service managing concurrent waits via a HashMap of JoinHandles. Implements `schedule_wait(id, total_seconds, reason)` (spawns task, publishes event), `skip_wait(id)` (aborts, transitions to Downloading), `cancel_wait(id)` (aborts without transition), `expire_wait(id)` (called by the spawned task at the deadline). Includes poison-tolerant Mutex access via a private `handles()` helper to prevent crashes if any timer task panics. Tests: 7 async tokio tests with `start_paused = true` covering schedule, natural expiry, cancel, skip, concurrent waits, unknown-id handling.
* Domain events (`src-tauri/src/domain/event.rs`): Added `DownloadWaitingStarted { id, until_unix_ms, total_seconds, reason }` and `DownloadWaitingEnded { id, expired_naturally }` with test cases for rich lifecycle tracking.
* Clock millisecond precision (`src-tauri/src/domain/ports/driven/clock.rs`, `src-tauri/src/adapters/driven/scheduler/system_clock.rs`): Added a `now_unix_ms()` method for millisecond-precision deadline computation (avoids rounding drift in the countdown UI).
* IPC event bridge (`src-tauri/src/adapters/driven/event/tauri_bridge.rs`): Mapped the new events to Tauri payloads (`download-waiting-started`, `download-waiting-ended`) with full typing and log integration.
* IPC command (`src-tauri/src/adapters/driving/tauri_ipc.rs`): Added the `download_skip_wait(id)` command. Modified `download_cancel` and `download_remove` to clean up wait timers first (prevents orphan handles).
* App initialization (`src-tauri/src/lib.rs`): Wired `WaitManager` into `AppState`. Exported the public API. Registered the IPC handler. Bumped tokio to 1.52.2 for the `test-util` feature.
* Countdown hook (`src/hooks/useCountdown.ts`, 54 lines): React hook consuming an absolute `untilUnixMs` deadline, computing remaining seconds, formatting `MM:SS` / `HH:MM:SS` labels, detecting expiry. Early-exits `setInterval` once the deadline is hit to prevent 1 Hz no-op re-renders. Tests: 6 cases covering initial, ticking, zero-clamp, null, hour format, deadline change.
* Download events hook (`src/hooks/useDownloadEvents.ts`): Added `download-waiting-started` and `download-waiting-ended` listeners syncing wait tickets to the Zustand store (no redundant query invalidation; the accompanying `download-waiting` / `download-resumed-from-wait` events already invalidate).
* Store wait tickets (`src/stores/downloadStore.ts`): Added `WaitTicket { untilUnixMs, totalSeconds, reason }` and map `setWait(id, ticket)` / `clearWait(id)` with an early-return pattern to suppress unnecessary notifies. Tests: 5 cases for set, overwrite, clear, unknown-id no-op, concurrent.
* Event types (`src/types/events.ts`): Added TypeScript payload types `DownloadWaitingStartedPayload` and `DownloadWaitingEndedPayload`.
* Countdown cell (`src/views/DownloadsView/WaitCountdownCell.tsx`, 70 lines): New component rendering the live countdown label plus a skip button (fallback "Waiting…" pre-event). Mutation via `useTauriMutation`. Tests: 3 cases covering fallback, countdown display, skip invoke.
* Table integration (`src/views/DownloadsView/DownloadsTable.tsx`): Conditional cell rendering: `WaitCountdownCell` when state is Waiting, else `EtaCell`.
* CHANGELOG.md: Added two entries (feat + refactor) with version, date, description per Keep a Changelog format.
* Dependency: Added `tokio = { version = "1.51.0", features = ["test-util"] }` (dev). Updated via `cargo update -p tokio` → 1.52.2.
Testing
All tests passing locally:
`cargo test --workspace` → 1449 tests pass
`npx vitest run` → 687 tests pass
Specific test coverage:
* `WaitManager`: 7 async tests with paused-time scheduling, including spawn-before-insert race fix validation (no orphan handles)
* `useCountdown`: 6 tests validating time progression, formatting, deadline expiry
* `WaitCountdownCell`: 3 component tests (render fallback, countdown ticking, skip mutation)
* `downloadStore`: 5 store tests (set/clear/concurrent operations)
* `useDownloadEvents`: updated event listener count (13 → 15 with wait events)
* `downloadProgress` mock state: updated to include `waitMap`
Run locally with `cargo test --workspace` and `npx vitest run`.
Related Issues
Implements task 39 sprint requirement. Prerequisite for task 38 (the 1fichier hoster plugin, which calls `request_wait` during rate-limiting).
Notes for Reviewer
Review strategy (1100+ lines, 23 files):
* `domain/event.rs` (new events), `domain/ports/driven/clock.rs` (precision)
* `wait_manager.rs`: pay special attention to poison mutex recovery (the `handles()` helper) and spawn-before-insert lock guard ordering (prevents a race where an expired timer removes a non-existent entry)
* `lib.rs`, `tauri_ipc.rs` (commands/cleanup), `tauri_bridge.rs` (event mapping)
* `useCountdown.ts` (time logic), `downloadStore.ts` (store shape), `WaitCountdownCell.tsx` (UI)
* `pump_runtime()` helper for paused-time tokio
Key patterns:
* `wait_manager::handles()` uses `unwrap_or_else(PoisonError::into_inner)` to recover gracefully if a timer task panics, preventing cascading crashes in `cancel_wait` / `skip_wait`. This is defensive programming for production robustness.
* `tokio::spawn` is called within the same mutex guard as `insert`, ensuring the JoinHandle exists before the spawned task can run and attempt to remove itself.
* The frontend receives an absolute `until_unix_ms` from the backend (computed at schedule time), not a relative duration. This eliminates clock-drift jitter in the countdown display and matches backend timer expiry exactly.
Follow-up tasks (not in this PR):
* Task 38: 1fichier hoster plugin (`request_wait` API)
* `cancel_wait` on paused downloads (currently only `skip_wait` from the countdown cell)
Checklist
* No `.unwrap()` in production code (all use `?` or error handling)
Summary by cubic
Adds a hoster wait-time flow with a backend scheduler and a live countdown UI. Closes the reschedule race with generation tokens and fixes cancel/remove ordering; downloads auto-resume at expiry and support skip/cancel (Linear task 39).
New Features
* `WaitManager` schedules per-download waits, emits `DownloadWaitingStarted` / `DownloadWaitingEnded` with `untilUnixMs`, and resumes on expiry.
* New IPC: `download_skip_wait`; `download_cancel` runs the cancel command first, then cancels the wait; `download_remove` runs remove first and only cancels the wait on success.
* Events `download-waiting-started` / `download-waiting-ended` power exact countdowns.
* Frontend: `useCountdown`, `waitMap` in the store, and `WaitCountdownCell` with a Skip button.
* Clock gains `now_unix_ms()`.
Bug Fixes
* `expire_wait` short-circuits if a peer cancel/skip already removed the handle; timers spawn under the same lock and tolerate poisoned mutexes.
* `download_remove` no longer strands a Waiting download on failure; non-natural wait endings are logged as "ended early".
* `useCountdown` skips scheduling for past deadlines and stops intervals at expiry.
Written for commit af86bf9. Summary will update on new commits.