diff --git a/CHANGELOG.md b/CHANGELOG.md index 3c1dd18..98ccd54 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added +- **Packages persistence** (PRD §6.3, PRD-v2 §P1.7, task 26): SQLite `packages` table (migration `m20260429_000007`) with the schema mandated by PRD-v2 §8 P1 — `id TEXT PRIMARY KEY`, `name`, `source_type` (`container` / `playlist` / `manual` / `split_archive`), nullable `folder_path`, nullable `password` (keyring ref), `auto_extract` (default `1`), `priority` (default `5`), `created_at`. The legacy stub `packages` table from migration 1 (BIGINT id, name only, never wired) is dropped and recreated. The migration also adds `downloads.package_id TEXT REFERENCES packages(id) ON DELETE SET NULL` plus the `idx_downloads_package` index, so deleting a package detaches its members without losing the rows. New `PackageRepository` driven port (`save` / `find_by_id` / `list` / `delete` / `list_downloads`) and `SqlitePackageRepo` adapter with sea-orm entity + `from_domain` / `into_domain` converters. Upserts preserve the original `created_at` so list ordering stays stable across re-saves; `list` orders by `(created_at asc, id asc)`; `list_downloads` orders by `queue_position asc, id asc` so the caller surfaces members in scheduling order. Domain `Package` aggregate gained the new persisted fields plus a `PackageId(String)` typed wrapper and a `PackageSourceType` enum (round-trips via `Display` / `FromStr`); `download_ids` stays in-memory (the FK on `downloads.package_id` is the source of truth on disk). `DomainEvent::PackageCreated.id` switches from `u64` to `PackageId` to match. 
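For orientation, the `Display` / `FromStr` round-trip that entry relies on can be sketched as follows. The variant names and serialized strings come from the entry above; the implementation itself is an illustrative stand-in, not the project's code:

```rust
use std::fmt;
use std::str::FromStr;

// Illustrative sketch of the `PackageSourceType` round-trip contract.
// The serialized strings mirror the migration's documented values
// ("container" / "playlist" / "manual" / "split_archive").
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum PackageSourceType {
    Container,
    Playlist,
    Manual,
    SplitArchive,
}

impl fmt::Display for PackageSourceType {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(match self {
            Self::Container => "container",
            Self::Playlist => "playlist",
            Self::Manual => "manual",
            Self::SplitArchive => "split_archive",
        })
    }
}

impl FromStr for PackageSourceType {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "container" => Ok(Self::Container),
            "playlist" => Ok(Self::Playlist),
            "manual" => Ok(Self::Manual),
            "split_archive" => Ok(Self::SplitArchive),
            // Unknown persisted values are rejected, never coerced.
            other => Err(format!("unknown source_type: {other}")),
        }
    }
}

fn main() {
    // Every variant must survive Display -> FromStr unchanged.
    for v in [
        PackageSourceType::Container,
        PackageSourceType::Playlist,
        PackageSourceType::Manual,
        PackageSourceType::SplitArchive,
    ] {
        assert_eq!(v.to_string().parse::<PackageSourceType>(), Ok(v));
    }
    assert!("torrent".parse::<PackageSourceType>().is_err());
}
```

This is the property the per-variant round-trip tests mentioned below exercise against the real SQLite column.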
Twenty-one new unit tests cover the four acceptance criteria (fresh + existing-DB migration, FK `ON DELETE SET NULL` semantics, full-field round-trip, ≥85 % adapter coverage), plus error paths (unknown `source_type`, priority overflow, `created_at` overflow), source-type round-trip per variant, optional fields persisting as `NULL`, `list_downloads` filtering and ordering, and the `InMemoryPackageRepository` mock used by future command / query handlers. Unblocks tasks 27 (Commands Packages), 28 (Queries Packages), 30 (auto-grouping playlist) and 31 (auto-grouping split archives). - **Account rotation on quota** (PRD §6.4, PRD-v2 §P1.6, task 25): new `AccountRotator` application service detects quota exhaustion (HTTP `429` or `traffic_left` below a caller-supplied threshold via `is_quota_signal`), pulls the offending account out of rotation for a hoster-specific cooldown via `mark_exhausted(account_id, service_name, ttl_secs)`, and asks the existing `AccountSelector` for the next best candidate via `next_account(service, strategy) -> NextAccountOutcome`. The outcome enum distinguishes three caller-actionable states: `Picked(Account)` (use the credential), `NoneAvailable` (no enabled / non-expired account configured — fall back to the free path or surface a UI hint), and `AllExhausted { next_eligible_at_ms }` (every eligible account is on cooldown — stall the download in `Waiting` until the earliest deadline so the scheduler can retry without busy-looping). `NextAccountOutcome::error_message(service_name)` returns the PRD §6.4 standard wording (`"All accounts exhausted for {service}"` / `"No account available for {service}"`) so callers attaching the error to `Download.error` stay uniform across hosters. 
Cooldown lifecycle: `record_traffic_refresh(account_id, traffic_left, threshold)` clears the marker only when the upstream confirms `traffic_left >= threshold` (a `None` observation or below-threshold value leaves the marker in place so a hoster without a traffic counter cannot silently undo every `mark_exhausted`); `clear_exhausted(account_id)` is the explicit reset path, idempotent for unknown ids; expired entries are pruned lazily on the next `next_account` call so no background sweeper is needed. The exhaustion map sits behind a `std::sync::Mutex` in `AccountRotator` (intentionally NOT persisted in SQLite — a process restart wipes the cooldown, which is the desired behaviour for the 5-to-15-minute hoster reset window); a poisoned mutex surfaces as `AppError::Validation("exhausted accounts mutex poisoned")` so callers can distinguish "no candidate" from "internal state corrupted", matching `AccountSelector::pick_round_robin`'s contract. The `AllExhausted` deadline restricts its scan to accounts that actually belong to the queried service so a parallel-service entry cannot leak its cooldown into an unrelated answer. New `AccountSelector::select_best_excluding(service, strategy, exclude_ids)` extends the existing `select_best` with an exclude list (no caching, no behaviour change for empty `exclude`); the prior signature is now a thin wrapper. New `DomainEvent::AccountExhausted { id, service_name, exhausted_until_ms }` forwarded by the Tauri bridge as `account-exhausted` (camelCase `exhaustedUntilMs`). New transient `Account::exhausted_until: Option<u64>` field with `mark_exhausted` / `clear_exhausted` / `is_exhausted(now_ms)` / `exhausted_until()` methods — the field is reset to `None` by `Account::reconstruct` so the rotator's in-memory map remains the single source of truth even though SQLite roundtrips drop the marker. New `CommandBus::with_account_rotator` / `account_rotator()` builder & accessor wires the rotator alongside the existing `AccountSelector`.
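The three caller-actionable states and the standard error wording described above can be sketched as follows. The `Account` shape here is a deliberately minimal stand-in for illustration; the message strings are the PRD §6.4 wording quoted above:

```rust
// Minimal stand-in for the real `Account` aggregate (illustration only).
#[derive(Debug, Clone, PartialEq)]
pub struct Account {
    pub id: u64,
}

// Sketch of the `NextAccountOutcome` contract described in the changelog.
#[derive(Debug, Clone, PartialEq)]
pub enum NextAccountOutcome {
    /// Use this credential for the next request.
    Picked(Account),
    /// No enabled / non-expired account configured: fall back to the free path.
    NoneAvailable,
    /// Every eligible account is on cooldown; stall in `Waiting` until this deadline.
    AllExhausted { next_eligible_at_ms: u64 },
}

impl NextAccountOutcome {
    /// Standard wording callers attach to `Download.error`; `None` when a pick succeeded.
    pub fn error_message(&self, service_name: &str) -> Option<String> {
        match self {
            Self::Picked(_) => None,
            Self::NoneAvailable => Some(format!("No account available for {service_name}")),
            Self::AllExhausted { .. } => {
                Some(format!("All accounts exhausted for {service_name}"))
            }
        }
    }
}

fn main() {
    let stalled = NextAccountOutcome::AllExhausted { next_eligible_at_ms: 1_700_000_900_000 };
    assert_eq!(
        stalled.error_message("1fichier").as_deref(),
        Some("All accounts exhausted for 1fichier")
    );
    assert_eq!(
        NextAccountOutcome::Picked(Account { id: 1 }).error_message("1fichier"),
        None
    );
}
```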
Twenty-two new unit tests cover the four acceptance criteria (`429 → next account`, `all exhausted → AllExhausted with earliest deadline`, `traffic-refresh clears cooldown when above threshold`, full rotator + selector-exclude integration), plus edge cases: zero-TTL no-op, deadline-exclusive equality, cross-service deadline isolation, `None`-traffic refresh keeps cooldown, `404` / `500` ignored by `is_quota_signal`, threshold-equality below-but-not-above, idempotent `clear_exhausted`, lazy cooldown expiry surfaces an account back into rotation. Unblocks task 38 (vortex-mod-1fichier free + premium) which is the first hoster to wire the rotation flow. - **Account auto-selection** (PRD §6.4, PRD-v2 §P1.5, task 24): new `AccountSelector` application service picks the best `Account` per service for the live `AppConfig::account_selection_strategy`. Three strategies: `BestTraffic` (default, ranks `enabled → not expired → most traffic_left → most recent last_validated → smallest id` with `Unlimited` traffic ranking above any finite value), `RoundRobin` (per-service cursor over enabled non-expired candidates ordered by id; a poisoned cursor mutex now surfaces as `AppError::Validation("round-robin cursor mutex poisoned")` so it stays distinguishable from "no eligible account"), and `Manual` (fallback alias of `BestTraffic` until pinning UI lands). The selector reads `AccountRepository::list_by_service` on every call instead of caching: the previous event-driven invalidation could read stale rows when `select_best` landed between `bus.publish(AccountUpdated)` and the spawned `TokioEventBus` subscriber firing. New `CommandBus::resolve_account_for(service_name)` exposes the selector to download / link-grabber flows; failures from `ConfigStore::get_config()` propagate via `?` instead of being swallowed by a default-strategy fallback. 
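The `BestTraffic` ranking described above can be sketched as a filter-then-sort over candidate accounts. Field names here are assumptions for illustration; the real selector works on full `Account` aggregates read from the repository:

```rust
use std::cmp::Ordering;

// Illustrative traffic counter: `Unlimited` ranks above any finite value.
#[derive(Debug, Clone, PartialEq)]
pub enum Traffic {
    Unlimited,
    Bytes(u64),
}

// Minimal candidate shape (field names assumed for this sketch).
#[derive(Debug, Clone)]
pub struct Candidate {
    pub id: u64,
    pub enabled: bool,
    pub expired: bool,
    pub traffic_left: Traffic,
    pub last_validated_ms: u64,
}

/// Best candidate per the BestTraffic ordering described above, or `None`
/// when nothing is enabled and non-expired.
pub fn select_best(mut candidates: Vec<Candidate>) -> Option<Candidate> {
    candidates.retain(|c| c.enabled && !c.expired);
    candidates.sort_by(|a, b| {
        // Unlimited beats finite; otherwise more bytes left wins, then the
        // most recent validation, then the smallest id as the tiebreaker.
        let traffic = match (&a.traffic_left, &b.traffic_left) {
            (Traffic::Unlimited, Traffic::Unlimited) => Ordering::Equal,
            (Traffic::Unlimited, Traffic::Bytes(_)) => Ordering::Less,
            (Traffic::Bytes(_), Traffic::Unlimited) => Ordering::Greater,
            (Traffic::Bytes(x), Traffic::Bytes(y)) => y.cmp(x),
        };
        traffic
            .then(b.last_validated_ms.cmp(&a.last_validated_ms))
            .then(a.id.cmp(&b.id))
    });
    candidates.into_iter().next()
}

fn main() {
    let best = select_best(vec![
        Candidate { id: 1, enabled: true, expired: false, traffic_left: Traffic::Bytes(10), last_validated_ms: 5 },
        Candidate { id: 2, enabled: true, expired: false, traffic_left: Traffic::Unlimited, last_validated_ms: 1 },
        Candidate { id: 3, enabled: false, expired: false, traffic_left: Traffic::Unlimited, last_validated_ms: 9 },
    ]);
    // Unlimited beats finite; the disabled account is filtered out entirely.
    assert_eq!(best.map(|c| c.id), Some(2));
}
```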
New `DomainEvent::NoAccountAvailable { service_name }` (emitted when no candidate passes the filter) and `DomainEvent::AccountSelected { id, service_name, strategy }` (emitted whenever a pick is made), both forwarded by the Tauri bridge as `no-account-available` / `account-selected`. New `account_selection_strategy` field on `AppConfig` / `ConfigPatch` / `apply_patch` plus the matching IPC and TOML serialisation paths (snake_case `"best_traffic" | "round_robin" | "manual"`). The IPC layer rejects unknown strategy values: `ConfigPatchDto` → `ConfigPatch` is `TryFrom` and `settings_update` surfaces `invalid account selection strategy: …` instead of silently ignoring a typo. The TOML store mirrors the rule: `ConfigDto` → `AppConfig` is also `TryFrom`, so a hand-edited `config.toml` carrying an unknown strategy value now fails fast with `StorageError("invalid config: …")` instead of silently coercing to `best_traffic`. Backward compat is preserved: a legacy `config.toml` written before this field existed deserializes the missing key as the empty string via `#[serde(default)]`, and that empty case is treated as `BestTraffic` so an upgrade does not break startup. Eighteen unit tests cover the four acceptance criteria (3-account scenario, all-expired surface, comparative ranking table, round-robin alternation), repo-fresh selection, poisoned-cursor surfacing, IPC rejection of unknown strategies, TOML-store rejection of unknown persisted strategies, legacy-config (missing strategy field) backward compat, and config-error propagation. Unblocks task 25 (auto-rotation on quota). - **Accounts view** (PRD §6.4, PRD-v2 §P1.4, task 23): full Accounts management UI replacing the previous `PlaceholderView`. Header tabs (`All` / `Debrid` / `Premium` / `Free`) drive a category filter on top of the SQLite-backed `account_list` query, with the `(filter, all)` count rendered next to each label.
Each row exposes the service, username, account type, derived status badge (`Active` / `Expired` / `Disabled` / `Unverified`), an aria-labelled traffic progress bar (used / total formatted via `formatBytes`), `valid_until` and `last_validated` columns, an enable/disable `Switch`, an inline `Validate` button, and a kebab menu with `Edit` / `Delete`. The new `AddAccountDialog` validates non-empty service / username / password before submission. `EditAccountDialog` posts a partial `AccountPatch` (skips fields that did not change so the keyring rotation only fires when the password field is filled). The `Delete` action honours the existing `settings.confirm_delete` toggle: when enabled it pops the new `DeleteAccountDialog` (translated description naming the row), otherwise it deletes immediately. `ImportAccountsDialog` calls `tauri-plugin-dialog`'s file-pick to anchor the encrypted bundle path, prompts for the passphrase, then calls `account_import` and invalidates the list cache so freshly-imported rows appear without a manual refresh; `ExportAccountsDialog` requires the user to confirm the passphrase, opens the native `save` dialog for the destination, and reports the row count via toast. Nine new Tauri IPC commands wire the existing `CommandBus` / `QueryBus` handlers (tasks 21, 22) to the frontend: `account_add`, `account_update`, `account_delete`, `account_validate`, `account_export`, `account_import`, `account_list`, `account_get`, `account_traffic_get`, all registered in `invoke_handler!` and re-exported from `lib.rs`. The runtime now wires `SqliteAccountRepo` to both buses and provides the `KeyringAccountStore` + `AesGcmPbkdf2Codec` adapters to the `CommandBus`. Adds `useAccountsQuery` (TanStack Query, 30 s `staleTime`) and `accountQueries` cache key factory. New i18n namespace `accounts.*` covers titles, status badges, dialog copy and toast messages in `en.json` + `fr.json`. 
13 Vitest tests cover render, empty state, category filter, add → IPC → toast flow, delete → confirm → IPC, export trigger disabled when no accounts, export with passphrase, import with file picker. `AccountValidator` is intentionally not wired in this commit — `account_validate` returns the configured `Validation` error until the first hoster plugin lands (task 38), letting the UI render the failure toast without crashing. The "volume per account" stat from the requirements list is deferred until `history` gains an `account_id` column. diff --git a/src-tauri/src/adapters/driven/event/tauri_bridge.rs b/src-tauri/src/adapters/driven/event/tauri_bridge.rs index 40ffc1c..d3dc3d1 100644 --- a/src-tauri/src/adapters/driven/event/tauri_bridge.rs +++ b/src-tauri/src/adapters/driven/event/tauri_bridge.rs @@ -389,13 +389,14 @@ mod tests { event_name(&DomainEvent::PluginUnloaded { name: "p".into() }), "plugin-unloaded" ); - assert_eq!( - event_name(&DomainEvent::PackageCreated { - id: 1, - name: "pkg".into() - }), - "package-created" - ); + let evt = DomainEvent::PackageCreated { + id: crate::domain::model::package::PackageId::new("pkg-1"), + name: "pkg".into(), + }; + assert_eq!(event_name(&evt), "package-created"); + let (_, payload) = to_tauri_event(&evt); + assert_eq!(payload["id"], "pkg-1"); + assert_eq!(payload["name"], "pkg"); } #[test] diff --git a/src-tauri/src/adapters/driven/sqlite/connection.rs b/src-tauri/src/adapters/driven/sqlite/connection.rs index 9707693..c845c96 100644 --- a/src-tauri/src/adapters/driven/sqlite/connection.rs +++ b/src-tauri/src/adapters/driven/sqlite/connection.rs @@ -189,6 +189,110 @@ mod tests { assert!(other.is_ok(), "different service must be allowed"); } + #[tokio::test] + async fn test_packages_migration_applies_cleanly_on_existing_db() { + // Stand up a DB at the schema state immediately before the + // packages migration (6 migrations applied), seed prior tables, + // then run the remaining migrations and verify the new schema + // 
exists and existing data is preserved. + let sqlite_opts = sea_orm::sqlx::sqlite::SqliteConnectOptions::from_str("sqlite::memory:") + .unwrap() + .pragma("foreign_keys", "ON"); + let pool = sea_orm::sqlx::sqlite::SqlitePoolOptions::new() + .max_connections(1) + .connect_with(sqlite_opts) + .await + .unwrap(); + let db = sea_orm::SqlxSqliteConnector::from_sqlx_sqlite_pool(pool); + + Migrator::up(&db, Some(6)) + .await + .expect("first 6 migrations"); + + // Seed a download row that must survive the migration. + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "INSERT INTO downloads (id, url, file_name, state, priority, queue_position, downloaded_bytes, speed_bytes_per_sec, retry_count, max_retries, segments_count, source_hostname, protocol, resume_supported, destination_path, created_at, updated_at) VALUES (1, 'https://example.com/f.zip', 'f.zip', 'Queued', 5, 0, 0, 0, 0, 5, 1, 'example.com', 'https', 0, '/tmp', 1, 1)" + .to_string(), + )) + .await + .expect("seed download"); + + Migrator::up(&db, None).await.expect("remaining migrations"); + + // packages table replaced with the new schema. + let cols = db + .query_all(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "PRAGMA table_info(packages)".to_string(), + )) + .await + .unwrap(); + let names: Vec<String> = cols + .iter() + .map(|r| r.try_get_by_index::<String>(1).unwrap()) + .collect(); + for required in [ + "id", + "name", + "source_type", + "folder_path", + "password", + "auto_extract", + "priority", + "created_at", + ] { + assert!( + names.iter().any(|n| n == required), + "packages must have column '{required}', got: {names:?}" + ); + } + + // downloads gained the package_id FK column and its index.
+ let dl_cols = db + .query_all(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "PRAGMA table_info(downloads)".to_string(), + )) + .await + .unwrap(); + let dl_names: Vec<String> = dl_cols + .iter() + .map(|r| r.try_get_by_index::<String>(1).unwrap()) + .collect(); + assert!( + dl_names.iter().any(|n| n == "package_id"), + "downloads must expose 'package_id', got: {dl_names:?}" + ); + + let indexes = db + .query_all(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='downloads'" + .to_string(), + )) + .await + .unwrap(); + let idx_names: Vec<String> = indexes + .iter() + .map(|r| r.try_get_by_index::<String>(0).unwrap()) + .collect(); + assert!( + idx_names.iter().any(|n| n == "idx_downloads_package"), + "expected idx_downloads_package, got: {idx_names:?}" + ); + + // Existing data preserved. + let downloads = db + .query_all(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "SELECT id FROM downloads".to_string(), + )) + .await + .unwrap(); + assert_eq!(downloads.len(), 1, "existing download row preserved"); + } + #[tokio::test] async fn test_wal_mode_enabled() { let test_id = std::process::id(); diff --git a/src-tauri/src/adapters/driven/sqlite/entities/mod.rs b/src-tauri/src/adapters/driven/sqlite/entities/mod.rs index 052212f..a25a460 100644 --- a/src-tauri/src/adapters/driven/sqlite/entities/mod.rs +++ b/src-tauri/src/adapters/driven/sqlite/entities/mod.rs @@ -2,4 +2,5 @@ pub mod account; pub mod download; pub mod download_segment; pub mod history; +pub mod package; pub mod plugin_config; diff --git a/src-tauri/src/adapters/driven/sqlite/entities/package.rs b/src-tauri/src/adapters/driven/sqlite/entities/package.rs new file mode 100644 index 0000000..d41d7a4 --- /dev/null +++ b/src-tauri/src/adapters/driven/sqlite/entities/package.rs @@ -0,0 +1,83 @@ +use sea_orm::entity::prelude::*; + +use crate::domain::error::DomainError; +use crate::domain::model::package::{Package, PackageId,
PackageSourceType}; + +#[derive(Clone, Debug, PartialEq, DeriveEntityModel)] +#[sea_orm(table_name = "packages")] +pub struct Model { + #[sea_orm(primary_key, auto_increment = false)] + pub id: String, + pub name: String, + pub source_type: String, + pub folder_path: Option<String>, + pub password: Option<String>, + pub auto_extract: i32, + pub priority: i32, + pub created_at: i64, +} + +#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)] +pub enum Relation {} + +impl ActiveModelBehavior for ActiveModel {} + +impl Model { + pub fn into_domain(self) -> Result<Package, DomainError> { + let source_type: PackageSourceType = self.source_type.parse()?; + let auto_extract = match self.auto_extract { + 0 => false, + 1 => true, + other => { + return Err(DomainError::ValidationError(format!( + "package {}: auto_extract {other} out of bool range", + self.id + ))); + } + }; + let priority = u8::try_from(self.priority).map_err(|_| { + DomainError::ValidationError(format!( + "package {}: priority {} out of u8 range", + self.id, self.priority + )) + })?; + let created_at = u64::try_from(self.created_at).map_err(|_| { + DomainError::ValidationError(format!( + "package {}: created_at {} out of u64 range", + self.id, self.created_at + )) + })?; + Package::reconstruct( + PackageId::new(self.id), + self.name, + source_type, + self.folder_path, + self.password, + auto_extract, + priority, + created_at, + ) + } +} + +impl ActiveModel { + pub fn from_domain(package: &Package) -> Result<Self, DomainError> { + use sea_orm::ActiveValue::Set; + + let id_str = package.id().as_str().to_string(); + let created_at = i64::try_from(package.created_at()).map_err(|_| { + DomainError::ValidationError(format!("package {id_str}: created_at exceeds i64::MAX")) + })?; + + Ok(Self { + id: Set(id_str), + name: Set(package.name().to_string()), + source_type: Set(package.source_type().to_string()), + folder_path: Set(package.folder_path().map(str::to_string)), + password: Set(package.password().map(str::to_string)), + auto_extract: Set(if
package.auto_extract() { 1 } else { 0 }), + priority: Set(i32::from(package.priority())), + created_at: Set(created_at), + }) + } +} diff --git a/src-tauri/src/adapters/driven/sqlite/migrations/m20260429_000007_create_packages.rs b/src-tauri/src/adapters/driven/sqlite/migrations/m20260429_000007_create_packages.rs new file mode 100644 index 0000000..5aecf38 --- /dev/null +++ b/src-tauri/src/adapters/driven/sqlite/migrations/m20260429_000007_create_packages.rs @@ -0,0 +1,141 @@ +//! Recreate the `packages` table with the schema mandated by PRD-v2 §8 P1 +//! and add the `downloads.package_id` foreign key column. +//! +//! The legacy `packages` table from migration 1 (BIGINT id, name only) was +//! never wired to any repository or query — it is dropped here without +//! data preservation. Going forward the package id is `TEXT` (caller-chosen +//! string, typically a UUID or slug) and the row carries the persistence +//! fields the future Package CRUD relies on (`source_type`, `folder_path`, +//! `password`, `auto_extract`, `priority`, `created_at`). +//! +//! `downloads.package_id` is added as a nullable `TEXT` foreign key with +//! `ON DELETE SET NULL` semantics: deleting a package detaches its members +//! but keeps every individual download row intact. We use raw `ALTER TABLE` +//! for that column because SQLite's column-level FK syntax is not exposed +//! through sea-orm's column builder. + +use sea_orm::{ConnectionTrait, Statement}; +use sea_orm_migration::prelude::*; + +#[derive(DeriveMigrationName)] +pub struct Migration; + +#[async_trait::async_trait] +impl MigrationTrait for Migration { + async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> { + // Drop the legacy stub table created by the very first migration. 
+ manager + .drop_table(Table::drop().table(Packages::Table).if_exists().to_owned()) + .await?; + + manager + .create_table( + Table::create() + .table(Packages::Table) + .if_not_exists() + .col(ColumnDef::new(Packages::Id).text().not_null().primary_key()) + .col(ColumnDef::new(Packages::Name).text().not_null()) + .col(ColumnDef::new(Packages::SourceType).text().not_null()) + .col(ColumnDef::new(Packages::FolderPath).text().null()) + .col(ColumnDef::new(Packages::Password).text().null()) + .col( + ColumnDef::new(Packages::AutoExtract) + .integer() + .not_null() + .default(1), + ) + .col( + ColumnDef::new(Packages::Priority) + .integer() + .not_null() + .default(5), + ) + .col(ColumnDef::new(Packages::CreatedAt).big_integer().not_null()) + .to_owned(), + ) + .await?; + + // SQLite supports adding a column with a column-constraint FK in a + // single `ALTER TABLE` statement; sea-orm's `add_column` builder does + // not expose `REFERENCES ... ON DELETE`, so issue raw SQL instead. + let conn = manager.get_connection(); + conn.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "ALTER TABLE downloads ADD COLUMN package_id TEXT REFERENCES packages(id) ON DELETE SET NULL" + .to_string(), + )) + .await?; + + manager + .create_index( + Index::create() + .name("idx_downloads_package") + .table(Downloads::Table) + .col(Downloads::PackageId) + .to_owned(), + ) + .await + } + + async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> { + manager + .drop_index( + Index::drop() + .name("idx_downloads_package") + .table(Downloads::Table) + .to_owned(), + ) + .await?; + + manager + .alter_table( + Table::alter() + .table(Downloads::Table) + .drop_column(Downloads::PackageId) + .to_owned(), + ) + .await?; + + manager + .drop_table(Table::drop().table(Packages::Table).to_owned()) + .await?; + + // Restore the legacy schema from migration 1 so rolling back + // leaves migration state 6 with the same shape it had before. 
+ manager + .create_table( + Table::create() + .table(Packages::Table) + .if_not_exists() + .col( + ColumnDef::new(Packages::Id) + .big_integer() + .not_null() + .primary_key(), + ) + .col(ColumnDef::new(Packages::Name).string().not_null()) + .col(ColumnDef::new(Packages::CreatedAt).big_integer().not_null()) + .to_owned(), + ) + .await + } +} + +#[derive(DeriveIden)] +enum Packages { + Table, + Id, + Name, + SourceType, + FolderPath, + Password, + AutoExtract, + Priority, + CreatedAt, +} + +#[derive(DeriveIden)] +enum Downloads { + Table, + PackageId, +} diff --git a/src-tauri/src/adapters/driven/sqlite/migrations/mod.rs b/src-tauri/src/adapters/driven/sqlite/migrations/mod.rs index d43145d..dd0425a 100644 --- a/src-tauri/src/adapters/driven/sqlite/migrations/mod.rs +++ b/src-tauri/src/adapters/driven/sqlite/migrations/mod.rs @@ -6,6 +6,7 @@ mod m20260424_000003_add_checksum_columns; mod m20260425_000004_add_queue_position; mod m20260425_000005_create_plugin_configs; mod m20260428_000006_create_accounts; +mod m20260429_000007_create_packages; pub struct Migrator; @@ -19,6 +20,7 @@ impl MigratorTrait for Migrator { Box::new(m20260425_000004_add_queue_position::Migration), Box::new(m20260425_000005_create_plugin_configs::Migration), Box::new(m20260428_000006_create_accounts::Migration), + Box::new(m20260429_000007_create_packages::Migration), ] } } diff --git a/src-tauri/src/adapters/driven/sqlite/mod.rs b/src-tauri/src/adapters/driven/sqlite/mod.rs index f45c18a..5309309 100644 --- a/src-tauri/src/adapters/driven/sqlite/mod.rs +++ b/src-tauri/src/adapters/driven/sqlite/mod.rs @@ -5,6 +5,7 @@ pub mod download_repo; pub mod entities; pub mod history_repo; pub mod migrations; +pub mod package_repo; pub mod plugin_config_repo; pub mod progress_bridge; pub mod stats_repo; diff --git a/src-tauri/src/adapters/driven/sqlite/package_repo.rs b/src-tauri/src/adapters/driven/sqlite/package_repo.rs new file mode 100644 index 0000000..23b47d1 --- /dev/null +++ 
b/src-tauri/src/adapters/driven/sqlite/package_repo.rs @@ -0,0 +1,526 @@ +//! SQLite implementation of `PackageRepository` (CQRS write side). + +use sea_orm::{DatabaseConnection, EntityTrait, QueryOrder, sea_query::OnConflict}; + +use crate::adapters::driven::sqlite::entities::package; +use crate::adapters::driven::sqlite::util::{block_on, map_db_err, safe_u64}; +use crate::domain::error::DomainError; +use crate::domain::model::download::DownloadId; +use crate::domain::model::package::{Package, PackageId}; +use crate::domain::ports::driven::package_repository::PackageRepository; + +pub struct SqlitePackageRepo { + db: DatabaseConnection, +} + +impl SqlitePackageRepo { + pub fn new(db: DatabaseConnection) -> Self { + Self { db } + } +} + +impl PackageRepository for SqlitePackageRepo { + fn find_by_id(&self, id: &PackageId) -> Result<Option<Package>, DomainError> { + let id_value = id.as_str().to_string(); + block_on(async { + let model = package::Entity::find_by_id(id_value) + .one(&self.db) + .await + .map_err(map_db_err)?; + match model { + Some(m) => Ok(Some(m.into_domain()?)), + None => Ok(None), + } + }) + } + + fn save(&self, package: &Package) -> Result<(), DomainError> { + let active = package::ActiveModel::from_domain(package)?; + + block_on(async { + // Upsert by primary key. `created_at` is intentionally omitted + // from the update column list so the original insertion + // timestamp stays stable across subsequent saves — consistent + // with the account repo's behavior and required for stable + // list ordering.
+ package::Entity::insert(active) + .on_conflict( + OnConflict::column(package::Column::Id) + .update_columns([ + package::Column::Name, + package::Column::SourceType, + package::Column::FolderPath, + package::Column::Password, + package::Column::AutoExtract, + package::Column::Priority, + ]) + .to_owned(), + ) + .exec(&self.db) + .await + .map_err(map_db_err)?; + Ok(()) + }) + } + + fn list(&self) -> Result<Vec<Package>, DomainError> { + block_on(async { + let models = package::Entity::find() + .order_by_asc(package::Column::CreatedAt) + .order_by_asc(package::Column::Id) + .all(&self.db) + .await + .map_err(map_db_err)?; + models.into_iter().map(|m| m.into_domain()).collect() + }) + } + + fn delete(&self, id: &PackageId) -> Result<(), DomainError> { + let id_value = id.as_str().to_string(); + block_on(async { + package::Entity::delete_by_id(id_value) + .exec(&self.db) + .await + .map_err(map_db_err)?; + Ok(()) + }) + } + + fn list_downloads(&self, id: &PackageId) -> Result<Vec<DownloadId>, DomainError> { + // `download::Model` does not yet expose `package_id` as a typed + // column (the FK was added in a later migration), so query via + // raw SQL to keep this commit self-contained. Future tasks that + // wire `package_id` into the download write path can swap this + // for a typed `find().filter(...)` chain. + use sea_orm::{ConnectionTrait, Statement}; + + let id_value = id.as_str().to_string(); + block_on(async { + let rows = self + .db + .query_all(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Sqlite, + "SELECT id FROM downloads WHERE package_id = ?
ORDER BY queue_position ASC, id ASC", + [id_value.into()], + )) + .await + .map_err(map_db_err)?; + + rows.into_iter() + .map(|row| { + row.try_get_by_index::<i64>(0) + .map(|raw| DownloadId(safe_u64(raw))) + .map_err(map_db_err) + }) + .collect() + }) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::adapters::driven::sqlite::connection::setup_test_db; + use crate::domain::model::package::{Package, PackageId, PackageSourceType}; + use sea_orm::{ConnectionTrait, Statement}; + + fn make_package(id: &str, name: &str, source_type: PackageSourceType) -> Package { + Package::new( + PackageId::new(id), + name.to_string(), + source_type, + 1_700_000_000_000, + ) + } + + /// Insert a minimal `downloads` row referencing a package id. Only the + /// not-null columns required by the schema are populated — irrelevant + /// fields default. The caller supplies the download id and queue position. + async fn insert_download_in_package( + db: &sea_orm::DatabaseConnection, + download_id: i64, + queue_position: i64, + package_id: Option<&str>, + ) { + let pkg = match package_id { + Some(p) => format!("'{p}'"), + None => "NULL".to_string(), + }; + let sql = format!( + "INSERT INTO downloads (id, url, file_name, state, priority, queue_position, downloaded_bytes, speed_bytes_per_sec, retry_count, max_retries, segments_count, source_hostname, protocol, resume_supported, destination_path, created_at, updated_at, package_id) VALUES ({download_id}, 'https://example.com/f.zip', 'f.zip', 'Queued', 5, {queue_position}, 0, 0, 0, 5, 1, 'example.com', 'https', 0, '/tmp', 1, 1, {pkg})" + ); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + sql, + )) + .await + .expect("seed download"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_save_and_find_package_round_trip_preserves_all_fields() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + let mut pkg = make_package("pkg-1", "Holiday",
PackageSourceType::Playlist); + pkg.set_folder_path(Some("/tmp/holiday".to_string())); + pkg.set_password(Some("keyring://pkg/holiday".to_string())); + pkg.set_auto_extract(false); + pkg.set_priority(9).expect("valid priority"); + + repo.save(&pkg).expect("save"); + + let found = repo + .find_by_id(&PackageId::new("pkg-1")) + .expect("find") + .expect("package should exist"); + + assert_eq!(found.id().as_str(), "pkg-1"); + assert_eq!(found.name(), "Holiday"); + assert_eq!(found.source_type(), PackageSourceType::Playlist); + assert_eq!(found.folder_path(), Some("/tmp/holiday")); + assert_eq!(found.password(), Some("keyring://pkg/holiday")); + assert!(!found.auto_extract()); + assert_eq!(found.priority(), 9); + assert_eq!(found.created_at(), 1_700_000_000_000); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_save_upsert_updates_existing_package() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + let mut pkg = make_package("pkg-up", "Initial", PackageSourceType::Manual); + repo.save(&pkg).expect("first save"); + + pkg = Package::reconstruct( + PackageId::new("pkg-up"), + "Renamed".to_string(), + PackageSourceType::Container, + Some("/srv/x".to_string()), + None, + false, + 2, + // Different created_at — must NOT overwrite the stored value. 
+ 9_999_999_999_999, + ) + .expect("valid priority"); + repo.save(&pkg).expect("upsert"); + + let found = repo + .find_by_id(&PackageId::new("pkg-up")) + .expect("find") + .expect("present"); + assert_eq!(found.name(), "Renamed"); + assert_eq!(found.source_type(), PackageSourceType::Container); + assert_eq!(found.folder_path(), Some("/srv/x")); + assert!(!found.auto_extract()); + assert_eq!(found.priority(), 2); + assert_eq!( + found.created_at(), + 1_700_000_000_000, + "upsert must not rewrite created_at" + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_by_id_not_found_returns_none() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + let result = repo + .find_by_id(&PackageId::new("missing")) + .expect("find_by_id"); + assert!(result.is_none()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_list_returns_packages_ordered_by_created_at_then_id() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + let a = Package::new( + PackageId::new("a"), + "A".to_string(), + PackageSourceType::Manual, + 10, + ); + let b = Package::new( + PackageId::new("b"), + "B".to_string(), + PackageSourceType::Manual, + 10, + ); + let c = Package::new( + PackageId::new("c"), + "C".to_string(), + PackageSourceType::Manual, + 20, + ); + repo.save(&c).unwrap(); + repo.save(&a).unwrap(); + repo.save(&b).unwrap(); + + let listed = repo.list().expect("list"); + assert_eq!(listed.len(), 3); + assert_eq!(listed[0].id().as_str(), "a"); + assert_eq!(listed[1].id().as_str(), "b"); + assert_eq!(listed[2].id().as_str(), "c"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_delete_removes_package() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + repo.save(&make_package("pkg-del", "X", PackageSourceType::Manual)) + .expect("save"); + 
repo.delete(&PackageId::new("pkg-del")).expect("delete"); + + let found = repo.find_by_id(&PackageId::new("pkg-del")).expect("find"); + assert!(found.is_none()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_delete_missing_package_is_noop() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + repo.delete(&PackageId::new("ghost")).expect("delete"); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_delete_package_sets_member_downloads_package_id_to_null() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db.clone()); + + repo.save(&make_package("pkg-fk", "FK", PackageSourceType::Manual)) + .expect("save package"); + + // Seed two downloads attached to the package. + insert_download_in_package(&db, 1, 0, Some("pkg-fk")).await; + insert_download_in_package(&db, 2, 1, Some("pkg-fk")).await; + + // Sanity: list_downloads sees both, ordered by queue_position. + let members_before = repo.list_downloads(&PackageId::new("pkg-fk")).unwrap(); + assert_eq!(members_before, vec![DownloadId(1), DownloadId(2)]); + + repo.delete(&PackageId::new("pkg-fk")).expect("delete"); + + // The downloads still exist — only the FK is cleared. + let rows = db + .query_all(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "SELECT id, package_id FROM downloads WHERE id IN (1, 2) ORDER BY id".to_string(), + )) + .await + .expect("query downloads"); + assert_eq!(rows.len(), 2, "downloads must survive package deletion"); + for row in &rows { + let pkg_id: Option<String> = row.try_get_by_index(1).unwrap(); + assert!( + pkg_id.is_none(), + "package_id must be NULL after package deletion (got {pkg_id:?})" + ); + } + + // And list_downloads now returns empty for that package id. 
+ let members_after = repo.list_downloads(&PackageId::new("pkg-fk")).unwrap(); + assert!(members_after.is_empty()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_list_downloads_filters_by_package_id_and_orders_by_queue_position() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db.clone()); + + repo.save(&make_package("pkg-ord", "Ord", PackageSourceType::Manual)) + .expect("save"); + + // 3 downloads in pkg-ord with shuffled queue_position, plus one + // unattached download that must NOT show up in the result. + insert_download_in_package(&db, 100, 5, Some("pkg-ord")).await; + insert_download_in_package(&db, 101, 1, Some("pkg-ord")).await; + insert_download_in_package(&db, 102, 3, Some("pkg-ord")).await; + insert_download_in_package(&db, 999, 0, None).await; + + let members = repo.list_downloads(&PackageId::new("pkg-ord")).unwrap(); + assert_eq!( + members, + vec![DownloadId(101), DownloadId(102), DownloadId(100)], + "results ordered by queue_position ascending and exclude unattached downloads" + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_list_downloads_returns_empty_for_unknown_package() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + let members = repo + .list_downloads(&PackageId::new("never-existed")) + .unwrap(); + assert!(members.is_empty()); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_source_type_round_trip_through_db_for_each_variant() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + let kinds = [ + ("ct-id", PackageSourceType::Container), + ("pl-id", PackageSourceType::Playlist), + ("mn-id", PackageSourceType::Manual), + ("sa-id", PackageSourceType::SplitArchive), + ]; + for (id, src) in kinds { + let pkg = Package::new(PackageId::new(id), "n".to_string(), src, 0); + repo.save(&pkg).expect("save"); 
+ let found = repo.find_by_id(&PackageId::new(id)).unwrap().unwrap(); + assert_eq!(found.source_type(), src); + } + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_by_id_returns_validation_error_on_unknown_source_type() { + // Defensive path: a row whose source_type slipped past the + // application layer (e.g. manual migration, dropped enum + // variant) must surface as ValidationError, not panic. + let db = setup_test_db().await.expect("test db"); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "INSERT INTO packages (id, name, source_type, auto_extract, priority, created_at) VALUES ('pkg-bad', 'Bad', 'unknown-type', 1, 5, 0)" + .to_string(), + )) + .await + .expect("seed bad row"); + + let repo = SqlitePackageRepo::new(db); + let err = repo + .find_by_id(&PackageId::new("pkg-bad")) + .expect_err("invalid source_type must fail"); + assert!( + matches!(err, DomainError::ValidationError(_)), + "expected ValidationError, got {err:?}" + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_by_id_returns_validation_error_when_priority_out_of_u8_range() { + let db = setup_test_db().await.expect("test db"); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "INSERT INTO packages (id, name, source_type, auto_extract, priority, created_at) VALUES ('pkg-prio', 'Prio', 'manual', 1, 9999, 0)" + .to_string(), + )) + .await + .expect("seed bad priority"); + + let repo = SqlitePackageRepo::new(db); + let err = repo + .find_by_id(&PackageId::new("pkg-prio")) + .expect_err("priority overflow must fail"); + assert!( + matches!(err, DomainError::ValidationError(_)), + "expected ValidationError, got {err:?}" + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_by_id_rejects_priority_zero() { + let db = setup_test_db().await.expect("test db"); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + 
"INSERT INTO packages (id, name, source_type, auto_extract, priority, created_at) VALUES ('pkg-zero', 'Zero', 'manual', 1, 0, 0)" + .to_string(), + )) + .await + .expect("seed"); + + let repo = SqlitePackageRepo::new(db); + let err = repo + .find_by_id(&PackageId::new("pkg-zero")) + .expect_err("priority 0 must be rejected"); + assert!(matches!(err, DomainError::ValidationError(_))); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_by_id_rejects_negative_created_at() { + // A corrupt row with a negative created_at must surface as + // ValidationError instead of being silently coerced to 0 and + // jumping to the front of the ordered list. + let db = setup_test_db().await.expect("test db"); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "INSERT INTO packages (id, name, source_type, auto_extract, priority, created_at) VALUES ('pkg-neg', 'Neg', 'manual', 1, 5, -1)" + .to_string(), + )) + .await + .expect("seed"); + + let repo = SqlitePackageRepo::new(db); + let err = repo + .find_by_id(&PackageId::new("pkg-neg")) + .expect_err("negative created_at must be rejected"); + assert!(matches!(err, DomainError::ValidationError(_))); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_find_by_id_rejects_auto_extract_outside_zero_one() { + let db = setup_test_db().await.expect("test db"); + db.execute(Statement::from_string( + sea_orm::DatabaseBackend::Sqlite, + "INSERT INTO packages (id, name, source_type, auto_extract, priority, created_at) VALUES ('pkg-ae', 'AE', 'manual', 7, 5, 0)" + .to_string(), + )) + .await + .expect("seed"); + + let repo = SqlitePackageRepo::new(db); + let err = repo + .find_by_id(&PackageId::new("pkg-ae")) + .expect_err("auto_extract=7 must be rejected"); + assert!(matches!(err, DomainError::ValidationError(_))); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn 
test_save_returns_validation_error_when_created_at_overflows_i64() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + let pkg = Package::reconstruct( + PackageId::new("pkg-of"), + "Overflow".to_string(), + PackageSourceType::Manual, + None, + None, + true, + 5, + // Beyond i64::MAX → must be rejected at conversion. + u64::MAX, + ) + .expect("valid priority"); + let err = repo.save(&pkg).expect_err("created_at overflow must fail"); + assert!( + matches!(err, DomainError::ValidationError(_)), + "expected ValidationError, got {err:?}" + ); + } + + #[tokio::test(flavor = "multi_thread", worker_threads = 2)] + async fn test_optional_fields_persist_as_null_when_unset() { + let db = setup_test_db().await.expect("test db"); + let repo = SqlitePackageRepo::new(db); + + let pkg = make_package("pkg-null", "N", PackageSourceType::Manual); + repo.save(&pkg).expect("save"); + + let found = repo + .find_by_id(&PackageId::new("pkg-null")) + .unwrap() + .unwrap(); + assert!(found.folder_path().is_none()); + assert!(found.password().is_none()); + // Defaults populated from `Package::new`. 
+ assert!(found.auto_extract()); + assert_eq!(found.priority(), 5); + } +} diff --git a/src-tauri/src/domain/event.rs b/src-tauri/src/domain/event.rs index e6480e3..61c8382 100644 --- a/src-tauri/src/domain/event.rs +++ b/src-tauri/src/domain/event.rs @@ -1,5 +1,6 @@ use crate::domain::model::account::AccountId; use crate::domain::model::download::DownloadId; +use crate::domain::model::package::PackageId; use crate::domain::model::views::HistoryEntry; /// Read-model projection inputs captured at the moment a `Download` is @@ -202,7 +203,7 @@ pub enum DomainEvent { // Packages PackageCreated { - id: u64, + id: PackageId, name: String, }, @@ -379,13 +380,13 @@ mod tests { #[test] fn test_package_created_event() { let event = DomainEvent::PackageCreated { - id: 99, + id: PackageId::new("pkg-99"), name: "My Package".to_string(), }; assert_eq!( event, DomainEvent::PackageCreated { - id: 99, + id: PackageId::new("pkg-99"), name: "My Package".to_string() } ); diff --git a/src-tauri/src/domain/model/mod.rs b/src-tauri/src/domain/model/mod.rs index 65485b2..2641fda 100644 --- a/src-tauri/src/domain/model/mod.rs +++ b/src-tauri/src/domain/model/mod.rs @@ -25,7 +25,7 @@ pub use download::{Download, DownloadId, DownloadState, FileSize, Speed, Url}; pub use http::HttpResponse; pub use link::LinkStatus; pub use meta::{DownloadMeta, SegmentMeta}; -pub use package::Package; +pub use package::{DEFAULT_PACKAGE_PRIORITY, Package, PackageId, PackageSourceType}; pub use plugin::{PluginCategory, PluginInfo, PluginManifest}; pub use queue::{Priority, QueuePosition}; pub use segment::{Segment, SegmentState}; diff --git a/src-tauri/src/domain/model/package.rs b/src-tauri/src/domain/model/package.rs index 3ec76fc..554bca0 100644 --- a/src-tauri/src/domain/model/package.rs +++ b/src-tauri/src/domain/model/package.rs @@ -1,23 +1,168 @@ +use std::fmt; +use std::str::FromStr; + +use crate::domain::error::DomainError; use crate::domain::model::download::DownloadId; +/// Identifier of a `Package` 
aggregate. Stored as `TEXT` in SQLite — the +/// caller picks the format (UUID, slug…). The wrapper makes the type +/// distinct from a plain `String` and from other `*Id` types. +#[derive(Debug, Clone, PartialEq, Eq, Hash)] +pub struct PackageId(pub String); + +impl PackageId { + pub fn new(value: impl Into<String>) -> Self { + Self(value.into()) + } + + pub fn as_str(&self) -> &str { + &self.0 + } +} + +impl fmt::Display for PackageId { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + f.write_str(&self.0) + } +} + +/// Origin of a `Package`. Persisted as a lower-snake-case string. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum PackageSourceType { + /// Container file imported from disk (DLC, CCF, RSDF, Metalink…). + Container, + /// Auto-grouped playlist extracted by a crawler plugin. + Playlist, + /// User-built package (manual grouping). + Manual, + /// Multi-part archive auto-grouped by file naming convention. + SplitArchive, +} + +impl fmt::Display for PackageSourceType { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + let s = match self { + PackageSourceType::Container => "container", + PackageSourceType::Playlist => "playlist", + PackageSourceType::Manual => "manual", + PackageSourceType::SplitArchive => "split_archive", + }; + f.write_str(s) + } +} + +impl FromStr for PackageSourceType { + type Err = DomainError; + + fn from_str(s: &str) -> Result<Self, Self::Err> { + match s { + "container" => Ok(PackageSourceType::Container), + "playlist" => Ok(PackageSourceType::Playlist), + "manual" => Ok(PackageSourceType::Manual), + "split_archive" => Ok(PackageSourceType::SplitArchive), + other => Err(DomainError::ValidationError(format!( + "invalid package source type: {other}" + ))), + } + } +} + +/// Default scheduling priority for a package (1..=10 scale, mid-range). +pub const DEFAULT_PACKAGE_PRIORITY: u8 = 5; + +/// Validate a priority is inside the documented `1..=10` band. 
+fn validate_package_priority(priority: u8) -> Result<u8, DomainError> { + if (1..=10).contains(&priority) { + Ok(priority) + } else { + Err(DomainError::ValidationError(format!( + "invalid package priority {priority}: must be between 1 and 10" + ))) + } +} + #[derive(Debug, Clone, PartialEq)] pub struct Package { - id: u64, + id: PackageId, name: String, - download_ids: Vec<DownloadId>, + source_type: PackageSourceType, + folder_path: Option<String>, + /// Reference to the keyring entry holding the archive password, or + /// `None` when the package has no password. The repo persists the + /// raw string verbatim — the keyring lookup happens elsewhere. + password: Option<String>, + auto_extract: bool, + priority: u8, created_at: u64, + /// In-memory aggregate children. Persistence stores the inverse FK + /// on `downloads.package_id` — never a column on `packages` itself. + download_ids: Vec<DownloadId>, } impl Package { - pub fn new(id: u64, name: String) -> Self { + pub fn new( + id: PackageId, + name: String, + source_type: PackageSourceType, + created_at: u64, + ) -> Self { + Self { + id, + name, + source_type, + folder_path: None, + password: None, + auto_extract: true, + priority: DEFAULT_PACKAGE_PRIORITY, + created_at, download_ids: Vec::new(), - created_at: 0, } } + /// Rebuild a package from persisted state without children. Used by + /// the SQLite adapter; the children list is repopulated separately + /// via `PackageRepository::list_downloads`. 
+ #[allow(clippy::too_many_arguments)] + pub fn reconstruct( + id: PackageId, + name: String, + source_type: PackageSourceType, + folder_path: Option<String>, + password: Option<String>, + auto_extract: bool, + priority: u8, + created_at: u64, + ) -> Result<Self, DomainError> { + Ok(Self { + id, + name, + source_type, + folder_path, + password, + auto_extract, + priority: validate_package_priority(priority)?, + created_at, + download_ids: Vec::new(), + }) + } + + pub fn set_folder_path(&mut self, path: Option<String>) { + self.folder_path = path; + } + + pub fn set_password(&mut self, password: Option<String>) { + self.password = password; + } + + pub fn set_auto_extract(&mut self, enabled: bool) { + self.auto_extract = enabled; + } + + pub fn set_priority(&mut self, priority: u8) -> Result<(), DomainError> { + self.priority = validate_package_priority(priority)?; + Ok(()) + } + pub fn add_download(&mut self, id: DownloadId) { if !self.download_ids.contains(&id) { self.download_ids.push(id); @@ -40,14 +185,34 @@ impl Package { &self.download_ids } - pub fn id(&self) -> u64 { - self.id + pub fn id(&self) -> &PackageId { + &self.id } pub fn name(&self) -> &str { &self.name } + pub fn source_type(&self) -> PackageSourceType { + self.source_type + } + + pub fn folder_path(&self) -> Option<&str> { + self.folder_path.as_deref() + } + + pub fn password(&self) -> Option<&str> { + self.password.as_deref() + } + + pub fn auto_extract(&self) -> bool { + self.auto_extract + } + + pub fn priority(&self) -> u8 { + self.priority + } + pub fn created_at(&self) -> u64 { self.created_at } @@ -58,19 +223,85 @@ mod tests { use super::*; fn make_package() -> Package { - Package::new(1, "My Package".to_string()) + Package::new( + PackageId::new("pkg-1"), + "My Package".to_string(), + PackageSourceType::Manual, + 1_700_000_000_000, + ) } #[test] - fn test_package_new() { + fn test_package_new_initialises_defaults() { let p = make_package(); - assert_eq!(p.id(), 1); + assert_eq!(p.id().as_str(), "pkg-1"); assert_eq!(p.name(), "My 
Package"); - assert_eq!(p.created_at(), 0); + assert_eq!(p.source_type(), PackageSourceType::Manual); + assert!(p.folder_path().is_none()); + assert!(p.password().is_none()); + assert!(p.auto_extract()); + assert_eq!(p.priority(), DEFAULT_PACKAGE_PRIORITY); + assert_eq!(p.created_at(), 1_700_000_000_000); assert_eq!(p.download_count(), 0); assert!(p.downloads().is_empty()); } + #[test] + fn test_package_default_priority_is_five() { + assert_eq!(DEFAULT_PACKAGE_PRIORITY, 5); + let p = make_package(); + assert_eq!(p.priority(), 5); + } + + #[test] + fn test_package_setters_store_optional_fields() { + let mut p = make_package(); + p.set_folder_path(Some("/tmp/dl".to_string())); + p.set_password(Some("keyring://pkg/secret".to_string())); + p.set_auto_extract(false); + p.set_priority(9).expect("valid priority"); + assert_eq!(p.folder_path(), Some("/tmp/dl")); + assert_eq!(p.password(), Some("keyring://pkg/secret")); + assert!(!p.auto_extract()); + assert_eq!(p.priority(), 9); + } + + #[test] + fn test_package_set_priority_rejects_zero() { + let mut p = make_package(); + let err = p.set_priority(0).expect_err("zero is invalid"); + assert!(matches!(err, DomainError::ValidationError(_))); + assert_eq!(p.priority(), DEFAULT_PACKAGE_PRIORITY); + } + + #[test] + fn test_package_set_priority_rejects_above_ten() { + let mut p = make_package(); + let err = p.set_priority(11).expect_err("11 is invalid"); + assert!(matches!(err, DomainError::ValidationError(_))); + assert_eq!(p.priority(), DEFAULT_PACKAGE_PRIORITY); + } + + #[test] + fn test_package_set_priority_accepts_boundaries() { + let mut p = make_package(); + p.set_priority(1).expect("1 valid"); + assert_eq!(p.priority(), 1); + p.set_priority(10).expect("10 valid"); + assert_eq!(p.priority(), 10); + } + + #[test] + fn test_package_setters_clear_optional_fields() { + let mut p = make_package(); + p.set_folder_path(Some("/x".to_string())); + p.set_password(Some("k".to_string())); + p.set_folder_path(None); + 
p.set_password(None); + assert!(p.folder_path().is_none()); + assert!(p.password().is_none()); + } + #[test] fn test_package_add_download() { let mut p = make_package(); @@ -99,14 +330,14 @@ mod tests { } #[test] - fn test_package_remove_nonexistent() { + fn test_package_remove_nonexistent_is_noop() { let mut p = make_package(); p.remove_download(DownloadId(99)); assert_eq!(p.download_count(), 0); } #[test] - fn test_package_download_count() { + fn test_package_download_count_grows_with_each_unique_id() { let mut p = make_package(); assert_eq!(p.download_count(), 0); p.add_download(DownloadId(1)); @@ -116,10 +347,78 @@ mod tests { } #[test] - fn test_package_contains() { + fn test_package_contains_reflects_membership() { let mut p = make_package(); assert!(!p.contains_download(DownloadId(5))); p.add_download(DownloadId(5)); assert!(p.contains_download(DownloadId(5))); } + + #[test] + fn test_package_reconstruct_preserves_persisted_fields() { + let p = Package::reconstruct( + PackageId::new("pkg-r"), + "Reloaded".to_string(), + PackageSourceType::Container, + Some("/srv/dl".to_string()), + Some("keyring://srv/secret".to_string()), + false, + 7, + 1_700_000_000_001, + ) + .expect("valid priority"); + assert_eq!(p.id().as_str(), "pkg-r"); + assert_eq!(p.name(), "Reloaded"); + assert_eq!(p.source_type(), PackageSourceType::Container); + assert_eq!(p.folder_path(), Some("/srv/dl")); + assert_eq!(p.password(), Some("keyring://srv/secret")); + assert!(!p.auto_extract()); + assert_eq!(p.priority(), 7); + assert_eq!(p.created_at(), 1_700_000_000_001); + assert!(p.downloads().is_empty()); + } + + #[test] + fn test_package_reconstruct_rejects_priority_out_of_range() { + for bad in [0u8, 11, 99] { + let err = Package::reconstruct( + PackageId::new("pkg-r"), + "x".to_string(), + PackageSourceType::Manual, + None, + None, + true, + bad, + 0, + ) + .expect_err("priority must be rejected"); + assert!(matches!(err, DomainError::ValidationError(_))); + } + } + + #[test] + fn 
test_package_id_display_returns_inner_value() { + assert_eq!(PackageId::new("abc-42").to_string(), "abc-42"); + assert_eq!(PackageId::new("abc-42").as_str(), "abc-42"); + } + + #[test] + fn test_package_source_type_round_trip_via_string() { + for variant in [ + PackageSourceType::Container, + PackageSourceType::Playlist, + PackageSourceType::Manual, + PackageSourceType::SplitArchive, + ] { + let s = variant.to_string(); + let parsed: PackageSourceType = s.parse().expect("round trip"); + assert_eq!(parsed, variant); + } + } + + #[test] + fn test_package_source_type_from_str_rejects_unknown() { + let result: Result<PackageSourceType, DomainError> = "garbage".parse(); + assert!(matches!(result, Err(DomainError::ValidationError(_)))); + } } diff --git a/src-tauri/src/domain/ports/driven/mod.rs b/src-tauri/src/domain/ports/driven/mod.rs index eeaab0d..ae8a0fd 100644 --- a/src-tauri/src/domain/ports/driven/mod.rs +++ b/src-tauri/src/domain/ports/driven/mod.rs @@ -18,6 +18,7 @@ pub mod file_opener; pub mod file_storage; pub mod history_repository; pub mod http_client; +pub mod package_repository; pub mod passphrase_codec; pub mod plugin_config_store; pub mod plugin_loader; @@ -43,6 +44,7 @@ pub use file_opener::FileOpener; pub use file_storage::FileStorage; pub use history_repository::HistoryRepository; pub use http_client::HttpClient; +pub use package_repository::PackageRepository; pub use passphrase_codec::PassphraseCodec; pub use plugin_config_store::PluginConfigStore; pub use plugin_loader::PluginLoader; diff --git a/src-tauri/src/domain/ports/driven/package_repository.rs b/src-tauri/src/domain/ports/driven/package_repository.rs new file mode 100644 index 0000000..4902562 --- /dev/null +++ b/src-tauri/src/domain/ports/driven/package_repository.rs @@ -0,0 +1,38 @@ +//! Write repository for the `Package` aggregate (CQRS write side). +//! +//! `Package` aggregates a logical group of downloads (manual grouping, +//! auto-grouped playlist, container import, multi-part archive, …). +//! 
Persistence stores the package row plus the inverse foreign key on +//! `downloads.package_id` so deleting a package detaches its members +//! without losing the download history. + +use crate::domain::error::DomainError; +use crate::domain::model::download::DownloadId; +use crate::domain::model::package::{Package, PackageId}; + +/// Persists and retrieves `Package` aggregates. +pub trait PackageRepository: Send + Sync { + /// Look up a package by its identifier. Returns `None` when no row + /// matches (and not an error). + fn find_by_id(&self, id: &PackageId) -> Result<Option<Package>, DomainError>; + + /// Insert or update a package. Implementations upsert by primary key + /// and must preserve `created_at` across subsequent saves so list + /// ordering stays stable. + fn save(&self, package: &Package) -> Result<(), DomainError>; + + /// Every persisted package, ordered by `created_at` ascending then + /// `id` ascending for a stable, deterministic order. + fn list(&self) -> Result<Vec<Package>, DomainError>; + + /// Delete a package by id. No-op when the row is missing. Member + /// downloads keep existing — `downloads.package_id` is reset to + /// `NULL` by the FK's `ON DELETE SET NULL` clause. + fn delete(&self, id: &PackageId) -> Result<(), DomainError>; + + /// Return the ids of every download currently attached to the given + /// package, ordered by `queue_position` ascending so the caller can + /// surface them in scheduling order. Returns an empty vector when + /// no download references the package. 
+ fn list_downloads(&self, id: &PackageId) -> Result<Vec<DownloadId>, DomainError>; +} diff --git a/src-tauri/src/domain/ports/driven/tests.rs b/src-tauri/src/domain/ports/driven/tests.rs index b9f5750..ab3613c 100644 --- a/src-tauri/src/domain/ports/driven/tests.rs +++ b/src-tauri/src/domain/ports/driven/tests.rs @@ -15,6 +15,7 @@ use crate::domain::model::credential::Credential; use crate::domain::model::download::{Download, DownloadId, DownloadState}; use crate::domain::model::http::HttpResponse; use crate::domain::model::meta::DownloadMeta; +use crate::domain::model::package::{Package, PackageId, PackageSourceType}; use crate::domain::model::plugin::{PluginCategory, PluginInfo, PluginManifest}; use crate::domain::model::views::{ DownloadDetailView, DownloadFilter, DownloadView, HistoryEntry, HistoryFilter, HistorySort, @@ -727,6 +728,90 @@ impl AccountRepository for InMemoryAccountRepository { } } + +// ── InMemoryPackageRepository ──────────────────────────────────── + +struct InMemoryPackageRepository { + store: Mutex<HashMap<PackageId, Package>>, + /// Members are stored as `(queue_position, download_id)` so that + /// `list_downloads` can mirror the SQLite adapter's ordering + /// contract (asc by `queue_position`). + members: Mutex<HashMap<PackageId, Vec<(i64, DownloadId)>>>, +} + +impl InMemoryPackageRepository { + fn new() -> Self { + Self { + store: Mutex::new(HashMap::new()), + members: Mutex::new(HashMap::new()), + } + } + + fn attach_download(&self, package_id: &PackageId, queue_position: i64, download: DownloadId) { + self.members + .lock() + .unwrap() + .entry(package_id.clone()) + .or_default() + .push((queue_position, download)); + } +} + +impl PackageRepository for InMemoryPackageRepository { + fn find_by_id(&self, id: &PackageId) -> Result<Option<Package>, DomainError> { + Ok(self.store.lock().unwrap().get(id).cloned()) + } + + fn save(&self, package: &Package) -> Result<(), DomainError> { + let mut guard = self.store.lock().unwrap(); + // Mirror SQLite: created_at is insert-only. 
+ let created_at = match guard.get(package.id()) { + Some(existing) => existing.created_at(), + None => package.created_at(), + }; + let stored = Package::reconstruct( + package.id().clone(), + package.name().to_string(), + package.source_type(), + package.folder_path().map(str::to_string), + package.password().map(str::to_string), + package.auto_extract(), + package.priority(), + created_at, + )?; + guard.insert(package.id().clone(), stored); + Ok(()) + } + + fn list(&self) -> Result<Vec<Package>, DomainError> { + let mut packages: Vec<Package> = self.store.lock().unwrap().values().cloned().collect(); + packages.sort_by(|a, b| { + a.created_at() + .cmp(&b.created_at()) + .then_with(|| a.id().as_str().cmp(b.id().as_str())) + }); + Ok(packages) + } + + fn delete(&self, id: &PackageId) -> Result<(), DomainError> { + self.store.lock().unwrap().remove(id); + // FK ON DELETE SET NULL semantics: detach members but keep them. + self.members.lock().unwrap().remove(id); + Ok(()) + } + + fn list_downloads(&self, id: &PackageId) -> Result<Vec<DownloadId>, DomainError> { + let mut members = self + .members + .lock() + .unwrap() + .get(id) + .cloned() + .unwrap_or_default(); + members.sort_by(|(qa, da), (qb, db)| qa.cmp(qb).then_with(|| da.0.cmp(&db.0))); + Ok(members.into_iter().map(|(_, id)| id).collect()) + } +} + #[test] fn in_memory_account_repository_round_trip_preserves_fields() { let repo = InMemoryAccountRepository::new(); @@ -849,6 +934,178 @@ fn all_driven_port_mocks_are_send_sync() { assert_send_sync::(); assert_send_sync::(); assert_send_sync::(); + assert_send_sync::<InMemoryPackageRepository>(); +} + +#[test] +fn in_memory_package_repository_round_trip_preserves_all_fields() { + let repo = InMemoryPackageRepository::new(); + let mut pkg = Package::new( + PackageId::new("pkg-rt"), + "Holiday photos".to_string(), + PackageSourceType::Playlist, + 1_700_000_000_000, + ); + pkg.set_folder_path(Some("/tmp/holiday".to_string())); + pkg.set_password(Some("keyring://pkg/holiday".to_string())); + pkg.set_auto_extract(false); + 
pkg.set_priority(8).expect("valid priority"); + + repo.save(&pkg).expect("save"); + let found = repo + .find_by_id(&PackageId::new("pkg-rt")) + .expect("find") + .expect("present"); + assert_eq!(found.id().as_str(), "pkg-rt"); + assert_eq!(found.name(), "Holiday photos"); + assert_eq!(found.source_type(), PackageSourceType::Playlist); + assert_eq!(found.folder_path(), Some("/tmp/holiday")); + assert_eq!(found.password(), Some("keyring://pkg/holiday")); + assert!(!found.auto_extract()); + assert_eq!(found.priority(), 8); + assert_eq!(found.created_at(), 1_700_000_000_000); +} + +#[test] +fn in_memory_package_repository_save_preserves_original_created_at() { + let repo = InMemoryPackageRepository::new(); + let original = Package::new( + PackageId::new("pkg-stable"), + "Vol 1".to_string(), + PackageSourceType::Manual, + 1_700_000_000_000, + ); + repo.save(&original).expect("first save"); + + let updated = Package::new( + PackageId::new("pkg-stable"), + "Vol 1 — updated".to_string(), + PackageSourceType::Manual, + 9_999_999_999_999, + ); + repo.save(&updated).expect("upsert"); + + let found = repo + .find_by_id(&PackageId::new("pkg-stable")) + .expect("find") + .expect("present"); + assert_eq!(found.created_at(), 1_700_000_000_000); + assert_eq!(found.name(), "Vol 1 — updated"); +} + +#[test] +fn in_memory_package_repository_list_orders_by_created_at_then_id() { + let repo = InMemoryPackageRepository::new(); + repo.save(&Package::new( + PackageId::new("c"), + "C".to_string(), + PackageSourceType::Manual, + 20, + )) + .unwrap(); + repo.save(&Package::new( + PackageId::new("a"), + "A".to_string(), + PackageSourceType::Manual, + 10, + )) + .unwrap(); + repo.save(&Package::new( + PackageId::new("b"), + "B".to_string(), + PackageSourceType::Manual, + 10, + )) + .unwrap(); + + let listed = repo.list().expect("list"); + assert_eq!(listed.len(), 3); + // Ordered by (created_at asc, id asc) → a, b, c + assert_eq!(listed[0].id().as_str(), "a"); + 
assert_eq!(listed[1].id().as_str(), "b"); + assert_eq!(listed[2].id().as_str(), "c"); +} + +#[test] +fn in_memory_package_repository_delete_drops_member_attachments() { + let repo = InMemoryPackageRepository::new(); + let pkg = Package::new( + PackageId::new("pkg-del"), + "Doomed".to_string(), + PackageSourceType::Manual, + 0, + ); + repo.save(&pkg).unwrap(); + repo.attach_download(&PackageId::new("pkg-del"), 0, DownloadId(1)); + assert_eq!( + repo.list_downloads(&PackageId::new("pkg-del")) + .unwrap() + .len(), + 1 + ); + + repo.delete(&PackageId::new("pkg-del")).unwrap(); + assert!( + repo.find_by_id(&PackageId::new("pkg-del")) + .unwrap() + .is_none() + ); + assert!( + repo.list_downloads(&PackageId::new("pkg-del")) + .unwrap() + .is_empty() + ); +} + +#[test] +fn in_memory_package_repository_list_downloads_returns_attached_ids() { + let repo = InMemoryPackageRepository::new(); + let pkg_id = PackageId::new("pkg-x"); + repo.save(&Package::new( + pkg_id.clone(), + "X".to_string(), + PackageSourceType::Manual, + 0, + )) + .unwrap(); + repo.attach_download(&pkg_id, 0, DownloadId(7)); + repo.attach_download(&pkg_id, 1, DownloadId(11)); + + let members = repo.list_downloads(&pkg_id).unwrap(); + assert_eq!(members, vec![DownloadId(7), DownloadId(11)]); + // Other packages have no members. + assert!( + repo.list_downloads(&PackageId::new("ghost")) + .unwrap() + .is_empty() + ); +} + +#[test] +fn in_memory_package_repository_list_downloads_orders_by_queue_position() { + // Mock must mirror the SQLite adapter's contract: members come back + // ordered by queue_position regardless of insertion order, otherwise + // port-level tests would let production diverge from the mock. + let repo = InMemoryPackageRepository::new(); + let pkg_id = PackageId::new("pkg-order"); + repo.save(&Package::new( + pkg_id.clone(), + "Ordered".to_string(), + PackageSourceType::Manual, + 0, + )) + .unwrap(); + // Insert out of order on purpose. 
+ repo.attach_download(&pkg_id, 5, DownloadId(50)); + repo.attach_download(&pkg_id, 1, DownloadId(10)); + repo.attach_download(&pkg_id, 3, DownloadId(30)); + + let members = repo.list_downloads(&pkg_id).unwrap(); + assert_eq!( + members, + vec![DownloadId(10), DownloadId(30), DownloadId(50)], + "list_downloads must sort by queue_position asc" + ); } #[test] diff --git a/src-tauri/src/lib.rs b/src-tauri/src/lib.rs index 3d4b7a1..45d661d 100644 --- a/src-tauri/src/lib.rs +++ b/src-tauri/src/lib.rs @@ -46,6 +46,7 @@ pub use adapters::driven::sqlite::connection; pub use adapters::driven::sqlite::download_read_repo::SqliteDownloadReadRepo; pub use adapters::driven::sqlite::download_repo::SqliteDownloadRepo; pub use adapters::driven::sqlite::history_repo::SqliteHistoryRepo; +pub use adapters::driven::sqlite::package_repo::SqlitePackageRepo; pub use adapters::driven::sqlite::progress_bridge::spawn_sqlite_progress_bridge; pub use adapters::driven::sqlite::stats_repo::SqliteStatsRepo; pub use adapters::driven::tray::{