feat(domain): package read queries with SQL-aggregated stats #132
Conversation
Adds three CQRS query handlers (`list_packages`, `get_package`, `list_package_downloads`) backed by a new `PackageReadRepository` port and a `SqlitePackageReadRepo` adapter. The adapter computes `downloads_count`, `total_bytes`, `downloaded_bytes`, `progress_percent`, and `all_completed` in a single `LEFT JOIN packages → downloads` with `GROUP BY p.id`, so listing N packages stays one round-trip instead of N+1. `PackageFilter` AND-combines `source_type` (exact lowercase wire form) and `name_q` (case-insensitive substring via `LOWER(p.name) LIKE %?%`); a blank `name_q` is treated as "no filter". `PackageViewDto` (camelCase serde) re-exposes the aggregated `PackageView` to the frontend with no password or credential reference field, by construction. `list_package_downloads` reuses the existing `DownloadView` so the React layer can render member rows with the same component as the main downloads list. Three Tauri IPC commands are wired (`package_list`, `package_get`, `package_list_downloads`); `package_list` validates an unknown `source_type` argument up-front. Twenty-three new unit and integration tests cover the three acceptance criteria (SQL-side stats, fuzzy `name_q`, in-memory SQLite fixtures) plus edge cases (mixed states, all-completed flip, unknown total → 0, deterministic ordering, no leak across packages, missing-repo validation, DTO camelCase and no-password serialization). Unblocks task 29 (Vue Packages React).
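For orientation, a minimal sketch of the single-round-trip aggregation described above, assuming the table and column names mentioned in this PR; it is not the actual `PACKAGE_AGG_SELECT` constant and omits filtering.

```rust
// Hypothetical sketch of a one-round-trip package aggregation query.
// Table/column names are assumptions taken from the PR description,
// not the literal PACKAGE_AGG_SELECT used in the adapter.
const PACKAGE_AGG_SELECT_SKETCH: &str = "\
    SELECT \
        p.id, p.name, p.source_type, p.created_at, \
        COUNT(d.id) AS downloads_count, \
        COALESCE(SUM(COALESCE(d.total_bytes, 0)), 0) AS total_bytes_sum, \
        COALESCE(SUM(d.downloaded_bytes), 0) AS downloaded_bytes_sum, \
        CASE WHEN COUNT(d.id) > 0 \
             AND COUNT(d.id) = SUM(CASE WHEN d.state = 'Completed' THEN 1 ELSE 0 END) \
             THEN 1 ELSE 0 END AS all_completed \
    FROM packages p \
    LEFT JOIN downloads d ON d.package_id = p.id \
    GROUP BY p.id \
    ORDER BY p.created_at, p.id";
```

Listing N packages then costs one query regardless of how many member downloads each package has; a `progress_percent` can be derived in Rust from the two byte sums.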
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough: Adds a package read-side CQRS surface.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as Frontend (Tauri)
    participant IPC as Tauri IPC Handler
    participant QB as QueryBus
    participant Repo as SqlitePackageReadRepo
    participant DB as SQLite DB
    UI->>IPC: invoke "package_list"(source_type?, name_q?)
    IPC->>QB: ListPackagesQuery(filter?)
    QB->>Repo: find_packages(filter?)
    Repo->>DB: SQL LEFT JOIN packages ↔ downloads (with filters, ordering)
    DB-->>Repo: rows (aggregated)
    Repo-->>QB: Vec<PackageView>
    QB-->>IPC: Vec<PackageViewDto>
    IPC-->>UI: return DTOs (JSON)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 5 passed checks.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0a3348a95a
Actionable comments posted: 1
🧹 Nitpick comments (2)
src-tauri/src/adapters/driving/tauri_ipc.rs (1)
3080-3087: ⚡ Quick win: Canonicalize `source_type` after parsing, before building `PackageFilter`. Right now the parsed value is used only for validation, then the raw input is passed to `PackageFilter`. Using the parsed/canonical wire value makes filtering resilient and avoids subtle false-empty results. Proposed patch:
```diff
-    let source_type = source_type
-        .map(|s| s.trim().to_string())
-        .filter(|s| !s.is_empty());
-    if let Some(ref raw) = source_type {
-        // Validate eagerly so callers see "invalid source type" instead
-        // of an empty result set.
-        parse_package_source_type(raw)?;
-    }
+    let source_type = match source_type
+        .map(|s| s.trim().to_string())
+        .filter(|s| !s.is_empty())
+    {
+        Some(raw) => {
+            // Validate eagerly and normalize to wire form.
+            let parsed = parse_package_source_type(&raw)?;
+            Some(parsed.to_string())
+        }
+        None => None,
+    };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driving/tauri_ipc.rs` around lines 3080-3087: the code validates source_type with parse_package_source_type(raw) but then uses the original raw string when building PackageFilter, risking mismatches; modify the flow so parse_package_source_type(raw) returns the canonical/wire representation (or derive it from the parsed enum) and assign that canonical string back into source_type (replace the raw) before constructing PackageFilter; specifically, call parse_package_source_type(raw) to obtain the canonical value and set source_type = Some(canonical) so PackageFilter is built from the canonical wire value rather than the raw input.

src-tauri/src/application/test_support.rs (1)
671-676: ⚡ Quick win: Align fake download ordering with the SQLite contract. `InMemoryPackageReadRepo` currently preserves caller order, but production returns downloads ordered by `queue_position ASC, id ASC`. Sorting in the fake keeps handler tests faithful to real behavior. Proposed patch:
```diff
-    pub(crate) fn attach_downloads(&self, package_id: &str, downloads: Vec<DownloadView>) {
+    pub(crate) fn attach_downloads(&self, package_id: &str, mut downloads: Vec<DownloadView>) {
+        downloads.sort_by(|a, b| {
+            a.queue_position
+                .cmp(&b.queue_position)
+                .then_with(|| a.id.0.cmp(&b.id.0))
+        });
         self.downloads_by_package
             .lock()
             .unwrap()
             .insert(package_id.to_string(), downloads);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/test_support.rs` around lines 671 - 676, The fake repository's attach_downloads currently inserts the provided downloads preserving caller order; change it to sort the Vec<DownloadView> by (queue_position ASC, id ASC) before inserting so behavior matches SQLite. In the attach_downloads method on the InMemoryPackageReadRepo/test_support (the function attach_downloads and the downloads_by_package map), sort the downloads vector using DownloadView.queue_position then DownloadView.id and insert the sorted list into downloads_by_package.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/package_read_repo.rs`:
- Around line 146-147: The code currently coerces invalid model.priority to 5
via u8::try_from(model.priority).unwrap_or(5), hiding bad persisted data; change
this to surface a validation/storage error instead: replace the unwrap_or usage
in package_read_repo.rs (the code that assigns priority_u8 from model.priority)
with explicit validation that converts model.priority using u8::try_from and on
Err returns or propagates a descriptive error (e.g.,
RepositoryError::InvalidData or map_err with context) so callers can
handle/storage can be fixed; keep the created_at conversion (safe_u64)
unchanged.
📒 Files selected for processing (17)
- CHANGELOG.md
- src-tauri/src/adapters/driven/sqlite/mod.rs
- src-tauri/src/adapters/driven/sqlite/package_read_repo.rs
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/application/queries/get_package.rs
- src-tauri/src/application/queries/list_package_downloads.rs
- src-tauri/src/application/queries/list_packages.rs
- src-tauri/src/application/queries/mod.rs
- src-tauri/src/application/query_bus.rs
- src-tauri/src/application/read_models/mod.rs
- src-tauri/src/application/read_models/package_view.rs
- src-tauri/src/application/test_support.rs
- src-tauri/src/domain/model/mod.rs
- src-tauri/src/domain/model/views.rs
- src-tauri/src/domain/ports/driven/mod.rs
- src-tauri/src/domain/ports/driven/package_read_repository.rs
- src-tauri/src/lib.rs
The package aggregate's `downloaded_bytes_sum` used `SUM(d.downloaded_bytes)` which can lag behind `total_bytes` for downloads in the `Completed` state (last-segment commit drift, restart races, etc.). The per-download view already treats `Completed` as 100% via `compute_progress_percent_for_download`, so the package showed a lower percentage than the sum of its rendered children. Switch the SUM to a CASE: completed members contribute their full `total_bytes` (with a `downloaded_bytes` fallback when total is NULL), non-completed members keep contributing their actual downloaded bytes. Two regression tests cover the drift case and the NULL-total Completed fallback.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/package_read_repo.rs`:
- Around line 192-197: The name_q handling must escape SQL LIKE metacharacters
so inputs like "%" or "_" are treated literally; update the block that sets
lowered_name/params and clauses (where f.name_q is used, and variables
lowered_name, clauses, params, Value) to first escape backslashes, percent and
underscore characters (e.g. replace '\' -> '\\', '%' -> '\%', '_' -> '\_'), then
lowercase that escaped string, wrap it with %...% and push it into params, and
change the clause to include an explicit ESCAPE clause (e.g. "LOWER(p.name) LIKE
? ESCAPE '\\'") so the database interprets the escape sequences correctly.
📒 Files selected for processing (1)
src-tauri/src/adapters/driven/sqlite/package_read_repo.rs
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c40208fa65
```
p.id, p.name, p.source_type, p.folder_path, p.auto_extract, p.priority, p.created_at, \
COUNT(d.id) AS downloads_count, \
COALESCE(SUM(COALESCE(d.total_bytes, 0)), 0) AS total_bytes_sum, \
COALESCE(SUM(CASE WHEN d.state = 'Completed' THEN COALESCE(d.total_bytes, d.downloaded_bytes) ELSE d.downloaded_bytes END), 0) AS downloaded_bytes_sum, \
```
Clamp mixed package progress when completed totals are unknown
This aggregate can produce progress_percent > 100 for mixed-state packages: when a Completed member has total_bytes = NULL, downloaded_bytes_sum adds its downloaded_bytes (line 123) but total_bytes_sum still adds 0 (line 122). If the same package also has non-completed rows with known totals, the numerator can exceed the denominator, so aggregate_progress_percent reports impossible values (e.g., 800%+). Please make numerator/denominator treatment consistent for unknown totals (or cap aggregate progress to 100) in this branch.
> This aggregate can produce `progress_percent > 100` for mixed-state packages: when a `Completed` member has `total_bytes = NULL`, `downloaded_bytes_sum` adds its `downloaded_bytes` (line 123) but `total_bytes_sum` still adds `0` (line 122).
Caught a real regression I introduced in c40208f. Fixed in 242fe4a — both sums now share the same per-row CASE for Completed rows (COALESCE(total_bytes, downloaded_bytes)), so the NULL-total fallback hits the numerator and the denominator together. Added test_find_packages_mixed_state_with_unknown_completed_total_stays_under_100 (500 NULL-total Completed + 50/100 Downloading → 91.7%, not 500%) and updated the existing single-member test to assert symmetric credit.
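A sketch of the symmetric per-row CASE described in this fix, using the column names from the hunk above; this is an illustration of the idea under those assumptions, not the literal diff of 242fe4a.

```rust
// Both sums fall back to downloaded_bytes for Completed rows with an unknown
// total, so numerator and denominator move together (assumed shape, not the
// exact committed string).
const BYTE_SUMS_SKETCH: &str = "\
    COALESCE(SUM(CASE WHEN d.state = 'Completed' \
                      THEN COALESCE(d.total_bytes, d.downloaded_bytes) \
                      ELSE COALESCE(d.total_bytes, 0) END), 0) AS total_bytes_sum, \
    COALESCE(SUM(CASE WHEN d.state = 'Completed' \
                      THEN COALESCE(d.total_bytes, d.downloaded_bytes) \
                      ELSE d.downloaded_bytes END), 0) AS downloaded_bytes_sum";
```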
Two issues in `SqlitePackageReadRepo::find_packages` aggregation:

1. Asymmetric treatment of Completed members with NULL `total_bytes`. The previous CASE only hit `downloaded_bytes_sum`, leaving the denominator at zero, so a small Downloading member alongside a NULL-total Completed member produced `progress_percent` well over 100% (e.g. 500 / 100 = 500%). Apply the same `COALESCE(total_bytes, downloaded_bytes)` fallback to both numerator and denominator so the ratio stays bounded.
2. `name_q` filter passed user input straight into `LIKE` without escaping `%` / `_`. A query for `100%` matched `100 packages`, and `foo_bar` matched `foo-bar`. Add an `escape_like` helper, escape `\`, `%`, `_` in the input, and append `ESCAPE '\'` to the SQL clause.

Regression tests:
- `test_find_packages_mixed_state_with_unknown_completed_total_stays_under_100`
- `test_find_packages_filter_name_q_escapes_like_percent_metachar`
- `test_find_packages_filter_name_q_escapes_like_underscore_metachar`
- `test_find_packages_filter_name_q_escapes_backslash_metachar`

Adjusted existing `test_find_packages_completed_member_without_total_falls_back_to_downloaded` to reflect the symmetric numerator/denominator credit.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/package_read_repo.rs`:
- Around line 123-140: Update the public comment/contract for
PackageView.total_bytes in src-tauri/src/domain/model/views.rs to reflect the
new aggregation logic in PACKAGE_AGG_SELECT: explain that for downloads in state
'Completed' with NULL total_bytes the aggregate treats their contribution as
downloaded_bytes (not zero), so total_bytes_sum may include downloaded_bytes for
Completed items; adjust wording to remove the old claim that unknown sizes
contribute 0 and explicitly describe the symmetric CASE behavior used for
total_bytes_sum and downloaded_bytes_sum so callers know the true semantics.
- Around line 206-216: The current code in package_read_repo.rs builds a clause
using "LOWER(p.name) LIKE ? ESCAPE '\'" (see the f.name_q branch, clauses,
params and escape_like usage) which fails for non-ASCII Unicode case-insensitive
matches; replace this approach by either (A) using a case-insensitive collation
in the SQL (e.g., "p.name LIKE ? ESCAPE '\\' COLLATE NOCASE" or add COLLATE
NOCASE on the column) so SQLite handles Unicode-insensitive matching, or (B)
perform Unicode case-folding on the search term in Rust before pushing the param
(apply Unicode case-fold via Rust's to_lowercase() or Unicode casefold crate)
and keep the literal escaping via escape_like; update the clauses and params
construction accordingly and remove reliance on LOWER(p.name) LIKE.
📒 Files selected for processing (1)
src-tauri/src/adapters/driven/sqlite/package_read_repo.rs
…vity

Stock SQLite's `LOWER()` only case-folds ASCII, so the previous `LOWER(p.name) LIKE ?` clause would silently fail to match `CAFÉ` against a search for `café`. Drop the SQL-side `LIKE` and the `escape_like` helper entirely; instead fetch the source-type-filtered rows and apply the substring match in Rust via `name.to_lowercase().contains(needle_lower)`. Mirrors the pattern already used by `history_repo`. As a side benefit, `%` and `_` no longer need escaping — they are ordinary bytes to `str::contains`.

Updated `PackageView.total_bytes` and `PackageFilter.name_q` doc comments in `domain/model/views.rs` to match the new aggregate semantics (Completed-NULL fallback) and the Unicode-aware filter contract. Added `test_find_packages_filter_name_q_matches_non_ascii_case_insensitively` covering `CAFÉ`/`café` and `Ärger`/`ärger`. Renamed the three metachar-literal tests to drop the `_escapes_` prefix since literalness now comes from filtering in Rust, not from a SQL ESCAPE clause.
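A minimal sketch of the Rust-side substring filter this commit describes; `PackageRow` and `filter_by_name` are invented names for illustration and do not match the adapter's actual code.

```rust
// Illustrative only: PackageRow and filter_by_name are assumed names.
struct PackageRow {
    name: String,
}

// Unicode-aware, case-insensitive substring filtering applied in Rust after the
// SQL query, instead of a LOWER(p.name) LIKE ? clause.
fn filter_by_name(rows: Vec<PackageRow>, name_q: Option<&str>) -> Vec<PackageRow> {
    let needle = name_q
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_lowercase);
    match needle {
        // Blank or missing name_q means "no filter".
        None => rows,
        Some(needle) => rows
            .into_iter()
            // to_lowercase() case-folds beyond ASCII, so "CAFÉ" matches "café";
            // '%' and '_' are ordinary characters to str::contains.
            .filter(|row| row.name.to_lowercase().contains(&needle))
            .collect(),
    }
}
```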
Actionable comments posted: 1
🧹 Nitpick comments (1)
src-tauri/src/adapters/driven/sqlite/package_read_repo.rs (1)
199-207: Consider an indexed normalized-name strategy for future scale. Current Rust-side filtering is semantically correct; if package counts grow significantly, consider a persisted normalized/case-folded name column with an index so Unicode-correct matching can stay SQL-side without full in-memory post-filtering.
Also applies to: 227-231
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/sqlite/package_read_repo.rs` around lines 199 - 207, The current in-memory Unicode-aware filtering using trimmed.to_lowercase() and name_needle in package_read_repo.rs (and similar logic in the history_repo path) is fine functionally but will be inefficient at scale; introduce a persisted normalized/case-folded name column (e.g., normalized_name) with a SQLite index, update insert/update paths to populate it using a Unicode-aware normalization/casefold routine, migrate existing rows to fill normalized_name, and modify the query logic that sets name_needle to instead bind the pre-normalized search term to a SQL WHERE clause on normalized_name (falling back to the Rust-side filter only if normalization/indexing is unavailable) so matches can be performed on the DB side efficiently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/package_read_repo.rs`:
- Around line 38-46: The computed progress in aggregate_progress_percent can
exceed 100.0 when downloaded_bytes > total_bytes; update
aggregate_progress_percent to compute the ratio via round_one_dp(downloaded as
f64 / total as f64 * 100.0) and then clamp the returned f64 into the [0.0,
100.0] range (e.g., max(0.0, min(100.0, value))). Apply the same clamping
wherever a raw percent is computed (ensure any other percent calculations
matching this logic also clamp to [0.0,100.0]).
📒 Files selected for processing (2)
- src-tauri/src/adapters/driven/sqlite/package_read_repo.rs
- src-tauri/src/domain/model/views.rs
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1a36e14767
```rust
    .collect::<Result<Vec<_>, _>>()?;

let downloads = download::Entity::find()
    .filter(download::Column::Id.is_in(download_ids.clone()))
```
Chunk ID filters to prevent oversized SQLite IN queries
When a package has a large number of member downloads, is_in(download_ids.clone()) expands to one SQL bind parameter per id; this can exceed SQLite's host-parameter limit (build-dependent, commonly 999 or 32766) and make find_package_downloads fail with a storage error instead of returning members. This affects real workloads that bulk-group many downloads into a single package, and it occurs before any fallback path because both the downloads and segments lookups use the full unchunked id list.
> When a package has a large number of member downloads, `is_in(download_ids.clone())` expands to one SQL bind parameter per id; this can exceed SQLite's host-parameter limit (build-dependent, commonly 999 or 32766) and make `find_package_downloads` fail with a storage error.
Fixed in eceacd2. Added SQLITE_IN_CHUNK = 900 (well below the 999 floor) and chunk both the download::Entity::find().filter(Id.is_in(...)) lookup and the download_segment lookup with download_ids.chunks(SQLITE_IN_CHUNK). The order-preserving by_id HashMap merge step was already chunk-agnostic. Regression test test_find_package_downloads_chunks_oversized_in_query seeds SQLITE_IN_CHUNK + 50 rows in one package and asserts every member comes back in queue order.
Two unrelated correctness issues in `SqlitePackageReadRepo` raised in review:

1. `aggregate_progress_percent` and `compute_progress_percent_for_download` could return values above 100 when a non-Completed row persisted `downloaded_bytes > total_bytes` (segment over-fetch, retry double-count). Apply `.min(100.0)` to both to honour the documented `[0.0, 100.0]` contract.
2. `find_package_downloads` previously bound every member ID into a single `Id IN (?, ?, ...)` clause. Packages with thousands of members (bulk RSS / podcast imports) would exceed `SQLITE_MAX_VARIABLE_NUMBER` (999 on capped builds, 32 766 on modern builds). Add a `SQLITE_IN_CHUNK = 900` constant and chunk both the download lookup and the segment lookup; the order-preserving `by_id` HashMap merge step is already chunk-agnostic.

Regression tests:
- `test_find_packages_progress_clamps_above_one_hundred_percent`
- `test_find_package_downloads_chunks_oversized_in_query` (seeds `SQLITE_IN_CHUNK + 50` rows and verifies every one is returned in queue order).
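A generic sketch of the chunking idea from item 2, with the database call abstracted behind a closure; `SQLITE_IN_CHUNK` matches the constant named above, while the function name and error handling are illustrative rather than the PR's SeaORM code.

```rust
// Keep each IN (...) clause under SQLite's bind-parameter limit by querying in
// chunks and concatenating the results. The fetch closure stands in for the
// SeaORM lookup; names other than SQLITE_IN_CHUNK are illustrative.
const SQLITE_IN_CHUNK: usize = 900;

fn fetch_in_chunks<T, E>(
    ids: &[String],
    mut fetch: impl FnMut(&[String]) -> Result<Vec<T>, E>,
) -> Result<Vec<T>, E> {
    let mut out = Vec::new();
    for chunk in ids.chunks(SQLITE_IN_CHUNK) {
        // At most SQLITE_IN_CHUNK ids are bound per query, well below the
        // 999-variable floor of older SQLite builds.
        out.extend(fetch(chunk)?);
    }
    Ok(out)
}
```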
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/package_read_repo.rs`:
- Around line 157-160: The ETA calculation in package_read_repo.rs can return a
non-None value for already-completed rows; update the match guard for
eta_seconds so it only computes an ETA when total is Some(t), speed > 0, and
downloaded < t (i.e., not completed). Concretely, change the condition currently
matching Some(t) if speed > 0 && t > downloaded to ensure downloaded < t (or !=
t) is enforced so eta_seconds is set to None for completed downloads; reference
the eta_seconds binding and the total/speed/downloaded variables in the change.
📒 Files selected for processing (1)
src-tauri/src/adapters/driven/sqlite/package_read_repo.rs
`download_row_to_view` could return a positive `eta_seconds` for a row in state `Completed` whenever the persisted bytes were stale (engine crashed mid-flush, manual state edit) — the per-row view already reports 100% progress on Completed, so the UI got a self-contradictory "done, X seconds remaining" tile. Add an explicit state guard so `Completed` always yields `eta_seconds = None` regardless of byte counters. Regression test `test_find_package_downloads_eta_is_none_for_completed_with_stale_speed` seeds a Completed row at 600/1000 bytes with speed=100 (which would naively compute ETA=4s) and asserts `eta_seconds == None` while a Downloading sibling with the same numbers still gets `Some(4)`.
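A small sketch of the guard described here; the real code presumably matches on a state enum rather than a string, so treat the signature and names as illustrative.

```rust
// Completed rows never report an ETA, even if persisted byte counters are stale.
// The &str state and this free function are illustrative, not the adapter's API.
fn eta_seconds(state: &str, total: Option<u64>, downloaded: u64, speed_bps: u64) -> Option<u64> {
    if state == "Completed" {
        return None; // done is done, regardless of stale counters
    }
    match total {
        Some(t) if speed_bps > 0 && t > downloaded => Some((t - downloaded) / speed_bps),
        _ => None,
    }
}
```

With the numbers from the regression test (600/1000 bytes, speed 100), a Downloading row yields `Some(4)` while a Completed row yields `None`.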
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: cbf5fa4163
```rust
        DomainError::StorageError(format!("invalid download state in DB: {}", model.state))
    })?;
    let priority_u8 = u8::try_from(model.priority).unwrap_or(5);
    let created_at = safe_u64(model.created_at);
```
Preserve legacy created_at fallback in package download views
download_row_to_view sets created_at directly from model.created_at, but legacy rows can persist created_at = 0 and SqliteDownloadReadRepo explicitly normalizes those via id/updated_at fallback. As written, package_list_downloads can return created_at: 0 for downloads that show a valid timestamp elsewhere, which can surface incorrect dates and inconsistent ordering behavior in clients that rely on created_at as a secondary key.
> `download_row_to_view` sets `created_at` directly from `model.created_at`, but legacy rows can persist `created_at = 0` and `SqliteDownloadReadRepo` explicitly normalizes those via id/updated_at fallback.
Fixed in f30fd83. Extracted the fallback chain from download_read_repo::read_created_at into a shared resolve_download_created_at(raw_ca, raw_id, raw_ua) helper in util.rs (takes primitives so util does not depend on entity types) and routed both readers through it. Regression test test_find_package_downloads_resolves_legacy_zero_created_at covers the three fallback branches: snowflake-id inference (1_700_000_000_000 << 12 → recovered), updated_at fallback when the id has no embedded ms, and the MIN_PLAUSIBLE_UNIX_MS sentinel when nothing is salvageable.
`download_row_to_view` in `package_read_repo` was reading `model.created_at` raw, returning `0` for legacy rows persisted before the timestamp backfill landed. The Downloads view (`download_read_repo`) already recovers a real timestamp via id-inferred → updated_at → `MIN_PLAUSIBLE_UNIX_MS` fallback, so a download could surface a 1970 date in `package_list_downloads` while showing the correct date in the main downloads list. Extract the fallback into a shared `resolve_download_created_at(raw_ca, raw_id, raw_ua)` helper in `util.rs` (taking primitives so `util` does not depend on entity types) and route both `download_read_repo` and `package_read_repo` through it. Regression test `test_find_package_downloads_resolves_legacy_zero_created_at` covers all three branches: snowflake-id inference, updated_at fallback, and the `MIN_PLAUSIBLE_UNIX_MS` sentinel for rows with no signal at all.
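A sketch of the fallback chain under the assumptions stated in the test description (timestamp in the id's high bits, a plausibility threshold); the constant value and the exact bit layout are guesses, not the real util.rs code.

```rust
// Resolve a usable creation timestamp for legacy rows that persisted created_at = 0.
// MIN_PLAUSIBLE_UNIX_MS and the `>> 12` id layout are assumptions for illustration.
const MIN_PLAUSIBLE_UNIX_MS: u64 = 1_000_000_000_000; // roughly Sep 2001 in ms

fn resolve_download_created_at(raw_created_at: u64, raw_id: u64, raw_updated_at: u64) -> u64 {
    if raw_created_at >= MIN_PLAUSIBLE_UNIX_MS {
        return raw_created_at; // stored value is already plausible
    }
    // Snowflake-style ids embed the creation time (ms) in the high bits.
    let id_ms = raw_id >> 12;
    if id_ms >= MIN_PLAUSIBLE_UNIX_MS {
        return id_ms;
    }
    if raw_updated_at >= MIN_PLAUSIBLE_UNIX_MS {
        return raw_updated_at;
    }
    MIN_PLAUSIBLE_UNIX_MS // last-resort sentinel instead of a 1970 date
}
```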
Summary
Implements CQRS read-side queries for Package aggregates with SQL-level aggregation (COUNT, SUM, AVG) of member downloads. Unblocks the Package UI (task 29). Fully tested: 16 new integration tests, all acceptance criteria met.
Why
Packages need pre-computed statistics (download count, total size, progress %) without N+1 query overhead. CQRS separation: PackageReadRepository returns flattened DTOs from a single GROUP BY query, never exposes mutation. Matches established pattern from Accounts queries.
Changes
- `PackageView` and `PackageFilter` domain models (domain/model/views.rs)
- `PackageReadRepository` port trait (domain/ports/driven/package_read_repository.rs) with three methods (a signature sketch follows this list):
  - `find_packages(filter)` — lists with optional source_type and name_q (fuzzy) filters
  - `find_package_by_id(id)` — returns a single package or None
  - `find_package_downloads(id)` — returns member downloads ordered by queue_position
- `SqlitePackageReadRepo` adapter (adapters/driven/sqlite/package_read_repo.rs):
  - `PACKAGE_AGG_SELECT` constant: single `LEFT JOIN ... GROUP BY` query with stats aggregation
  - `row_to_view()` for SQL→domain mapping (16 integration tests verify correctness)
- `ListPackagesQuery`, `GetPackageQuery`, `ListPackageDownloadsQuery`
- `PackageViewDto` read model with `#[serde(rename_all = "camelCase")]` for the frontend
- IPC commands: `package_list`, `package_get`, `package_list_downloads`
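As referenced in the list above, a rough signature sketch of the port; async-trait usage, the error type, and the exact parameter types are assumptions, not the literal trait in package_read_repository.rs.

```rust
// Assumed shape of the read-side port; signatures are illustrative.
#[async_trait::async_trait]
pub trait PackageReadRepository: Send + Sync {
    async fn find_packages(&self, filter: Option<PackageFilter>)
        -> Result<Vec<PackageView>, DomainError>;
    async fn find_package_by_id(&self, id: &str)
        -> Result<Option<PackageView>, DomainError>;
    async fn find_package_downloads(&self, id: &str)
        -> Result<Vec<DownloadView>, DomainError>;
}
```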
Testing

All acceptance criteria verified:
- `LEFT JOIN ... GROUP BY p.id`, not N+1 application queries. Tested by `test_find_packages_aggregates_member_downloads`.
- `LOWER(p.name) LIKE ?` is a case-insensitive substring match. Verified by `test_find_packages_filter_by_name_q_is_case_insensitive_substring`.
- `setup_test_db()` + fixtures, no mocks. All pass locally and in CI.

Full test output: 1299 cargo tests pass, 605 vitest pass, clippy clean, cargo fmt clean, oxlint clean.
Related Issues
Checklist
Summary by cubic
Add read-side package queries with SQL-aggregated stats to avoid N+1 and unblock the Packages UI (task 28). Also clamps progress, scales large member lookups, suppresses ETAs on Completed, and fixes legacy created_at fallback in package member views.
New Features
- `PackageReadRepository` and `SqlitePackageReadRepo` compute `downloads_count`, `total_bytes`, `downloaded_bytes`, `progress_percent`, and `all_completed` via one `LEFT JOIN ... GROUP BY`.
- `ListPackagesQuery`, `GetPackageQuery`, `ListPackageDownloadsQuery` wired into `QueryBus`; DTO `PackageViewDto` (camelCase, no password). IPC: `package_list`, `package_get`, `package_list_downloads`; list results ordered by `(created_at, id)` and member downloads by `queue_position`.
- `source_type` is an exact match; `name_q` is a Unicode-aware, case-insensitive substring match in Rust; both AND together; a blank `name_q` is ignored.

Bug Fixes
- Credits `Completed` members at full `total_bytes` (falling back to `downloaded_bytes` when total is NULL) symmetrically in numerator and denominator.
- Clamps `progress_percent` to 100% when `downloaded_bytes > total_bytes` for non-completed rows (aggregate and per-download views).
- Chunks ID lookups in `find_package_downloads` to stay under SQLite's variable limit for large packages.
- Suppresses `eta_seconds` for `Completed` downloads.
- Resolves legacy `created_at` with a fallback chain (id-inferred → `updated_at` → sentinel) via `resolve_download_created_at`, used in both `package_read_repo` and `download_read_repo` to avoid "1970" dates and keep ordering consistent.

Written for commit f30fd83. Summary will update on new commits.
Summary by CodeRabbit