feat(core): history queries/commands IPC handlers (task 01) #93
Expose CQRS history surface through Tauri IPC:

- queries: `history_list`, `history_search`, `history_get_by_id`
- commands: `history_export` (CSV RFC 4180 / JSON pretty; see the quoting sketch below), `history_delete_entry`, `history_clear`, `history_purge_older_than(days)`

`HistoryRepository` port extended with `list` / `search` / `find_by_id` / `delete_by_id` / `delete_all` — implemented by `SqliteHistoryRepo` with date-range filtering, sort-by field and pagination. `HistoryEntry` now carries the primary key so the UI can target individual rows (`HistoryViewDto` exposes `entry_id`). `CommandBus` gains a `history_repo` port; a shared `test_support` module provides `NoopHistoryRepo` / `InMemoryHistoryRepo` doubles so each handler suite does not redeclare trait stubs.

Incidental clippy cleanup surfaced by `-D warnings` on stable 1.95:

- `toggle_clipboard` / `registry`: `assert!` replacing `assert_eq!(_, true/false)`
- `register_local_file`: `contains(&event)` instead of `iter().any(...)`
- `config`: `apply_patch` moved before the `#[cfg(test)]` block
- `adapters/driven/stats`: dropped dead in-memory module

Tests: 666 lib pass (4 ignored); new coverage on `history_repo` 97%, `delete_history` 100%, `export_history` 97% (incl. CSV quoting and pretty-JSON golden), `purge_history` 100%; `list`/`search`/`get_history_entry` handlers exercised end-to-end via QueryBus.
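For context, the RFC 4180 quoting rule the CSV export follows can be sketched as below — `quote_csv_field` is an illustrative name, not the PR's actual helper:

```rust
/// Quote a CSV field per RFC 4180: wrap values containing a comma, quote,
/// CR or LF in double quotes, and double any embedded quotes.
fn quote_csv_field(field: &str) -> String {
    let needs_quoting = field.contains(|c| matches!(c, ',' | '"' | '\r' | '\n'));
    if needs_quoting {
        format!("\"{}\"", field.replace('"', "\"\""))
    } else {
        field.to_string()
    }
}

fn main() {
    assert_eq!(quote_csv_field("plain"), "plain");
    assert_eq!(quote_csv_field("a,b"), "\"a,b\"");
    assert_eq!(quote_csv_field("say \"hi\""), "\"say \"\"hi\"\"\"");
}
```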
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
No actionable comments were generated in the recent review. 🎉
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds a full history feature: repository APIs (list/search/find/delete/clear), DTO/read-model changes, SQLite and in-memory repo implementations, CQRS commands/queries and CommandBus/QueryBus handlers, Tauri IPC handlers (list/search/get/export/delete/clear/purge), export formatting, and test wiring updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Frontend as Frontend (IPC)
participant Tauri as Tauri IPC
participant Bus as CommandBus / QueryBus
participant Repo as HistoryRepository
participant FS as Filesystem
rect rgba(200,200,255,0.5)
Frontend->>Tauri: history_export(format, path)
Tauri->>Bus: ExportHistoryCommand
Bus->>Repo: list(filter=None, sort=None, limit=MAX_PAGE, offset=0..)
Repo-->>Bus: Vec<HistoryEntry>
Bus->>FS: ensure_parent_dir(path)
Bus->>FS: write_file(path, encoded_bytes)
Bus-->>Tauri: Ok(count)
Tauri-->>Frontend: result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

✅ Passed checks (4 passed)
Greptile Summary

This PR wires the history CQRS surface to Tauri IPC.

Confidence Score: 5/5

Safe to merge — all prior P0/P1 concerns are resolved and the only remaining finding is a P2 style suggestion about code duplication. All previously raised issues (LIKE wildcard escaping, hostname filter, export truncation, InMemoryHistoryRepo sort) are fully addressed. The new implementation is well-tested with high coverage, the pagination is stable thanks to the id DESC tiebreaker, and the CSV injection mitigation follows OWASP guidance. The sole remaining finding is a minor duplication concern in test support code: src-tauri/src/application/test_support.rs — a duplicated host-extraction function that could drift from the production version.
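The OWASP mitigation the summary refers to comes down to neutralizing cells that start with a formula trigger. A minimal sketch, assuming a hypothetical helper name (the PR's exporter combines this with RFC 4180 quoting):

```rust
/// Guard against spreadsheet formula injection: a leading '=', '+', '-',
/// '@', TAB or CR gets an apostrophe prefix so Excel/Sheets treat the
/// cell as text. (Illustrative helper, not the PR's exact code.)
fn neutralize_formula(field: &str) -> String {
    match field.chars().next() {
        Some('=' | '+' | '-' | '@' | '\t' | '\r') => format!("'{field}"),
        _ => field.to_string(),
    }
}
```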
| Filename | Overview |
|---|---|
| src-tauri/src/adapters/driven/sqlite/history_repo.rs | Major new impl: adds list, search, find_by_id, delete_by_id, delete_all. Hostname filtering is now done in Rust via extract_host (correctly avoiding LIKE wildcard issues). Stable pagination with id DESC tiebreaker is well-designed. Search fetches most-recent 500 rows and filters in Rust. |
| src-tauri/src/application/commands/export_history.rs | New file. Paginates list() to avoid the prior 500-row truncation. CSV escaping guards formula injection per OWASP guidance; JSON uses serde_json. Accumulates all rows in memory before writing — acceptable tradeoff for a desktop download manager. |
| src-tauri/src/adapters/driving/tauri_ipc.rs | Adds 7 history IPC commands/queries. history_purge_older_than guards days==0. build_history_filter collapses blank strings to None. All new handlers delegate cleanly to the bus with appropriate error mapping. |
| src-tauri/src/application/test_support.rs | New consolidated test-doubles module. InMemoryHistoryRepo correctly implements sort, hostname (via duplicated host_component), pagination, and search consistent with the SQLite adapter's documented contract. host_component is a copy of extract_host and could diverge on bugfixes. |
| src-tauri/src/domain/ports/driven/history_repository.rs | Port extended with list, search, find_by_id, delete_by_id, delete_all. MAX_HISTORY_PAGE_SIZE and MAX_HISTORY_SEARCH_RESULTS constants are defined here and shared by all implementations. |
| src-tauri/src/application/commands/purge_history.rs | Simple new handler that delegates delete_older_than to the repo. IPC layer handles days==0 guard and computes the timestamp. |
| src-tauri/src/application/commands/delete_history.rs | New file. delete_history_entry maps delete_by_id returning false to a NotFound error. clear_history delegates delete_all and returns count. |
| src-tauri/src/application/read_models/history_view.rs | Adds entry_id (primary key) to HistoryViewDto. camelCase serialization confirmed by test. download_id is serialized as a string, which is consistent with the existing pattern. |
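The stable-pagination idea the table credits to `history_repo.rs` has a simple in-memory analogue; this runnable sketch (illustrative `Row` struct, not the adapter's sea-orm code) shows why the `id DESC` tiebreaker makes paging deterministic:

```rust
#[derive(Debug)]
struct Row {
    id: i64,
    completed_at: i64,
}

fn main() {
    let mut rows = vec![
        Row { id: 1, completed_at: 100 },
        Row { id: 3, completed_at: 100 },
        Row { id: 2, completed_at: 50 },
    ];
    // Mirrors `ORDER BY completed_at DESC, id DESC`: the id tiebreaker keeps
    // OFFSET paging from skipping or duplicating rows when timestamps collide.
    rows.sort_by(|a, b| b.completed_at.cmp(&a.completed_at).then(b.id.cmp(&a.id)));
    assert_eq!(rows.iter().map(|r| r.id).collect::<Vec<_>>(), vec![3, 1, 2]);
}
```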
Sequence Diagram

```mermaid
sequenceDiagram
participant FE as Frontend (Vue)
participant IPC as tauri_ipc.rs
participant QB as QueryBus
participant CB as CommandBus
participant Repo as SqliteHistoryRepo
FE->>IPC: history_list(dateFrom, dateTo, hostname, sort, limit, offset)
IPC->>QB: handle_list_history(ListHistoryQuery)
QB->>Repo: list(filter, sort, limit, offset)
Note over Repo: SQL date filter → hostname match in Rust → paginate
Repo-->>FE: Vec<HistoryViewDto>
FE->>IPC: history_search(q)
IPC->>QB: handle_search_history(SearchHistoryQuery)
QB->>Repo: search(query)
Note over Repo: Fetch most-recent 500 rows, filter in Rust
Repo-->>FE: Vec<HistoryViewDto>
FE->>IPC: history_export(format, path)
IPC->>CB: handle_export_history(ExportHistoryCommand)
loop Paginate all rows (500/page)
CB->>Repo: list(None, None, 500, offset)
Repo-->>CB: page
end
Note over CB: encode CSV/JSON → fs::write(path)
CB-->>FE: exported count
FE->>IPC: history_purge_older_than(days)
Note over IPC: Guard days==0, compute cutoff = now - days*86400
IPC->>CB: handle_purge_history(PurgeHistoryCommand)
CB->>Repo: delete_older_than(before_timestamp)
Repo-->>FE: rows removed
```
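The `days == 0` guard and cutoff arithmetic in the diagram's purge note can be sketched as follows — a minimal stand-in for the real `tauri_ipc.rs` handler, with an assumed error message:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Reject days == 0 up front (it would purge everything), then compute
// cutoff = now - days * 86_400 in seconds, saturating instead of wrapping.
fn purge_cutoff(days: u64) -> Result<u64, String> {
    if days == 0 {
        return Err("days must be >= 1".into());
    }
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map_err(|e| e.to_string())?
        .as_secs();
    Ok(now.saturating_sub(days.saturating_mul(86_400)))
}
```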
Prompt To Fix All With AI
This is a comment left during a code review.
Path: src-tauri/src/application/test_support.rs
Line: 37-52
Comment:
**Duplicated `host_component` / `extract_host` function**
`host_component` here is a verbatim copy of `extract_host` in `history_repo.rs`. Since both functions implement the same host-parsing logic, a bug fixed in the production implementation won't automatically fix the test double. Any future change to `extract_host` (e.g. handling edge cases like `file://` or data URLs) will need to be mirrored here manually.
Consider extracting `extract_host` into a shared utility crate/module (e.g. `crate::domain::util`) so that both the SQLite adapter and the in-memory test double call the same code.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (5) — last reviewed commit: "fix: address PR review comments".
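A shared helper along the lines the comment proposes might look like this sketch. The userinfo, port and IPv6 handling are assumptions (the thread only confirms the `[::1]:8080` → `[::1]` case later), not the PR's actual `extract_host` body:

```rust
// One shared host parser, usable by both the SQLite adapter and the
// in-memory test double, so a bugfix lands in a single place.
fn extract_host(url: &str) -> Option<String> {
    let (_scheme, rest) = url.split_once("://")?;
    // Authority ends at the first path, query or fragment delimiter.
    let authority = rest.split(|c| matches!(c, '/' | '?' | '#')).next()?;
    // Drop userinfo if present.
    let host_port = authority.rsplit_once('@').map_or(authority, |(_, h)| h);
    if host_port.is_empty() {
        return None;
    }
    // Keep IPv6 literals intact: `[::1]:8080` -> `[::1]`.
    let host = if host_port.starts_with('[') {
        let end = host_port.find(']')?;
        &host_port[..=end]
    } else {
        host_port.split(':').next()?
    };
    Some(host.to_ascii_lowercase())
}

fn main() {
    assert_eq!(extract_host("https://Foo.com/a?b#c"), Some("foo.com".into()));
    assert_eq!(extract_host("http://[::1]:8080/x"), Some("[::1]".into()));
    assert_eq!(extract_host("not a url"), None);
}
```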
Actionable comments posted: 8
🧹 Nitpick comments (3)
src-tauri/src/application/command_bus.rs (1)
34-48: Consider replacing positional constructor args with a dependencies struct. With the new port, `CommandBus::new` is getting error-prone to maintain. A `CommandBusDeps` struct (or builder) would reduce wiring mistakes in future additions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/command_bus.rs` around lines 34 - 48, CommandBus::new currently takes many positional arguments which is error-prone; introduce a CommandBusDeps struct (or builder) to hold all dependencies (download_repo, download_engine, event_bus, file_storage, http_client, plugin_loader, config_store, credential_store, clipboard_observer, archive_extractor, history_repo, plugin_store_client) and change the constructor signature to accept CommandBusDeps (or a builder) instead of the long positional list; update all call sites to construct and pass CommandBusDeps and adjust internal field assignments in CommandBus::new to copy from the deps struct.

src-tauri/src/application/queries/search_history.rs (1)
17-128: Extract the repeated query-test harness into `application::test_support`. This file reintroduces the same `Noop*` fixtures and `make_bus()` wiring that now also exists in `get_history_entry.rs`, even though this PR already added shared history test support. Keeping them inline means every port change will keep forcing edits across unrelated handler tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/queries/search_history.rs` around lines 17 - 128, The repeated test harness (NoopExtractor, NoopDownloadRead, NoopStats, NoopPluginRead and make_bus) should be moved into the shared application::test_support module and reused: delete these duplicated structs and the make_bus function from this file and import the shared fixtures and helper (e.g. use crate::application::test_support::{NoopExtractor, NoopDownloadRead, NoopStats, NoopPluginRead, make_bus};), ensuring make_bus still returns QueryBus wired with Arc<dyn HistoryRepository> and referencing QueryBus and HistoryRepository types so tests compile against the central test support instead of repeating the implementations.

src-tauri/src/application/test_support.rs (1)
122-146: Implement `sort` in the shared history test double. `InMemoryHistoryRepo::list` currently ignores the `sort` argument, so any handler test built on this helper can only exercise the default ordering. That makes the new `ListHistoryQuery.sort` surface effectively untestable at the application layer and lets sort-wiring regressions slip through. (A minimal sketch follows the agent prompt below.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/application/test_support.rs` around lines 122 - 146, In InMemoryHistoryRepo::list, the _sort parameter is ignored; update the function to apply the requested ordering before pagination by matching on the sort argument (the HistorySort enum) and sorting the filtered Vec<HistoryEntry> accordingly (e.g., sort_by_key on completed_at with std::cmp::Reverse for descending, or ascending without Reverse; if HistorySort supports other fields, sort by those fields similarly). Ensure you perform this sort on the filtered variable (after collect and before calculating start/take) so ListHistoryQuery.sort behavior is exercised in tests.
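A minimal sketch of honoring `sort` in the double, assuming a `HistorySort` enum shaped like the one the prompt describes:

```rust
use std::cmp::Reverse;

// Assumed enum shape; the real HistorySort may carry more fields.
enum HistorySort {
    CompletedAtDesc,
    CompletedAtAsc,
}

struct HistoryEntry {
    completed_at: i64,
}

fn apply_sort(filtered: &mut [HistoryEntry], sort: Option<HistorySort>) {
    // Sort the filtered rows before pagination so offset/take see the
    // requested order, mirroring the SQLite adapter's contract.
    match sort.unwrap_or(HistorySort::CompletedAtDesc) {
        HistorySort::CompletedAtDesc => filtered.sort_by_key(|e| Reverse(e.completed_at)),
        HistorySort::CompletedAtAsc => filtered.sort_by_key(|e| e.completed_at),
    }
}
```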
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CHANGELOG.md`:
- Line 12: The changelog line mentions the DTO field as entry_id but the
UI-facing history view DTO uses entryId; update the CHANGELOG.md entry to use
the actual identifier name entryId (and optionally note both names if you want
backwards-compatibility context) so consumers and testers see the correct
payload shape; look for the sentence referencing HistoryEntry and replace
entry_id with entryId and ensure the phrasing matches the History view DTO
naming.
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 101-110: The hostname filter currently builds a LIKE on
history::Column::Url (Url LIKE "%{host}%"), which matches host strings anywhere
in the URL; change the logic to compare the request’s actual hostname instead:
if your schema has a persisted hostname column (e.g.
history::Column::SourceHostname or equivalent) use an equality or
hostname-contains clause against that column; otherwise, parse
history::Column::Url into a URL host and compare that host (using
url::Url::parse on the stored URL before adding the condition) rather than doing
a substring match. Update the block that references filter.hostname and
history::Column::Url so it uses the hostname column or parsed host comparison
(and skip empty host), ensuring you add the condition with .eq or the
parsed-host comparison instead of Url.like("%...%").
- Around line 132-145: The search() implementation only escapes '%' but relies
on backslash being an escape character which SQLite doesn't use by default and
also fails to escape '_' as a wildcard; update the code that builds the like
pattern (the like variable) to escape both '%' and '_' by prefixing them with a
chosen escape char (e.g., '\') and then ensure the generated SQL uses an ESCAPE
'\' clause for the LIKE comparisons (or replace the LIKE-based filters with a
safe substring predicate such as instr(column, pattern) if you prefer not to use
ESCAPE). Locate the search function and the construction of like, the
Condition::any with history::Column::FileName/Url/DestinationPath.like(...), and
either append the ESCAPE clause to those LIKE expressions or switch to instr()
so the query no longer treats '_' as a wildcard.
In `@src-tauri/src/adapters/driving/tauri_ipc.rs`:
- Around line 2010-2022: In history_purge_older_than, reject days == 0 before
calling current_unix_seconds and building the PurgeHistoryCommand: add a guard
at the start of history_purge_older_than that returns an Err(String) (e.g.,
"days must be >= 1") when days is zero (or otherwise validate days >= 1), so you
never compute cutoff = now.saturating_sub(...) or pass before_timestamp equal to
now to command_bus.handle_purge_history; keep the existing error mapping
(.map_err(|e| e.to_string())) for downstream failures.
- Around line 1891-1905: The build_history_filter function currently treats an
empty hostname string as a valid filter; change it to normalize hostname by
converting Some("") to None before deciding if a HistoryFilter should be
returned. Specifically, in build_history_filter, map or filter the hostname
parameter (e.g., let hostname = hostname.filter(|h| !h.is_empty()) or
hostname.and_then(|h| if h.trim().is_empty() { None } else { Some(h) })) and
then use that cleaned hostname when constructing HistoryFilter and when checking
whether to return None or Some(...), so empty or whitespace-only hostnames
behave like “no filter.”
In `@src-tauri/src/application/commands/export_history.rs`:
- Around line 37-67: The CSV exporter must neutralize spreadsheet formulas:
update escape_csv_field (or create a small wrapper used by encode_csv) to check
if the input starts with any of the dangerous characters ['=', '+', '-', '@']
and, if so, prefix the cell value with a single quote (') before applying the
existing quote/escape logic; then ensure encode_csv uses this neutralizing
escape for the file_name, url, and destination_path fields of HistoryEntry so
exported CSV cells cannot be interpreted as formulas by Excel/Sheets.
In `@src-tauri/src/domain/model/views.rs`:
- Around line 100-101: The doc for the View field `hostname` currently promises
a case-insensitive hostname match but the SQLite adapter in `history_repo.rs`
uses a `url LIKE '%{host}%'` filter that matches path/query segments; update the
adapter to parse the URL and filter by the extracted hostname (case-insensitive
compare or normalized lowercased equality) instead of doing a LIKE on the full
URL, or alternatively adjust the `hostname` docstring in `views.rs` to state it
is a substring match against the full URL; reference the `hostname` field in
`views.rs` and the URL filtering logic in `history_repo.rs` when making the
change.
In `@src-tauri/src/domain/ports/driven/history_repository.rs`:
- Around line 24-39: The list and search methods allow unbounded results; add
server-side caps and pagination: introduce a constant (e.g.
HISTORY_MAX_PAGE_SIZE) and enforce it inside the implementations of list and
search so that when limit is None you use HISTORY_MAX_PAGE_SIZE, and when a
provided limit exceeds HISTORY_MAX_PAGE_SIZE you clamp it down; additionally
extend the search API to accept pagination parameters (add limit: Option<usize>,
offset: Option<usize> to the search signature) or at minimum apply the same
HISTORY_MAX_PAGE_SIZE default and offset support so search also returns at most
the capped page; update callers of list/search and any code handling
HistoryEntry/DomainError accordingly (refer to functions list, search, and types
HistoryFilter, HistorySort, HistoryEntry, DomainError).
---
Nitpick comments:
In `@src-tauri/src/application/command_bus.rs`:
- Around line 34-48: CommandBus::new currently takes many positional arguments
which is error-prone; introduce a CommandBusDeps struct (or builder) to hold all
dependencies (download_repo, download_engine, event_bus, file_storage,
http_client, plugin_loader, config_store, credential_store, clipboard_observer,
archive_extractor, history_repo, plugin_store_client) and change the constructor
signature to accept CommandBusDeps (or a builder) instead of the long positional
list; update all call sites to construct and pass CommandBusDeps and adjust
internal field assignments in CommandBus::new to copy from the deps struct.
In `@src-tauri/src/application/queries/search_history.rs`:
- Around line 17-128: The repeated test harness (NoopExtractor,
NoopDownloadRead, NoopStats, NoopPluginRead and make_bus) should be moved into
the shared application::test_support module and reused: delete these duplicated
structs and the make_bus function from this file and import the shared fixtures
and helper (e.g. use crate::application::test_support::{NoopExtractor,
NoopDownloadRead, NoopStats, NoopPluginRead, make_bus};), ensuring make_bus
still returns QueryBus wired with Arc<dyn HistoryRepository> and referencing
QueryBus and HistoryRepository types so tests compile against the central test
support instead of repeating the implementations.
In `@src-tauri/src/application/test_support.rs`:
- Around line 122-146: In InMemoryHistoryRepo::list, the _sort parameter is
ignored; update the function to apply the requested ordering before pagination
by matching on the sort argument (the HistorySort enum) and sorting the filtered
Vec<HistoryEntry> accordingly (e.g., sort_by_key on completed_at with
std::cmp::Reverse for descending, or ascending without Reverse; if HistorySort
supports other fields, sort by those fields similarly). Ensure you perform this
sort on the filtered variable (after collect and before calculating start/take)
so ListHistoryQuery.sort behavior is exercised in tests.
📒 Files selected for processing (46)
- CHANGELOG.md
- src-tauri/src/adapters/driven/mod.rs
- src-tauri/src/adapters/driven/plugin/registry.rs
- src-tauri/src/adapters/driven/sqlite/history_repo.rs
- src-tauri/src/adapters/driven/stats/in_memory_stats_repo.rs
- src-tauri/src/adapters/driven/stats/mod.rs
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/application/command_bus.rs
- src-tauri/src/application/commands/cancel_download.rs
- src-tauri/src/application/commands/clear_downloads_by_state.rs
- src-tauri/src/application/commands/delete_history.rs
- src-tauri/src/application/commands/export_history.rs
- src-tauri/src/application/commands/install_plugin.rs
- src-tauri/src/application/commands/mod.rs
- src-tauri/src/application/commands/pause_all.rs
- src-tauri/src/application/commands/pause_download.rs
- src-tauri/src/application/commands/purge_history.rs
- src-tauri/src/application/commands/register_local_file.rs
- src-tauri/src/application/commands/remove_download.rs
- src-tauri/src/application/commands/resume_all.rs
- src-tauri/src/application/commands/resume_download.rs
- src-tauri/src/application/commands/retry_download.rs
- src-tauri/src/application/commands/set_priority.rs
- src-tauri/src/application/commands/start_download.rs
- src-tauri/src/application/commands/toggle_clipboard.rs
- src-tauri/src/application/commands/toggle_plugin.rs
- src-tauri/src/application/commands/uninstall_plugin.rs
- src-tauri/src/application/commands/update_config.rs
- src-tauri/src/application/mod.rs
- src-tauri/src/application/queries/count_by_state.rs
- src-tauri/src/application/queries/get_download_detail.rs
- src-tauri/src/application/queries/get_downloads.rs
- src-tauri/src/application/queries/get_history_entry.rs
- src-tauri/src/application/queries/list_history.rs
- src-tauri/src/application/queries/list_plugins.rs
- src-tauri/src/application/queries/mod.rs
- src-tauri/src/application/queries/search_history.rs
- src-tauri/src/application/query_bus.rs
- src-tauri/src/application/read_models/history_view.rs
- src-tauri/src/application/test_support.rs
- src-tauri/src/domain/model/config.rs
- src-tauri/src/domain/model/views.rs
- src-tauri/src/domain/ports/driven/history_repository.rs
- src-tauri/src/domain/ports/driven/tests.rs
- src-tauri/src/lib.rs
- src-tauri/tests/app_state_wiring.rs
💤 Files with no reviewable changes (3)
- src-tauri/src/adapters/driven/stats/mod.rs
- src-tauri/src/adapters/driven/mod.rs
- src-tauri/src/adapters/driven/stats/in_memory_stats_repo.rs
> ### Added
>
> - History IPC surface: queries `history_list(date_from, date_to, hostname, sort_field, sort_direction, limit, offset)`, `history_search(q)`, `history_get_by_id(id)` and commands `history_export(format, path)` (CSV RFC 4180 or JSON pretty-printed), `history_delete_entry(id)`, `history_clear`, `history_purge_older_than(days)`. `HistoryEntry` now carries the primary key as `entry_id` in the DTO so the frontend can target individual rows. The `HistoryRepository` port gained `list`, `search`, `find_by_id`, `delete_by_id` and `delete_all`, implemented by `SqliteHistoryRepo`. (task 01)
Use the actual history identifier name in the changelog.
This entry documents the DTO field as `entry_id` on `HistoryEntry`, but the new UI-facing field is `entryId` on the history view DTO. Leaving it as-is will send manual testers and downstream consumers to the wrong payload shape.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CHANGELOG.md` at line 12, The changelog line mentions the DTO field as
entry_id but the UI-facing history view DTO uses entryId; update the
CHANGELOG.md entry to use the actual identifier name entryId (and optionally
note both names if you want backwards-compatibility context) so consumers and
testers see the correct payload shape; look for the sentence referencing
HistoryEntry and replace entry_id with entryId and ensure the phrasing matches
the History view DTO naming.
```rust
/// List history entries with optional filter, sort and pagination.
///
/// Defaults: sort by `completed_at DESC`, no pagination.
fn list(
    &self,
    filter: Option<HistoryFilter>,
    sort: Option<HistorySort>,
    limit: Option<usize>,
    offset: Option<usize>,
) -> Result<Vec<HistoryEntry>, DomainError>;

/// Full-text search across file name, URL and destination path.
///
/// Returns entries where any of those columns contain `query`
/// (case-insensitive).
fn search(&self, query: &str) -> Result<Vec<HistoryEntry>, DomainError>;
```
Bound history query sizes to avoid unbounded IPC payloads.
Line 26 and Line 39 effectively allow unbounded reads (list defaults to no pagination, search has no pagination at all). Large history tables can cause high memory usage and slow IPC responses. Add a server-side cap and/or pagination to search, and enforce a default max for list.
Suggested API adjustment
```diff
- /// Full-text search across file name, URL and destination path.
+ /// Search across file name, URL and destination path.
+ /// Implementations should enforce bounded results.
  ///
- /// Returns entries where any of those columns contain `query`
- /// (case-insensitive).
- fn search(&self, query: &str) -> Result<Vec<HistoryEntry>, DomainError>;
+ /// Returns entries where any of those columns contain `query`
+ /// (case-insensitive), with pagination.
+ fn search(
+     &self,
+     query: &str,
+     limit: Option<usize>,
+     offset: Option<usize>,
+ ) -> Result<Vec<HistoryEntry>, DomainError>;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src-tauri/src/domain/ports/driven/history_repository.rs` around lines 24 -
39, The list and search methods allow unbounded results; add server-side caps
and pagination: introduce a constant (e.g. HISTORY_MAX_PAGE_SIZE) and enforce it
inside the implementations of list and search so that when limit is None you use
HISTORY_MAX_PAGE_SIZE, and when a provided limit exceeds HISTORY_MAX_PAGE_SIZE
you clamp it down; additionally extend the search API to accept pagination
parameters (add limit: Option<usize>, offset: Option<usize> to the search
signature) or at minimum apply the same HISTORY_MAX_PAGE_SIZE default and offset
support so search also returns at most the capped page; update callers of
list/search and any code handling HistoryEntry/DomainError accordingly (refer to
functions list, search, and types HistoryFilter, HistorySort, HistoryEntry,
DomainError).
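One possible clamp for the cap discussed above, shown with the constant name the later commits settle on (`MAX_HISTORY_PAGE_SIZE` = 500); a sketch, not the repository's actual code:

```rust
const MAX_HISTORY_PAGE_SIZE: usize = 500;

// None defaults to the cap; larger requests are clamped down to it, so no
// single IPC call can return an unbounded page.
fn effective_limit(limit: Option<usize>) -> usize {
    limit.unwrap_or(MAX_HISTORY_PAGE_SIZE).min(MAX_HISTORY_PAGE_SIZE)
}
```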
7 issues found across 46 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/sqlite/history_repo.rs">
<violation number="1" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:109">
P2: The hostname filter matches against the entire URL with `LIKE "%{host}%"`, so it will produce false positives when the hostname string appears in the path, query string, or fragment (e.g., filtering by `foo.com` would also match `https://example.com/?redirect=https://foo.com/bar`). Parse the URL and compare against the extracted host, or match against a persisted `source_hostname` column instead.</violation>
<violation number="2" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:133">
P2: `history.search` uses LIKE escaping incorrectly: `%` is rewritten but no `ESCAPE` clause is applied, so wildcard characters in user input are not handled as literals.</violation>
</file>
<file name="src-tauri/src/application/queries/list_history.rs">
<violation number="1" location="src-tauri/src/application/queries/list_history.rs:40">
P3: This test adds substantial duplicate test-double and bus-construction code already present in sibling query tests; extract/reuse a shared query test support helper instead of copying another full `Noop*` block.</violation>
</file>
<file name="src-tauri/src/application/commands/export_history.rs">
<violation number="1" location="src-tauri/src/application/commands/export_history.rs:38">
P1: CSV export is vulnerable to spreadsheet formula injection because leading formula characters are not neutralized before writing user-controlled fields.</violation>
</file>
<file name="src-tauri/src/application/queries/get_history_entry.rs">
<violation number="1" location="src-tauri/src/application/queries/get_history_entry.rs:44">
P3: This test adds another full copy of query-bus noop adapters already duplicated in other history query tests; extract shared query test support instead of repeating these structs/impls again.</violation>
</file>
<file name="src-tauri/src/application/queries/search_history.rs">
<violation number="1" location="src-tauri/src/application/queries/search_history.rs:37">
P3: This test module duplicates the same Noop query-bus stubs used in other history query tests; extract/shared test support should be reused to avoid drift across handlers.</violation>
</file>
<file name="src-tauri/src/adapters/driving/tauri_ipc.rs">
<violation number="1" location="src-tauri/src/adapters/driving/tauri_ipc.rs:2015">
P0: When `days == 0`, `cutoff` equals `now`, so `delete_older_than(now)` will delete nearly the entire history table. Reject `days == 0` to prevent an accidental full wipe via a bad UI default or malformed IPC call.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- `history_purge_older_than`: reject days == 0 (would wipe the whole table)
- `history_list`: blank/whitespace hostname collapses to "no filter" so an empty input field shows all rows instead of none (see the normalization sketch below)
- Hostname filter now compares against the URL's parsed host, exactly and case-insensitive, instead of a substring anywhere in the URL. Stops false positives like `?next=https://foo.com/` matching host `foo.com`
- Search drops the raw SQL LIKE and filters case-insensitively in Rust, so `_` and `%` in user input are treated as literals (no silent wildcard matches, no broken ESCAPE clause)
- Both list and search cap results at 500 rows per call (MAX_HISTORY_PAGE_SIZE / MAX_HISTORY_SEARCH_RESULTS) so the IPC payload stays bounded regardless of history size
- CSV export guards spreadsheet formula prefixes (=, +, -, @, TAB, CR) with a leading apostrophe per OWASP CSV-injection guidance
- InMemoryHistoryRepo / domain port test fakes now honor the sort field and compare hostnames case-insensitively, matching the SQLite adapter
- Extract make_history_query_bus test helper into application::test_support and drop the duplicated NoopDownloadRead/NoopStats/NoopPluginRead/NoopExtractor blocks from list_history, search_history and get_history_entry suites
- Changelog reflects the real DTO field name (`entryId`) and documents the new safety caps / days>=1 requirement
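The blank-hostname collapse could be implemented roughly as below — a sketch with an illustrative function name; the real `build_history_filter` in `tauri_ipc.rs` takes more parameters:

```rust
// Collapse Some("") / Some("   ") to None so an empty hostname input
// behaves like "no filter" instead of matching nothing.
fn normalize_hostname(hostname: Option<String>) -> Option<String> {
    hostname.and_then(|h| {
        let trimmed = h.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}
```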
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 165-186: The current hostname_filter branch limits the SQL query
to MAX_HISTORY_PAGE_SIZE then applies matches_hostname_filter, which can miss
matches beyond the first SQL page; change history_list to either (preferred)
loop-page the SQL query (using query.limit(...) and query.offset(...) or
incrementing an internal offset) and keep fetching ordered rows until you've
collected at least effective_offset + effective_limit hostname-matching entries,
then apply the Rust skip(effective_offset).take(effective_limit) on that
collected matches, or alternatively persist a normalized host column and apply
the hostname filter directly in SQL; update the code path involving
hostname_filter, MAX_HISTORY_PAGE_SIZE, query, model_to_entry,
matches_hostname_filter, effective_offset, and effective_limit to implement the
chosen approach.
In `@src-tauri/src/application/commands/export_history.rs`:
- Around line 17-18: The export currently calls history_repo().list(None, None,
None, None) which is capped at MAX_HISTORY_PAGE_SIZE and will silently truncate
exports; update export_history.rs to either (A) page through
HistoryRepository::list by repeatedly calling history_repo().list(next_cursor,
None, None, None) (or equivalent pagination parameters) and accumulating results
until a page smaller than MAX_HISTORY_PAGE_SIZE is returned, or (B) add a
dedicated uncapped repository method (e.g.,
HistoryRepository::list_all_for_export) and use that to retrieve all entries
before converting to CSV/JSON; ensure you replace the single-use entries =
self.history_repo().list(...) usage with the chosen approach and preserve
existing error handling.
In `@src-tauri/src/application/test_support.rs`:
- Around line 197-210: The test double method search currently scans the whole
store and treats "" as a match; update the mock search (function search on the
test repo that accesses self.entries and returns Vec<HistoryEntry>) to mirror
SqliteHistoryRepo: if query is empty return Ok(vec![]), otherwise only examine
the most recent MAX_HISTORY_SEARCH_RESULTS entries (take the tail of the entries
vector in reverse/insertion order) and perform the same field contains checks on
file_name, url, and destination_path before cloning and collecting results;
reference MAX_HISTORY_SEARCH_RESULTS, search, self.entries, and HistoryEntry to
locate where to change behavior.
📒 Files selected for processing (11)
- CHANGELOG.md
- src-tauri/src/adapters/driven/sqlite/history_repo.rs
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/application/commands/export_history.rs
- src-tauri/src/application/queries/get_history_entry.rs
- src-tauri/src/application/queries/list_history.rs
- src-tauri/src/application/queries/search_history.rs
- src-tauri/src/application/test_support.rs
- src-tauri/src/domain/model/views.rs
- src-tauri/src/domain/ports/driven/history_repository.rs
- src-tauri/src/domain/ports/driven/tests.rs
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (3)
- src-tauri/src/application/queries/search_history.rs
- src-tauri/src/domain/model/views.rs
- src-tauri/src/application/queries/get_history_entry.rs
3 issues found across 11 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/application/test_support.rs">
<violation number="1" location="src-tauri/src/application/test_support.rs:42">
P3: Hostname parsing incorrectly truncates IPv6 hosts because it splits on the first colon.</violation>
</file>
<file name="src-tauri/src/adapters/driven/sqlite/history_repo.rs">
<violation number="1" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:167">
P2: Hostname-filtered pagination is incorrect because rows are capped before host filtering, so offsets/pages can miss valid matches.</violation>
</file>
<file name="src-tauri/src/domain/ports/driven/history_repository.rs">
<violation number="1" location="src-tauri/src/domain/ports/driven/history_repository.rs:13">
P1: `history_export` silently truncates at 500 rows. The `list(None, None, None, None)` call in `handle_export_history` is contractually capped to `MAX_HISTORY_PAGE_SIZE` (500) per the trait doc added here. Any user with more than 500 history entries receives a silently incomplete export file — no error, no warning. Either loop through `list` pages until a short page is returned, or add a dedicated uncapped repository method for export.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- `history_export` paginates through `list()` 500 rows at a time so the file contains every row even when the table holds more than one page worth. Before, the export silently truncated at MAX_HISTORY_PAGE_SIZE. (See the pagination sketch below.)
- `SqliteHistoryRepo::list` with a hostname filter now pages the SQL query internally until it accumulates enough post-filter matches (or the table is exhausted). Previously the first 500 SQL rows were fetched in one go and offsets past that window silently missed valid rows.
- `InMemoryHistoryRepo::search` matches the SQLite adapter's contract: empty query returns no rows, and only the most recent MAX_HISTORY_SEARCH_RESULTS entries are scanned.
- `extract_host` keeps IPv6 literals intact (`[::1]:8080` → `[::1]`). Same fix in the test-support helper.
- Trait doc clarifies that callers needing the full history must page through `list()` until a short page is returned.
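The paginate-until-short-page loop described in the first bullet can be sketched generically; `fetch_page` stands in for `HistoryRepository::list` and `PAGE` for `MAX_HISTORY_PAGE_SIZE`:

```rust
const PAGE: usize = 500;

// Keep requesting full pages until the repo returns a short one, which
// signals that the table is exhausted. Illustrative names throughout.
fn collect_all<E>(
    mut fetch_page: impl FnMut(usize, usize) -> Result<Vec<E>, String>, // (limit, offset)
) -> Result<Vec<E>, String> {
    let mut all = Vec::new();
    let mut offset = 0;
    loop {
        let page = fetch_page(PAGE, offset)?;
        let fetched = page.len();
        all.extend(page);
        if fetched < PAGE {
            return Ok(all); // a short page means no more rows remain
        }
        offset += fetched;
    }
}
```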
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 170-172: The current query built in
history::Entity::find().filter(condition).order_by(column, order) only orders by
the requested column which can make pagination unstable; update the query to
append a deterministic secondary sort key (e.g. history::Column::Id) so ties are
broken consistently — modify the call site that builds base_query (used by
list() and history_export) to add a second .order_by(history::Column::Id,
Order::Asc) (or matching order direction) after the primary .order_by(column,
order) so pagination is stable and rows won’t be skipped or duplicated across
pages.
- Around line 189-223: The code currently accumulates all hostname-matching
HistoryEntry objects into matches up to want_total (effective_offset +
effective_limit), causing O(offset) memory; change the loop to track a separate
counter seen_matches (u64) that increments whenever
matches_hostname_filter(&entry.url, &host) is true, but only push the entry into
matches (Vec<HistoryEntry>) once seen_matches >= effective_offset and until
matches.len() reaches effective_limit; update break conditions to stop when
matches.len() >= effective_limit or when fetched < page_size, and remove
reliance on want_total for pushing—use seen_matches to skip the first
effective_offset matches and only store up to effective_limit entries, keeping
other symbols (base_query, sql_offset, page_size, model_to_entry) unchanged.
📒 Files selected for processing (4)
- src-tauri/src/adapters/driven/sqlite/history_repo.rs
- src-tauri/src/application/commands/export_history.rs
- src-tauri/src/application/test_support.rs
- src-tauri/src/domain/ports/driven/history_repository.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- src-tauri/src/application/commands/export_history.rs
- src-tauri/src/domain/ports/driven/history_repository.rs
- src-tauri/src/application/test_support.rs
1 issue found across 4 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/sqlite/history_repo.rs">
<violation number="1" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:189">
P2: The hostname-filter branch collects up to `offset + limit` matching entries into `matches`, then discards the first `offset` at the end. Since `offset` comes from the IPC caller and is not capped, this retains O(offset) entries in memory even though only `limit` rows are returned. Track a `skipped` counter instead and only start pushing entries once the offset window is consumed.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
- Append `id DESC` as secondary sort in `list()` so pagination stays stable when the primary sort key has ties (e.g. many entries sharing the same completed_at). Without it, OFFSET could skip or duplicate rows across successive pages, which the paginated export loop amplifies.
- Hostname-filter branch tracks a `skipped` counter and only retains up to `limit` matches in memory. Previously it collected `offset + limit` rows and discarded the offset window at the end, growing memory linearly with caller-supplied offsets. (A sketch of the counter approach follows.)
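The `skipped`-counter approach can be sketched over any iterator of candidate rows — names are illustrative, not the adapter's exact code:

```rust
// Consume the offset window by counting matches instead of buffering them,
// so memory stays O(limit) regardless of the caller-supplied offset.
fn page_matches<T>(
    candidates: impl IntoIterator<Item = T>,
    is_match: impl Fn(&T) -> bool,
    offset: usize,
    limit: usize,
) -> Vec<T> {
    if limit == 0 {
        return Vec::new();
    }
    let mut skipped = 0;
    let mut out = Vec::new();
    for item in candidates {
        if !is_match(&item) {
            continue;
        }
        if skipped < offset {
            skipped += 1; // part of the offset window: count it, drop it
            continue;
        }
        out.push(item);
        if out.len() == limit {
            break;
        }
    }
    out
}
```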
Actionable comments posted: 1
🧹 Nitpick comments (2)
src-tauri/src/adapters/driven/sqlite/history_repo.rs (2)
47-53: Consider pre-computing the lowercase hostname outside the hot loop. In `list()`, `matches_hostname_filter` is called for every row in potentially many SQL pages, yet `wanted.to_ascii_lowercase()` always produces the same result since `host` is constant for the entire loop. Pre-computing this before the loop avoids repeated allocations.

♻️ Suggested refactor
```diff
-fn matches_hostname_filter(entry_url: &str, wanted: &str) -> bool {
-    let wanted = wanted.to_ascii_lowercase();
+fn matches_hostname_filter(entry_url: &str, wanted_lower: &str) -> bool {
     match extract_host(entry_url) {
-        Some(host) => host.to_ascii_lowercase() == wanted,
+        Some(host) => host.eq_ignore_ascii_case(wanted_lower),
         None => false,
     }
 }
```

Then in `list()`, pre-compute before the loop:

```rust
let host_lower = host.to_ascii_lowercase();
// ... in loop:
if !matches_hostname_filter(&entry.url, &host_lower) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs` around lines 47 - 53, matches_hostname_filter currently calls wanted.to_ascii_lowercase() on every row which allocates repeatedly; precompute the lowercase wanted once in list() and pass that pre-lowercased string into matches_hostname_filter (or rename param to make intent clear). Specifically, in list() compute let host_lower = host.to_ascii_lowercase() before iterating SQL rows and call matches_hostname_filter(&entry.url, &host_lower), and update matches_hostname_filter to treat its wanted parameter as already-lowercased (remove the to_ascii_lowercase() call and compare host.to_ascii_lowercase() == wanted).
584: Use `i % 4 == 0` instead of `is_multiple_of(4)` for broader compatibility. The `is_multiple_of` method on integer types requires Rust 1.86.0. No MSRV is explicitly defined in `Cargo.toml`, so if the project intends to support older versions, use the traditional idiom `i % 4 == 0`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs` at line 584, Replace the use of the unstable/newer integer method is_multiple_of(4) with the traditional modulus check to maintain MSRV compatibility: change the conditional in the block that sets e.url (the expression using i.is_multiple_of(4)) to use i % 4 == 0 instead, keeping the rest of the assignment logic in the same place (the code where e.url is assigned based on i).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 236-256: In the search method, avoid the unnecessary DB round-trip
when the query is empty by checking query.is_empty() (or needle.is_empty())
before calling block_on/history::Entity::find(); if empty return Ok(Vec::new())
immediately; move the empty-query guard above the block_on call and keep the
rest of the logic that maps models via model_to_entry and filters with
matches_search and map_db_err unchanged (MAX_HISTORY_SEARCH_RESULTS can remain
as-is).
---
Nitpick comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 47-53: matches_hostname_filter currently calls
wanted.to_ascii_lowercase() on every row which allocates repeatedly; precompute
the lowercase wanted once in list() and pass that pre-lowercased string into
matches_hostname_filter (or rename param to make intent clear). Specifically, in
list() compute let host_lower = host.to_ascii_lowercase() before iterating SQL
rows and call matches_hostname_filter(&entry.url, &host_lower), and update
matches_hostname_filter to treat its wanted parameter as already-lowercased
(remove the to_ascii_lowercase() call and compare host.to_ascii_lowercase() ==
wanted).
- Line 584: Replace the use of the unstable/newer integer method
is_multiple_of(4) with the traditional modulus check to maintain MSRV
compatibility: change the conditional in the block that sets e.url (the
expression using i.is_multiple_of(4)) to use i % 4 == 0 instead, keeping the
rest of the assignment logic in the same place (the code where e.url is assigned
based on i).
📒 Files selected for processing (1)
src-tauri/src/adapters/driven/sqlite/history_repo.rs
Move the empty-query short-circuit in search() above the sqlite fetch so a blank needle skips the round-trip entirely instead of pulling up to 500 rows and discarding them.
Summary
Expose the CQRS history surface through Tauri IPC so the frontend can drive the History view, retention purge, re-download flow and Link Grabber duplicate detection. Implements sprint task 01 (P0, 3 SP).
New IPC surface
Queries
- `history_list(dateFrom?, dateTo?, hostname?, sortField?, sortDirection?, limit?, offset?)` → `Vec<HistoryViewDto>`
- `history_search(q)` → `Vec<HistoryViewDto>`
- `history_get_by_id(id)` → `HistoryViewDto`

Commands
- `history_export(format, path)` — writes CSV (RFC 4180) or pretty JSON
- `history_delete_entry(id)`
- `history_clear()` → `u64` rows removed
- `history_purge_older_than(days)` → `u64` rows removed

Under the hood
- `HistoryRepository` port extended with `list`, `search`, `find_by_id`, `delete_by_id`, `delete_all`; implemented by `SqliteHistoryRepo` using sea-orm with date-range / hostname filtering, sort-by-field and pagination.
- `HistoryEntry` now carries the primary key; `HistoryViewDto` exposes it as `entryId` so the UI can target individual rows.
- `CommandBus` gains a `history_repo` port. A shared `application/test_support` module (`NoopHistoryRepo` / `InMemoryHistoryRepo`) removes the per-test trait-stub duplication across all 18 existing handler suites.

Incidental clippy cleanup
Surfaced by `-D warnings` on rustc 1.95:

- `toggle_clipboard`, `registry`: `assert!` replaces `assert_eq!(_, true/false)`
- `register_local_file`: `contains(&event)` instead of `iter().any(|e| *e == …)`
- `config`: `apply_patch` moved above the `#[cfg(test)]` block
- `adapters/driven/stats`: removed the dead in-memory module (nothing imported it)

Verification
- `cargo test --lib` → 666 passed, 4 ignored, 0 failed
- `cargo clippy --all-targets -- -D warnings` → clean
- `cargo fmt --check` → clean
- Coverage: `sqlite/history_repo` 97.6% · `delete_history` 100% · `export_history` 97.4% · `purge_history` 100% · `history_view_dto` 100%

Test plan
- `history_list` filters by date range
- `history_search` finds rows by file name / URL
- `history_export` produces round-trippable CSV and JSON files
- `history_purge_older_than(30)` removes rows older than 30 days
- `history_clear` drains the table

Summary by cubic
Expose the history CQRS surface over Tauri IPC so the frontend can drive the History view, export, retention purge, re-download flow, and Link Grabber duplicate detection. Adds safer host filtering, 500-row caps, CSV injection guard, full export via pagination, and stable pagination with an id tiebreaker (task 01).
New Features
- `history_list(filter/sort/limit/offset)`, `history_search(q)`, `history_get_by_id(id)` (list/search capped at 500 rows).
- `history_export(format, path)` — CSV (RFC 4180 with formula guard) or pretty JSON, `history_delete_entry(id)`, `history_clear()` → `u64`, `history_purge_older_than(days ≥ 1)` → `u64`.
- `HistoryViewDto` includes `entryId` for targeting rows.

Refactors
- `HistoryRepository` extended with `list`, `search`, `find_by_id`, `delete_by_id`, `delete_all`; implemented in `SqliteHistoryRepo` with date range, exact hostname match against the URL host (case-insensitive; blank hostname disables the filter), sort, and pagination. Search treats `%` / `_` literally and is case-insensitive (see the matcher sketch below).
- `history_export` paginates to include all rows; hostname-filtered `list` paginates across DB pages with bounded memory and adds `id DESC` as a secondary sort for stable paging; `extract_host` preserves IPv6; `search` scans only the most recent 500 entries and returns no rows for empty queries, short-circuiting before any DB fetch.
- Removed the dead in-memory stats module and addressed clippy nits.

Written for commit 35a0815. Summary will update on new commits.
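The literal, case-insensitive matcher described above might look like this sketch (field names follow the `HistoryEntry` fields named in the reviews; the function name is illustrative):

```rust
// Lowercase both sides and use substring `contains`, so `%` and `_` in the
// needle carry no wildcard meaning — they only match themselves.
fn matches_search(file_name: &str, url: &str, destination_path: &str, needle: &str) -> bool {
    let needle = needle.to_lowercase();
    [file_name, url, destination_path]
        .iter()
        .any(|field| field.to_lowercase().contains(&needle))
}
```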
Summary by CodeRabbit
New Features
Bug Fixes / Safety
Tests
Documentation