
feat(core): history queries/commands IPC handlers (task 01) #93

Merged
mpiton merged 5 commits into main from feat/task-01-history-ipc
Apr 23, 2026

Conversation

Owner

@mpiton mpiton commented Apr 22, 2026

Summary

Expose the CQRS history surface through Tauri IPC so the frontend can drive the History view, retention purge, re-download flow and Link Grabber duplicate detection. Implements sprint task 01 (P0, 3 SP).

New IPC surface

Queries

  • history_list(dateFrom?, dateTo?, hostname?, sortField?, sortDirection?, limit?, offset?) → Vec<HistoryViewDto>
  • history_search(q) → Vec<HistoryViewDto>
  • history_get_by_id(id) → HistoryViewDto

Commands

  • history_export(format, path) — writes CSV (RFC 4180) or pretty JSON
  • history_delete_entry(id)
  • history_clear() → u64 rows removed
  • history_purge_older_than(days) → u64 rows removed
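
The `history_purge_older_than(days)` command translates a day count into a Unix-seconds cutoff; the review threads below note the `days == 0` guard and the saturating arithmetic. A minimal std-only sketch (the function name and error shape here are illustrative, not the actual implementation):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Sketch: turn a `days` argument into the Unix-seconds cutoff passed
/// to the purge handler. Rejects `days == 0` so a purge can never
/// target "everything up to now".
fn purge_cutoff(days: u64) -> Result<u64, String> {
    if days == 0 {
        return Err("days must be >= 1".into());
    }
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map_err(|e| e.to_string())?
        .as_secs();
    // saturating_sub keeps the cutoff at 0 instead of underflowing
    // for absurdly large `days` values.
    Ok(now.saturating_sub(days.saturating_mul(86_400)))
}
```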

Under the hood

  • HistoryRepository port extended with list, search, find_by_id, delete_by_id, delete_all; implemented by SqliteHistoryRepo using sea-orm with date-range / hostname filtering, sort-by-field and pagination.
  • HistoryEntry now carries the primary key; HistoryViewDto exposes it as entryId so the UI can target individual rows.
  • CommandBus gains a history_repo port. A shared application/test_support module (NoopHistoryRepo / InMemoryHistoryRepo) removes the per-test trait-stub duplication across all 18 existing handler suites.

Incidental clippy cleanup

Surfaced by -D warnings on rustc 1.95:

  • toggle_clipboard, registry: assert! replaces assert_eq!(_, true/false)
  • register_local_file: contains(&event) instead of iter().any(|e| *e == …)
  • config: apply_patch moved above #[cfg(test)] block
  • adapters/driven/stats: removed the dead in-memory module (nothing imported it)

Verification

  • cargo test --lib → 666 passed, 4 ignored, 0 failed
  • cargo clippy --all-targets -- -D warnings → clean
  • cargo fmt --check → clean
  • Coverage (lines): sqlite/history_repo 97.6% · delete_history 100% · export_history 97.4% · purge_history 100% · history_view_dto 100%

Test plan


Summary by cubic

Expose the history CQRS surface over Tauri IPC so the frontend can drive the History view, export, retention purge, re-download flow, and Link Grabber duplicate detection. Adds safer host filtering, 500-row caps, CSV injection guard, full export via pagination, and stable pagination with an id tiebreaker (task 01).

  • New Features

    • Queries: history_list(filter/sort/limit/offset), history_search(q), history_get_by_id(id) (list/search capped at 500 rows).
    • Commands: history_export(format, path) — CSV (RFC 4180 with formula guard) or pretty JSON, history_delete_entry(id), history_clear() → u64, history_purge_older_than(days ≥ 1) → u64.
    • HistoryViewDto includes entryId for targeting rows.
  • Refactors

    • Extended HistoryRepository with list, search, find_by_id, delete_by_id, delete_all; implemented in SqliteHistoryRepo with date range, exact hostname match against the URL host (case-insensitive; blank hostname disables the filter), sort, and pagination. Search treats %/_ literally and is case-insensitive.
    • Correctness/perf: history_export paginates to include all rows; hostname-filtered list paginates across DB pages with bounded memory and adds id DESC as a secondary sort for stable paging; extract_host preserves IPv6; search scans only the most recent 500 entries and returns no rows for empty queries, short-circuiting before any DB fetch. Removed the dead in-memory stats module and addressed clippy nits.
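
The CSV formula guard mentioned above can be sketched in plain Rust: an RFC 4180 field escape plus the OWASP-style single-quote prefix for cells a spreadsheet would evaluate. The function body is an illustrative assumption, not the PR's actual code:

```rust
/// Sketch of an RFC 4180 field escape with a formula-injection guard.
/// Cells starting with =, +, - or @ get a leading single quote so
/// spreadsheets treat them as text; then embedded quotes are doubled
/// and the field is wrapped in quotes when it contains a delimiter.
fn escape_csv_field(raw: &str) -> String {
    // Formula guard: neutralize cells Excel/Sheets would evaluate.
    let guarded = if raw.starts_with(['=', '+', '-', '@']) {
        format!("'{raw}")
    } else {
        raw.to_string()
    };
    // RFC 4180: quote fields containing comma, quote, CR or LF,
    // doubling any embedded double quotes.
    if guarded.contains([',', '"', '\r', '\n']) {
        format!("\"{}\"", guarded.replace('"', "\"\""))
    } else {
        guarded
    }
}
```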

Written for commit 35a0815. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • History IPC: list (filter/sort/limit/offset), search, get-by-id, export (CSV/JSON with quoting and formula-guarding), delete entry, clear all, purge older-than — export writes a file and returns count
    • History views now include entryId; hostname-aware listing and stable pagination
  • Bug Fixes / Safety

    • Rejects purge with days == 0; list/search results capped at 500; empty search returns no results
  • Tests

    • Expanded test coverage for listing, searching, pagination, hostname parsing, export formatting, and deletion behaviors
  • Documentation

    • Unreleased changelog entry added documenting history APIs

Expose CQRS history surface through Tauri IPC:
- queries: history_list, history_search, history_get_by_id
- commands: history_export (CSV RFC 4180 / JSON pretty), history_delete_entry,
  history_clear, history_purge_older_than(days)

HistoryRepository port extended with list/search/find_by_id/delete_by_id/
delete_all — implemented by SqliteHistoryRepo with date-range filtering,
sort-by field and pagination. HistoryEntry now carries the primary key
so the UI can target individual rows (HistoryViewDto exposes entry_id).

CommandBus gains a history_repo port; a shared test_support module
provides NoopHistoryRepo / InMemoryHistoryRepo doubles so each handler
suite does not redeclare trait stubs.

Incidental clippy cleanup surfaced by -D warnings on stable 1.95:
- toggle_clipboard / registry: assert! replacing assert_eq!(_, true/false)
- register_local_file: contains(&event) instead of iter().any(...)
- config: apply_patch moved before #[cfg(test)] block
- adapters/driven/stats: dropped dead in-memory module

Tests: 666 lib pass (4 ignored); new coverage on history_repo 97%,
delete_history 100%, export_history 97% (incl. CSV quoting and pretty-
JSON golden), purge_history 100%, list/search/get_history_entry handlers
exercised end-to-end via QueryBus.
@github-actions github-actions Bot added documentation Improvements or additions to documentation rust labels Apr 22, 2026

coderabbitai Bot commented Apr 22, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 10e0c789-2b91-49c3-ae2e-d52376820425

📥 Commits

Reviewing files that changed from the base of the PR and between 9fc587d and 35a0815.

📒 Files selected for processing (1)
  • src-tauri/src/adapters/driven/sqlite/history_repo.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src-tauri/src/adapters/driven/sqlite/history_repo.rs

📝 Walkthrough

Walkthrough

Adds a full history feature: repository APIs (list/search/find/delete/clear), DTO/read-model changes, SQLite and in-memory repo implementations, CQRS commands/queries and CommandBus/QueryBus handlers, Tauri IPC handlers (list/search/get/export/delete/clear/purge), export formatting, and test wiring updates.

Changes

Cohort / File(s) Summary
Domain model & ports
src-tauri/src/domain/model/views.rs, src-tauri/src/domain/ports/driven/history_repository.rs
Added HistoryEntry.id, HistoryFilter, HistorySortField/HistorySort; added MAX_HISTORY_PAGE_SIZE and MAX_HISTORY_SEARCH_RESULTS; extended HistoryRepository trait with list, search, find_by_id, delete_by_id, delete_all.
SQLite history adapter
src-tauri/src/adapters/driven/sqlite/history_repo.rs
Implemented hostname parsing, stable sorting (secondary id), capped pagination (SQL LIMIT/OFFSET + host-filtered hybrid), search (bounded fetch + Rust substring matching), find_by_id, delete_by_id, delete_all; added tests for hosts/pagination/ties/deletes.
Application commands (handlers)
src-tauri/src/application/commands/export_history.rs, .../delete_history.rs, .../purge_history.rs
Added handlers on CommandBus: handle_export_history (CSV/JSON export, formula-guarding, paging through repo), handle_delete_history_entry, handle_clear_history, handle_purge_history; includes unit tests.
Application queries (handlers)
src-tauri/src/application/queries/list_history.rs, .../search_history.rs, .../get_history_entry.rs
Added handle_list_history, handle_search_history, handle_get_history_entry on QueryBus, delegating to HistoryRepository; added tests for listing, filtering, pagination, search, and not-found behavior.
CQRS types & wiring
src-tauri/src/application/commands/mod.rs, src-tauri/src/application/queries/mod.rs
Added ExportHistoryFormat/Command, DeleteHistoryEntryCommand, ClearHistoryCommand, PurgeHistoryCommand; replaced GetHistoryQuery with ListHistoryQuery; added SearchHistoryQuery, GetHistoryEntryQuery.
Tauri IPC surface
src-tauri/src/adapters/driving/tauri_ipc.rs, src-tauri/src/lib.rs
Registered IPC handlers: history_list, history_search, history_get_by_id, history_export, history_delete_entry, history_clear, history_purge_older_than; added parsing/validation for filters/sort/export formats; wired history_repo into bus.
Test support & in-memory repos
src-tauri/src/application/test_support.rs, src-tauri/src/domain/ports/driven/tests.rs
Added NoopHistoryRepo and InMemoryHistoryRepo with id assignment and implementations for list/search/find/delete/clear/purge; helper make_history_command_bus/make_history_query_bus; expanded tests.
CommandBus / wiring updates
src-tauri/src/application/command_bus.rs, src-tauri/tests/app_state_wiring.rs
Injected history_repo: Arc<dyn HistoryRepository> into CommandBus::new(...); updated tests and integration wiring to pass history repo.
Read-model DTO change
src-tauri/src/application/read_models/history_view.rs
HistoryViewDto now includes entry_id (serializes as entryId); tests updated.
Stats removal & minor cleanup
src-tauri/src/adapters/driven/stats/in_memory_stats_repo.rs, src-tauri/src/adapters/driven/stats/mod.rs, src-tauri/src/adapters/driven/mod.rs, src-tauri/src/adapters/driven/plugin/registry.rs
Removed in-memory stats repo and its module export; removed test-only stats mod; minor test assertion style tweaks.
Widespread test DI updates
many src-tauri/src/application/commands/*.rs, src-tauri/src/application/queries/*.rs, src-tauri/src/application/query_bus.rs
Updated many test helpers to inject NoopHistoryRepo into CommandBus and expanded MockHistoryRepo stubs with the new history methods; small assertion-style adjustments.
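
The "stable sorting (secondary id)" noted for the SQLite adapter can be illustrated std-only: order newest-first on `completed_at`, breaking ties with `id DESC` so two pages never interleave rows that share a timestamp. Field and function names here are assumptions based on the walkthrough:

```rust
/// Minimal stand-ins for the fields the sort touches; the real
/// HistoryEntry carries more data.
#[derive(Debug, PartialEq)]
struct Row {
    id: i64,
    completed_at: i64, // Unix seconds
}

/// Sketch of the stable paging order: completed_at DESC, then id DESC
/// as a tiebreaker so pagination across pages is deterministic.
fn sort_for_paging(rows: &mut [Row]) {
    rows.sort_by(|a, b| {
        b.completed_at
            .cmp(&a.completed_at)
            .then(b.id.cmp(&a.id))
    });
}
```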

Sequence Diagram(s)

sequenceDiagram
  participant Frontend as Frontend (IPC)
  participant Tauri as Tauri IPC
  participant Bus as CommandBus / QueryBus
  participant Repo as HistoryRepository
  participant FS as Filesystem

  rect rgba(200,200,255,0.5)
  Frontend->>Tauri: history_export(format, path)
  Tauri->>Bus: ExportHistoryCommand
  Bus->>Repo: list(filter=None, sort=None, limit=MAX_PAGE, offset=0..)
  Repo-->>Bus: Vec<HistoryEntry>
  Bus->>FS: ensure_parent_dir(path)
  Bus->>FS: write_file(path, encoded_bytes)
  Bus-->>Tauri: Ok(count)
  Tauri-->>Frontend: result
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

frontend

Poem

🐰 I hopped through logs and found a key,
Lists and searches now run free,
IPC whispers, DB replies,
CSV, JSON under rabbit skies,
A tiny export, full of glee.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 41.88% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly and specifically summarizes the main objective: exposing history CQRS operations via Tauri IPC handlers, with '(task 01)' indicating this is the first part of a planned series.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/task-01-history-ipc

Comment @coderabbitai help to get the list of available commands and usage tips.


greptile-apps Bot commented Apr 22, 2026

Greptile Summary

This PR wires the history CQRS surface to Tauri IPC, adding history_list, history_search, history_get_by_id, history_export, history_delete_entry, history_clear, and history_purge_older_than. All previously flagged concerns (LIKE wildcard escaping, hostname filter correctness, silent export truncation, InMemoryHistoryRepo sort fidelity) have been addressed — hostname matching is now done entirely in Rust, export paginates to include all rows, and the test double correctly applies all sort fields.

Confidence Score: 5/5

Safe to merge — all prior P0/P1 concerns are resolved and the only remaining finding is a P2 style suggestion about code duplication.

All previously raised issues (LIKE wildcard escaping, hostname filter, export truncation, InMemoryHistoryRepo sort) are fully addressed. The new implementation is well-tested with high coverage, the pagination is stable thanks to the id DESC tiebreaker, and the CSV injection mitigation follows OWASP guidance. The sole remaining finding is a minor duplication concern in test support code.

src-tauri/src/application/test_support.rs — duplicated host extraction function that could drift from the production version.

Important Files Changed

Filename Overview
src-tauri/src/adapters/driven/sqlite/history_repo.rs Major new impl: adds list, search, find_by_id, delete_by_id, delete_all. Hostname filtering is now done in Rust via extract_host (correctly avoiding LIKE wildcard issues). Stable pagination with id DESC tiebreaker is well-designed. Search fetches most-recent 500 rows and filters in Rust.
src-tauri/src/application/commands/export_history.rs New file. Paginates list() to avoid the prior 500-row truncation. CSV escaping guards formula injection per OWASP guidance; JSON uses serde_json. Accumulates all rows in memory before writing — acceptable tradeoff for a desktop download manager.
src-tauri/src/adapters/driving/tauri_ipc.rs Adds 7 history IPC commands/queries. history_purge_older_than guards days==0. build_history_filter collapses blank strings to None. All new handlers delegate cleanly to the bus with appropriate error mapping.
src-tauri/src/application/test_support.rs New consolidated test-doubles module. InMemoryHistoryRepo correctly implements sort, hostname (via duplicated host_component), pagination, and search consistent with the SQLite adapter's documented contract. host_component is a copy of extract_host and could diverge on bugfixes.
src-tauri/src/domain/ports/driven/history_repository.rs Port extended with list, search, find_by_id, delete_by_id, delete_all. MAX_HISTORY_PAGE_SIZE and MAX_HISTORY_SEARCH_RESULTS constants are defined here and shared by all implementations.
src-tauri/src/application/commands/purge_history.rs Simple new handler that delegates delete_older_than to the repo. IPC layer handles days==0 guard and computes the timestamp.
src-tauri/src/application/commands/delete_history.rs New file. delete_history_entry maps delete_by_id returning false to a NotFound error. clear_history delegates delete_all and returns count.
src-tauri/src/application/read_models/history_view.rs Adds entry_id (primary key) to HistoryViewDto. camelCase serialization confirmed by test. download_id is serialized as a string, which is consistent with the existing pattern.
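
The host extraction that replaced the LIKE-based hostname filter (and that Greptile flags as duplicated between `extract_host` and the test double's `host_component`) might look roughly like this std-only sketch: strip scheme, userinfo, path/query/fragment and port, lowercase the result, and keep bracketed IPv6 hosts intact. The real implementation may differ in edge cases:

```rust
/// Sketch in the spirit of the adapter's `extract_host`: pulls the
/// host component out of a URL string without an external URL crate.
fn extract_host(url: &str) -> Option<String> {
    // Drop the scheme ("https://", "ftp://", ...).
    let rest = url.split_once("://").map_or(url, |(_, r)| r);
    // The authority ends at the first '/', '?' or '#'.
    let authority = rest.split(['/', '?', '#']).next().unwrap_or(rest);
    // Drop userinfo ("user:pass@host").
    let host_port = authority.rsplit_once('@').map_or(authority, |(_, h)| h);
    // IPv6 literals are bracketed: keep everything through ']'.
    let host = if let Some(stripped) = host_port.strip_prefix('[') {
        let end = stripped.find(']')?;
        &host_port[..end + 2] // include both brackets
    } else {
        host_port.split(':').next().unwrap_or(host_port)
    };
    if host.is_empty() {
        None
    } else {
        Some(host.to_ascii_lowercase())
    }
}
```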

Sequence Diagram

sequenceDiagram
    participant FE as Frontend (Vue)
    participant IPC as tauri_ipc.rs
    participant QB as QueryBus
    participant CB as CommandBus
    participant Repo as SqliteHistoryRepo

    FE->>IPC: history_list(dateFrom, dateTo, hostname, sort, limit, offset)
    IPC->>QB: handle_list_history(ListHistoryQuery)
    QB->>Repo: list(filter, sort, limit, offset)
    Note over Repo: SQL date filter → hostname match in Rust → paginate
    Repo-->>FE: Vec<HistoryViewDto>

    FE->>IPC: history_search(q)
    IPC->>QB: handle_search_history(SearchHistoryQuery)
    QB->>Repo: search(query)
    Note over Repo: Fetch most-recent 500 rows, filter in Rust
    Repo-->>FE: Vec<HistoryViewDto>

    FE->>IPC: history_export(format, path)
    IPC->>CB: handle_export_history(ExportHistoryCommand)
    loop Paginate all rows (500/page)
        CB->>Repo: list(None, None, 500, offset)
        Repo-->>CB: page
    end
    Note over CB: encode CSV/JSON → fs::write(path)
    CB-->>FE: exported count

    FE->>IPC: history_purge_older_than(days)
    Note over IPC: Guard days==0, compute cutoff = now - days*86400
    IPC->>CB: handle_purge_history(PurgeHistoryCommand)
    CB->>Repo: delete_older_than(before_timestamp)
    Repo-->>FE: rows removed


Prompt To Fix All With AI
This is a comment left during a code review.
Path: src-tauri/src/application/test_support.rs
Line: 37-52

Comment:
**Duplicated `host_component` / `extract_host` function**

`host_component` here is a verbatim copy of `extract_host` in `history_repo.rs`. Since both functions implement the same host-parsing logic, a bug fixed in the production implementation won't automatically fix the test double. Any future change to `extract_host` (e.g. handling edge cases like `file://` or data URLs) will need to be mirrored here manually.

Consider extracting `extract_host` into a shared utility crate/module (e.g. `crate::domain::util`) so that both the SQLite adapter and the in-memory test double call the same code.

How can I resolve this? If you propose a fix, please make it concise.

Reviews (5): Last reviewed commit: "fix: address PR review comments" | Re-trigger Greptile


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (3)
src-tauri/src/application/command_bus.rs (1)

34-48: Consider replacing positional constructor args with a dependencies struct.

With the new port, CommandBus::new is getting error-prone to maintain. A CommandBusDeps struct (or builder) would reduce wiring mistakes in future additions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/command_bus.rs` around lines 34 - 48,
CommandBus::new currently takes many positional arguments which is error-prone;
introduce a CommandBusDeps struct (or builder) to hold all dependencies
(download_repo, download_engine, event_bus, file_storage, http_client,
plugin_loader, config_store, credential_store, clipboard_observer,
archive_extractor, history_repo, plugin_store_client) and change the constructor
signature to accept CommandBusDeps (or a builder) instead of the long positional
list; update all call sites to construct and pass CommandBusDeps and adjust
internal field assignments in CommandBus::new to copy from the deps struct.
src-tauri/src/application/queries/search_history.rs (1)

17-128: Extract the repeated query-test harness into application::test_support.

This file reintroduces the same Noop* fixtures and make_bus() wiring that now also exists in get_history_entry.rs, even though this PR already added shared history test support. Keeping them inline means every port change will keep forcing edits across unrelated handler tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/queries/search_history.rs` around lines 17 - 128,
The repeated test harness (NoopExtractor, NoopDownloadRead, NoopStats,
NoopPluginRead and make_bus) should be moved into the shared
application::test_support module and reused: delete these duplicated structs and
the make_bus function from this file and import the shared fixtures and helper
(e.g. use crate::application::test_support::{NoopExtractor, NoopDownloadRead,
NoopStats, NoopPluginRead, make_bus};), ensuring make_bus still returns QueryBus
wired with Arc<dyn HistoryRepository> and referencing QueryBus and
HistoryRepository types so tests compile against the central test support
instead of repeating the implementations.
src-tauri/src/application/test_support.rs (1)

122-146: Implement sort in the shared history test double.

InMemoryHistoryRepo::list currently ignores the sort argument, so any handler test built on this helper can only exercise the default ordering. That makes the new ListHistoryQuery.sort surface effectively untestable at the application layer and lets sort-wiring regressions slip through.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/test_support.rs` around lines 122 - 146, In
InMemoryHistoryRepo::list, the _sort parameter is ignored; update the function
to apply the requested ordering before pagination by matching on the sort
argument (the HistorySort enum) and sorting the filtered Vec<HistoryEntry>
accordingly (e.g., sort_by_key on completed_at with std::cmp::Reverse for
descending, or ascending without Reverse; if HistorySort supports other fields,
sort by those fields similarly). Ensure you perform this sort on the filtered
variable (after collect and before calculating start/take) so
ListHistoryQuery.sort behavior is exercised in tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CHANGELOG.md`:
- Line 12: The changelog line mentions the DTO field as entry_id but the
UI-facing history view DTO uses entryId; update the CHANGELOG.md entry to use
the actual identifier name entryId (and optionally note both names if you want
backwards-compatibility context) so consumers and testers see the correct
payload shape; look for the sentence referencing HistoryEntry and replace
entry_id with entryId and ensure the phrasing matches the History view DTO
naming.

In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 101-110: The hostname filter currently builds a LIKE on
history::Column::Url (Url LIKE "%{host}%"), which matches host strings anywhere
in the URL; change the logic to compare the request’s actual hostname instead:
if your schema has a persisted hostname column (e.g.
history::Column::SourceHostname or equivalent) use an equality or
hostname-contains clause against that column; otherwise, parse
history::Column::Url into a URL host and compare that host (using
url::Url::parse on the stored URL before adding the condition) rather than doing
a substring match. Update the block that references filter.hostname and
history::Column::Url so it uses the hostname column or parsed host comparison
(and skip empty host), ensuring you add the condition with .eq or the
parsed-host comparison instead of Url.like("%...%").
- Around line 132-145: The search() implementation only escapes '%' but relies
on backslash being an escape character which SQLite doesn't use by default and
also fails to escape '_' as a wildcard; update the code that builds the like
pattern (the like variable) to escape both '%' and '_' by prefixing them with a
chosen escape char (e.g., '\') and then ensure the generated SQL uses an ESCAPE
'\' clause for the LIKE comparisons (or replace the LIKE-based filters with a
safe substring predicate such as instr(column, pattern) if you prefer not to use
ESCAPE). Locate the search function and the construction of like, the
Condition::any with history::Column::FileName/Url/DestinationPath.like(...), and
either append the ESCAPE clause to those LIKE expressions or switch to instr()
so the query no longer treats '_' as a wildcard.
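
The escaping this prompt asks for can be sketched std-only; the SQL layer must then attach a matching `ESCAPE '\'` clause to the LIKE expression. Names are illustrative:

```rust
/// Sketch: escape SQL LIKE metacharacters ('%', '_', and the escape
/// character itself) so a user query matches literally. The generated
/// SQL must use the same escape char, e.g. `... LIKE ? ESCAPE '\'`.
fn escape_like(query: &str) -> String {
    let mut out = String::with_capacity(query.len());
    for ch in query.chars() {
        if matches!(ch, '%' | '_' | '\\') {
            out.push('\\');
        }
        out.push(ch);
    }
    out
}
```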

In `@src-tauri/src/adapters/driving/tauri_ipc.rs`:
- Around line 2010-2022: In history_purge_older_than, reject days == 0 before
calling current_unix_seconds and building the PurgeHistoryCommand: add a guard
at the start of history_purge_older_than that returns an Err(String) (e.g.,
"days must be >= 1") when days is zero (or otherwise validate days >= 1), so you
never compute cutoff = now.saturating_sub(...) or pass before_timestamp equal to
now to command_bus.handle_purge_history; keep the existing error mapping
(.map_err(|e| e.to_string())) for downstream failures.
- Around line 1891-1905: The build_history_filter function currently treats an
empty hostname string as a valid filter; change it to normalize hostname by
converting Some("") to None before deciding if a HistoryFilter should be
returned. Specifically, in build_history_filter, map or filter the hostname
parameter (e.g., let hostname = hostname.filter(|h| !h.is_empty()) or
hostname.and_then(|h| if h.trim().is_empty() { None } else { Some(h) })) and
then use that cleaned hostname when constructing HistoryFilter and when checking
whether to return None or Some(...), so empty or whitespace-only hostnames
behave like “no filter.”
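
The normalization described here is small enough to show directly; a minimal sketch, assuming the filter carries `Option<String>` for the hostname:

```rust
/// Sketch: a blank or whitespace-only hostname behaves like
/// "no filter" instead of a filter that matches nothing.
fn normalize_hostname(hostname: Option<String>) -> Option<String> {
    hostname.and_then(|h| {
        let trimmed = h.trim();
        if trimmed.is_empty() {
            None
        } else {
            Some(trimmed.to_string())
        }
    })
}
```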

In `@src-tauri/src/application/commands/export_history.rs`:
- Around line 37-67: The CSV exporter must neutralize spreadsheet formulas:
update escape_csv_field (or create a small wrapper used by encode_csv) to check
if the input starts with any of the dangerous characters ['=', '+', '-', '@']
and, if so, prefix the cell value with a single quote (') before applying the
existing quote/escape logic; then ensure encode_csv uses this neutralizing
escape for the file_name, url, and destination_path fields of HistoryEntry so
exported CSV cells cannot be interpreted as formulas by Excel/Sheets.

In `@src-tauri/src/domain/model/views.rs`:
- Around line 100-101: The doc for the View field `hostname` currently promises
a case-insensitive hostname match but the SQLite adapter in `history_repo.rs`
uses a `url LIKE '%{host}%'` filter that matches path/query segments; update the
adapter to parse the URL and filter by the extracted hostname (case-insensitive
compare or normalized lowercased equality) instead of doing a LIKE on the full
URL, or alternatively adjust the `hostname` docstring in `views.rs` to state it
is a substring match against the full URL; reference the `hostname` field in
`views.rs` and the URL filtering logic in `history_repo.rs` when making the
change.

In `@src-tauri/src/domain/ports/driven/history_repository.rs`:
- Around line 24-39: The list and search methods allow unbounded results; add
server-side caps and pagination: introduce a constant (e.g.
HISTORY_MAX_PAGE_SIZE) and enforce it inside the implementations of list and
search so that when limit is None you use HISTORY_MAX_PAGE_SIZE, and when a
provided limit exceeds HISTORY_MAX_PAGE_SIZE you clamp it down; additionally
extend the search API to accept pagination parameters (add limit: Option<usize>,
offset: Option<usize> to the search signature) or at minimum apply the same
HISTORY_MAX_PAGE_SIZE default and offset support so search also returns at most
the capped page; update callers of list/search and any code handling
HistoryEntry/DomainError accordingly (refer to functions list, search, and types
HistoryFilter, HistorySort, HistoryEntry, DomainError).
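
The clamping behavior this comment asks for (and that the merged code implements via `MAX_HISTORY_PAGE_SIZE`) reduces to a one-liner; a sketch, with the cap value taken from the PR's documented 500-row limit:

```rust
const MAX_HISTORY_PAGE_SIZE: usize = 500;

/// Sketch: clamp a caller-supplied limit to the server-side cap,
/// defaulting to the cap when no limit is given.
fn effective_limit(limit: Option<usize>) -> usize {
    limit.map_or(MAX_HISTORY_PAGE_SIZE, |l| l.min(MAX_HISTORY_PAGE_SIZE))
}
```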

---

Nitpick comments:
In `@src-tauri/src/application/command_bus.rs`:
- Around line 34-48: CommandBus::new currently takes many positional arguments
which is error-prone; introduce a CommandBusDeps struct (or builder) to hold all
dependencies (download_repo, download_engine, event_bus, file_storage,
http_client, plugin_loader, config_store, credential_store, clipboard_observer,
archive_extractor, history_repo, plugin_store_client) and change the constructor
signature to accept CommandBusDeps (or a builder) instead of the long positional
list; update all call sites to construct and pass CommandBusDeps and adjust
internal field assignments in CommandBus::new to copy from the deps struct.

In `@src-tauri/src/application/queries/search_history.rs`:
- Around line 17-128: The repeated test harness (NoopExtractor,
NoopDownloadRead, NoopStats, NoopPluginRead and make_bus) should be moved into
the shared application::test_support module and reused: delete these duplicated
structs and the make_bus function from this file and import the shared fixtures
and helper (e.g. use crate::application::test_support::{NoopExtractor,
NoopDownloadRead, NoopStats, NoopPluginRead, make_bus};), ensuring make_bus
still returns QueryBus wired with Arc<dyn HistoryRepository> and referencing
QueryBus and HistoryRepository types so tests compile against the central test
support instead of repeating the implementations.

In `@src-tauri/src/application/test_support.rs`:
- Around line 122-146: In InMemoryHistoryRepo::list, the _sort parameter is
ignored; update the function to apply the requested ordering before pagination
by matching on the sort argument (the HistorySort enum) and sorting the filtered
Vec<HistoryEntry> accordingly (e.g., sort_by_key on completed_at with
std::cmp::Reverse for descending, or ascending without Reverse; if HistorySort
supports other fields, sort by those fields similarly). Ensure you perform this
sort on the filtered variable (after collect and before calculating start/take)
so ListHistoryQuery.sort behavior is exercised in tests.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1c1bc89b-71bf-4d96-897b-ee5e8e63a7f3

📥 Commits

Reviewing files that changed from the base of the PR and between 641d4cc and 496b76f.

📒 Files selected for processing (46)
  • CHANGELOG.md
  • src-tauri/src/adapters/driven/mod.rs
  • src-tauri/src/adapters/driven/plugin/registry.rs
  • src-tauri/src/adapters/driven/sqlite/history_repo.rs
  • src-tauri/src/adapters/driven/stats/in_memory_stats_repo.rs
  • src-tauri/src/adapters/driven/stats/mod.rs
  • src-tauri/src/adapters/driving/tauri_ipc.rs
  • src-tauri/src/application/command_bus.rs
  • src-tauri/src/application/commands/cancel_download.rs
  • src-tauri/src/application/commands/clear_downloads_by_state.rs
  • src-tauri/src/application/commands/delete_history.rs
  • src-tauri/src/application/commands/export_history.rs
  • src-tauri/src/application/commands/install_plugin.rs
  • src-tauri/src/application/commands/mod.rs
  • src-tauri/src/application/commands/pause_all.rs
  • src-tauri/src/application/commands/pause_download.rs
  • src-tauri/src/application/commands/purge_history.rs
  • src-tauri/src/application/commands/register_local_file.rs
  • src-tauri/src/application/commands/remove_download.rs
  • src-tauri/src/application/commands/resume_all.rs
  • src-tauri/src/application/commands/resume_download.rs
  • src-tauri/src/application/commands/retry_download.rs
  • src-tauri/src/application/commands/set_priority.rs
  • src-tauri/src/application/commands/start_download.rs
  • src-tauri/src/application/commands/toggle_clipboard.rs
  • src-tauri/src/application/commands/toggle_plugin.rs
  • src-tauri/src/application/commands/uninstall_plugin.rs
  • src-tauri/src/application/commands/update_config.rs
  • src-tauri/src/application/mod.rs
  • src-tauri/src/application/queries/count_by_state.rs
  • src-tauri/src/application/queries/get_download_detail.rs
  • src-tauri/src/application/queries/get_downloads.rs
  • src-tauri/src/application/queries/get_history_entry.rs
  • src-tauri/src/application/queries/list_history.rs
  • src-tauri/src/application/queries/list_plugins.rs
  • src-tauri/src/application/queries/mod.rs
  • src-tauri/src/application/queries/search_history.rs
  • src-tauri/src/application/query_bus.rs
  • src-tauri/src/application/read_models/history_view.rs
  • src-tauri/src/application/test_support.rs
  • src-tauri/src/domain/model/config.rs
  • src-tauri/src/domain/model/views.rs
  • src-tauri/src/domain/ports/driven/history_repository.rs
  • src-tauri/src/domain/ports/driven/tests.rs
  • src-tauri/src/lib.rs
  • src-tauri/tests/app_state_wiring.rs
💤 Files with no reviewable changes (3)
  • src-tauri/src/adapters/driven/stats/mod.rs
  • src-tauri/src/adapters/driven/mod.rs
  • src-tauri/src/adapters/driven/stats/in_memory_stats_repo.rs

Comment thread CHANGELOG.md Outdated

### Added

- History IPC surface: queries `history_list(date_from, date_to, hostname, sort_field, sort_direction, limit, offset)`, `history_search(q)`, `history_get_by_id(id)` and commands `history_export(format, path)` (CSV RFC 4180 or JSON pretty-printed), `history_delete_entry(id)`, `history_clear`, `history_purge_older_than(days)`. `HistoryEntry` now carries the primary key as `entry_id` in the DTO so the frontend can target individual rows. The `HistoryRepository` port gained `list`, `search`, `find_by_id`, `delete_by_id` and `delete_all`, implemented by `SqliteHistoryRepo`. (task 01)

⚠️ Potential issue | 🟡 Minor

Use the actual history identifier name in the changelog.

This entry documents the DTO field as `entry_id` on `HistoryEntry`, but the new UI-facing field is `entryId` on the history view DTO. Leaving it as-is will send manual testers and downstream consumers to the wrong payload shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` at line 12, The changelog line mentions the DTO field as
entry_id but the UI-facing history view DTO uses entryId; update the
CHANGELOG.md entry to use the actual identifier name entryId (and optionally
note both names if you want backwards-compatibility context) so consumers and
testers see the correct payload shape; look for the sentence referencing
HistoryEntry and replace entry_id with entryId and ensure the phrasing matches
the History view DTO naming.
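The mismatch comes from the camelCase renaming serde typically applies to view DTOs: the Rust field `entry_id` appears on the wire as `entryId`. A minimal stdlib sketch of that field-name mapping (the helper is illustrative, not the project's code):

```rust
/// Illustrative sketch of serde's `rename_all = "camelCase"` mapping:
/// `entry_id` on the Rust struct becomes `entryId` in the JSON payload,
/// which is why the changelog must say `entryId`.
fn snake_to_camel(field: &str) -> String {
    let mut out = String::with_capacity(field.len());
    let mut upper_next = false;
    for ch in field.chars() {
        if ch == '_' {
            upper_next = true;
        } else if upper_next {
            out.extend(ch.to_uppercase());
            upper_next = false;
        } else {
            out.push(ch);
        }
    }
    out
}

fn main() {
    assert_eq!(snake_to_camel("entry_id"), "entryId");
    assert_eq!(snake_to_camel("sort_direction"), "sortDirection");
    println!("ok");
}
```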

Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/adapters/driving/tauri_ipc.rs
Comment thread src-tauri/src/adapters/driving/tauri_ipc.rs
Comment thread src-tauri/src/application/commands/export_history.rs
Comment thread src-tauri/src/domain/model/views.rs Outdated
Comment on lines +24 to +39
/// List history entries with optional filter, sort and pagination.
///
/// Defaults: sort by `completed_at DESC`, no pagination.
fn list(
&self,
filter: Option<HistoryFilter>,
sort: Option<HistorySort>,
limit: Option<usize>,
offset: Option<usize>,
) -> Result<Vec<HistoryEntry>, DomainError>;

/// Full-text search across file name, URL and destination path.
///
/// Returns entries where any of those columns contain `query`
/// (case-insensitive).
fn search(&self, query: &str) -> Result<Vec<HistoryEntry>, DomainError>;

⚠️ Potential issue | 🟠 Major

Bound history query sizes to avoid unbounded IPC payloads.

Line 26 and Line 39 effectively allow unbounded reads (list defaults to no pagination, search has no pagination at all). Large history tables can cause high memory usage and slow IPC responses. Add a server-side cap and/or pagination to search, and enforce a default max for list.

Suggested API adjustment
-    /// Full-text search across file name, URL and destination path.
+    /// Search across file name, URL and destination path.
+    /// Implementations should enforce bounded results.
     ///
-    /// Returns entries where any of those columns contain `query`
-    /// (case-insensitive).
-    fn search(&self, query: &str) -> Result<Vec<HistoryEntry>, DomainError>;
+    /// Returns entries where any of those columns contain `query`
+    /// (case-insensitive), with pagination.
+    fn search(
+        &self,
+        query: &str,
+        limit: Option<usize>,
+        offset: Option<usize>,
+    ) -> Result<Vec<HistoryEntry>, DomainError>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/domain/ports/driven/history_repository.rs` around lines 24 -
39, The list and search methods allow unbounded results; add server-side caps
and pagination: introduce a constant (e.g. HISTORY_MAX_PAGE_SIZE) and enforce it
inside the implementations of list and search so that when limit is None you use
HISTORY_MAX_PAGE_SIZE, and when a provided limit exceeds HISTORY_MAX_PAGE_SIZE
you clamp it down; additionally extend the search API to accept pagination
parameters (add limit: Option<usize>, offset: Option<usize> to the search
signature) or at minimum apply the same HISTORY_MAX_PAGE_SIZE default and offset
support so search also returns at most the capped page; update callers of
list/search and any code handling HistoryEntry/DomainError accordingly (refer to
functions list, search, and types HistoryFilter, HistorySort, HistoryEntry,
DomainError).
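The clamp described in the prompt can be sketched as follows (constant name taken from the suggestion; the value of 500 is an assumption matching the later fix commit):

```rust
// Cap suggested by the review; 500 matches the value adopted later in the PR.
const HISTORY_MAX_PAGE_SIZE: usize = 500;

/// Clamp a caller-supplied limit: `None` falls back to the cap,
/// and an oversized request is silently reduced to it.
fn effective_limit(requested: Option<usize>) -> usize {
    requested
        .unwrap_or(HISTORY_MAX_PAGE_SIZE)
        .min(HISTORY_MAX_PAGE_SIZE)
}

fn main() {
    assert_eq!(effective_limit(None), 500);          // unbounded request -> cap
    assert_eq!(effective_limit(Some(50)), 50);       // small pages pass through
    assert_eq!(effective_limit(Some(10_000)), 500);  // oversized -> clamped
    println!("ok");
}
```

Enforcing the cap inside the repository implementation (rather than only at the IPC layer) means every caller gets bounded payloads, which is what the review asks for.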


@cubic-dev-ai cubic-dev-ai Bot left a comment


7 issues found across 46 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/adapters/driven/sqlite/history_repo.rs">

<violation number="1" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:109">
P2: The hostname filter matches against the entire URL with `LIKE "%{host}%"`, so it will produce false positives when the hostname string appears in the path, query string, or fragment (e.g., filtering by `foo.com` would also match `https://example.com/?redirect=https://foo.com/bar`). Parse the URL and compare against the extracted host, or match against a persisted `source_hostname` column instead.</violation>

<violation number="2" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:133">
P2: `history.search` uses LIKE escaping incorrectly: `%` is rewritten but no `ESCAPE` clause is applied, so wildcard characters in user input are not handled as literals.</violation>
</file>

<file name="src-tauri/src/application/queries/list_history.rs">

<violation number="1" location="src-tauri/src/application/queries/list_history.rs:40">
P3: This test adds substantial duplicate test-double and bus-construction code already present in sibling query tests; extract/reuse a shared query test support helper instead of copying another full `Noop*` block.</violation>
</file>

<file name="src-tauri/src/application/commands/export_history.rs">

<violation number="1" location="src-tauri/src/application/commands/export_history.rs:38">
P1: CSV export is vulnerable to spreadsheet formula injection because leading formula characters are not neutralized before writing user-controlled fields.</violation>
</file>
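The guard cubic recommends can be sketched as a hypothetical helper (following the OWASP CSV-injection guidance the PR later adopts; the function name is illustrative):

```rust
/// Sketch of a CSV formula-injection guard: prefix any field whose first
/// character a spreadsheet would treat as a formula with an apostrophe,
/// per OWASP CSV-injection guidance.
fn guard_csv_field(field: &str) -> String {
    match field.chars().next() {
        Some('=' | '+' | '-' | '@' | '\t' | '\r') => format!("'{field}"),
        _ => field.to_string(),
    }
}

fn main() {
    // A hostile file name no longer executes when the export is opened:
    assert_eq!(
        guard_csv_field("=HYPERLINK(\"http://evil\")"),
        "'=HYPERLINK(\"http://evil\")"
    );
    assert_eq!(guard_csv_field("report.pdf"), "report.pdf");
    println!("ok");
}
```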

<file name="src-tauri/src/application/queries/get_history_entry.rs">

<violation number="1" location="src-tauri/src/application/queries/get_history_entry.rs:44">
P3: This test adds another full copy of query-bus noop adapters already duplicated in other history query tests; extract shared query test support instead of repeating these structs/impls again.</violation>
</file>

<file name="src-tauri/src/application/queries/search_history.rs">

<violation number="1" location="src-tauri/src/application/queries/search_history.rs:37">
P3: This test module duplicates the same Noop query-bus stubs used in other history query tests; extract/shared test support should be reused to avoid drift across handlers.</violation>
</file>

<file name="src-tauri/src/adapters/driving/tauri_ipc.rs">

<violation number="1" location="src-tauri/src/adapters/driving/tauri_ipc.rs:2015">
P0: When `days == 0`, `cutoff` equals `now`, so `delete_older_than(now)` will delete nearly the entire history table. Reject `days == 0` to prevent an accidental full wipe via a bad UI default or malformed IPC call.</violation>
</file>
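A minimal sketch of the rejected-zero guard (hypothetical function, not the PR's exact IPC handler):

```rust
use std::time::{Duration, SystemTime};

/// Sketch of the P0 fix: with `days == 0` the cutoff equals `now`, so
/// `delete_older_than(now)` would wipe essentially the whole table.
/// Rejecting zero up front turns that accident into an error.
fn purge_cutoff(days: u64) -> Result<SystemTime, String> {
    if days == 0 {
        return Err("days must be >= 1".into());
    }
    Ok(SystemTime::now() - Duration::from_secs(days * 86_400))
}

fn main() {
    assert!(purge_cutoff(0).is_err()); // bad UI default / malformed IPC call
    assert!(purge_cutoff(30).is_ok()); // normal retention purge
    println!("ok");
}
```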

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread src-tauri/src/adapters/driving/tauri_ipc.rs
Comment thread src-tauri/src/application/commands/export_history.rs Outdated
Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/application/queries/list_history.rs Outdated
Comment thread src-tauri/src/application/queries/get_history_entry.rs Outdated
Comment thread src-tauri/src/application/queries/search_history.rs Outdated
- history_purge_older_than: reject days == 0 (would wipe the whole table)
- history_list: blank/whitespace hostname collapses to "no filter" so an
  empty input field shows all rows instead of none
- Hostname filter now compares against the URL's parsed host, exactly and
  case-insensitive, instead of a substring anywhere in the URL. Stops
  false positives like `?next=https://foo.com/` matching host `foo.com`
- Search drops the raw SQL LIKE and filters case-insensitively in Rust,
  so `_` and `%` in user input are treated as literals (no silent
  wildcard matches, no broken ESCAPE clause)
- Both list and search cap results at 500 rows per call
  (MAX_HISTORY_PAGE_SIZE / MAX_HISTORY_SEARCH_RESULTS) so the IPC payload
  stays bounded regardless of history size
- CSV export guards spreadsheet formula prefixes (=, +, -, @, TAB, CR)
  with a leading apostrophe per OWASP CSV-injection guidance
- InMemoryHistoryRepo / domain port test fakes now honor the sort field
  and compare hostnames case-insensitively, matching the SQLite adapter
- Extract make_history_query_bus test helper into application::test_support
  and drop the duplicated NoopDownloadRead/NoopStats/NoopPluginRead/
  NoopExtractor blocks from list_history, search_history and
  get_history_entry suites
- Changelog reflects the real DTO field name (`entryId`) and documents
  the new safety caps / days>=1 requirement
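The exact-host comparison described in the bullets above can be sketched roughly as follows (helper names mirror the commit message; the URL parsing is deliberately simplified relative to the adapter):

```rust
/// Simplified host extraction: scheme stripped, authority cut at the first
/// path/query/fragment delimiter, port dropped. The real adapter also keeps
/// IPv6 literals intact.
fn extract_host(url: &str) -> Option<&str> {
    let rest = url.split_once("://")?.1;
    let authority = rest.split(['/', '?', '#']).next()?;
    Some(authority.split(':').next().unwrap_or(authority))
}

/// Exact, case-insensitive host comparison; a blank/whitespace filter
/// collapses to "no filter" so an empty input field shows all rows.
fn matches_hostname_filter(entry_url: &str, wanted: &str) -> bool {
    let wanted = wanted.trim();
    if wanted.is_empty() {
        return true;
    }
    match extract_host(entry_url) {
        Some(host) => host.eq_ignore_ascii_case(wanted),
        None => false,
    }
}

fn main() {
    // Exact host match, not substring-anywhere-in-URL:
    assert!(matches_hostname_filter("https://foo.com/file.zip", "FOO.com"));
    assert!(!matches_hostname_filter(
        "https://example.com/?next=https://foo.com/",
        "foo.com"
    ));
    assert!(matches_hostname_filter("https://bar.org/x", "   ")); // blank -> no filter
    println!("ok");
}
```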
Comment thread src-tauri/src/application/commands/export_history.rs Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 165-186: The current hostname_filter branch limits the SQL query
to MAX_HISTORY_PAGE_SIZE then applies matches_hostname_filter, which can miss
matches beyond the first SQL page; change history_list to either (preferred)
loop-page the SQL query (using query.limit(...) and query.offset(...) or
incrementing an internal offset) and keep fetching ordered rows until you've
collected at least effective_offset + effective_limit hostname-matching entries,
then apply the Rust skip(effective_offset).take(effective_limit) on that
collected matches, or alternatively persist a normalized host column and apply
the hostname filter directly in SQL; update the code path involving
hostname_filter, MAX_HISTORY_PAGE_SIZE, query, model_to_entry,
matches_hostname_filter, effective_offset, and effective_limit to implement the
chosen approach.

In `@src-tauri/src/application/commands/export_history.rs`:
- Around line 17-18: The export currently calls history_repo().list(None, None,
None, None) which is capped at MAX_HISTORY_PAGE_SIZE and will silently truncate
exports; update export_history.rs to either (A) page through
HistoryRepository::list by repeatedly calling history_repo().list(next_cursor,
None, None, None) (or equivalent pagination parameters) and accumulating results
until a page smaller than MAX_HISTORY_PAGE_SIZE is returned, or (B) add a
dedicated uncapped repository method (e.g.,
HistoryRepository::list_all_for_export) and use that to retrieve all entries
before converting to CSV/JSON; ensure you replace the single-use entries =
self.history_repo().list(...) usage with the chosen approach and preserve
existing error handling.

In `@src-tauri/src/application/test_support.rs`:
- Around line 197-210: The test double method search currently scans the whole
store and treats "" as a match; update the mock search (function search on the
test repo that accesses self.entries and returns Vec<HistoryEntry>) to mirror
SqliteHistoryRepo: if query is empty return Ok(vec![]), otherwise only examine
the most recent MAX_HISTORY_SEARCH_RESULTS entries (take the tail of the entries
vector in reverse/insertion order) and perform the same field contains checks on
file_name, url, and destination_path before cloning and collecting results;
reference MAX_HISTORY_SEARCH_RESULTS, search, self.entries, and HistoryEntry to
locate where to change behavior.
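The paging loop CodeRabbit suggests for the export path can be sketched with an in-memory stand-in for the repository port (types and names here are illustrative, not the project's):

```rust
const MAX_HISTORY_PAGE_SIZE: usize = 500;

/// Stand-in for the `HistoryRepository` port; rows are just ids here.
struct FakeRepo {
    rows: Vec<u32>,
}

impl FakeRepo {
    fn list(&self, limit: usize, offset: usize) -> Vec<u32> {
        self.rows.iter().skip(offset).take(limit).copied().collect()
    }
}

/// Page through `list` until a short page comes back, so the export
/// contains every row even when the table exceeds the per-call cap.
fn export_all(repo: &FakeRepo) -> Vec<u32> {
    let mut all = Vec::new();
    let mut offset = 0;
    loop {
        let page = repo.list(MAX_HISTORY_PAGE_SIZE, offset);
        let fetched = page.len();
        all.extend(page);
        if fetched < MAX_HISTORY_PAGE_SIZE {
            break; // short page: table exhausted
        }
        offset += fetched;
    }
    all
}

fn main() {
    let repo = FakeRepo { rows: (0..1_250).collect() };
    assert_eq!(export_all(&repo).len(), 1_250); // 3 pages: 500 + 500 + 250
    println!("ok");
}
```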


📥 Commits

Reviewing files that changed from the base of the PR and between 496b76f and 5b45b4d.

📒 Files selected for processing (11)
  • CHANGELOG.md
  • src-tauri/src/adapters/driven/sqlite/history_repo.rs
  • src-tauri/src/adapters/driving/tauri_ipc.rs
  • src-tauri/src/application/commands/export_history.rs
  • src-tauri/src/application/queries/get_history_entry.rs
  • src-tauri/src/application/queries/list_history.rs
  • src-tauri/src/application/queries/search_history.rs
  • src-tauri/src/application/test_support.rs
  • src-tauri/src/domain/model/views.rs
  • src-tauri/src/domain/ports/driven/history_repository.rs
  • src-tauri/src/domain/ports/driven/tests.rs
✅ Files skipped from review due to trivial changes (1)
  • CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • src-tauri/src/application/queries/search_history.rs
  • src-tauri/src/domain/model/views.rs
  • src-tauri/src/application/queries/get_history_entry.rs

Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/application/commands/export_history.rs Outdated
Comment thread src-tauri/src/application/test_support.rs

@cubic-dev-ai cubic-dev-ai Bot left a comment


3 issues found across 11 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/application/test_support.rs">

<violation number="1" location="src-tauri/src/application/test_support.rs:42">
P3: Hostname parsing incorrectly truncates IPv6 hosts because it splits on the first colon.</violation>
</file>

<file name="src-tauri/src/adapters/driven/sqlite/history_repo.rs">

<violation number="1" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:167">
P2: Hostname-filtered pagination is incorrect because rows are capped before host filtering, so offsets/pages can miss valid matches.</violation>
</file>

<file name="src-tauri/src/domain/ports/driven/history_repository.rs">

<violation number="1" location="src-tauri/src/domain/ports/driven/history_repository.rs:13">
P1: `history_export` silently truncates at 500 rows. The `list(None, None, None, None)` call in `handle_export_history` is contractually capped to `MAX_HISTORY_PAGE_SIZE` (500) per the trait doc added here. Any user with more than 500 history entries receives a silently incomplete export file — no error, no warning. Either loop through `list` pages until a short page is returned, or add a dedicated uncapped repository method for export.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread src-tauri/src/domain/ports/driven/history_repository.rs
Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/application/test_support.rs Outdated
- history_export paginates through list() 500 rows at a time so the file
  contains every row even when the table holds more than one page worth.
  Before, the export silently truncated at MAX_HISTORY_PAGE_SIZE.
- SqliteHistoryRepo::list with a hostname filter now pages the SQL query
  internally until it accumulates enough post-filter matches (or the
  table is exhausted). Previously the first 500 SQL rows were fetched
  in one go and offsets past that window silently missed valid rows.
- InMemoryHistoryRepo::search matches the SQLite adapter's contract:
  empty query returns no rows, and only the most recent
  MAX_HISTORY_SEARCH_RESULTS entries are scanned.
- extract_host keeps IPv6 literals intact (`[::1]:8080` → `[::1]`).
  Same fix in the test-support helper.
- Trait doc clarifies that callers needing the full history must page
  through list() until a short page is returned.
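The IPv6 fix in the bullets above can be sketched as follows (illustrative helper, simplified relative to the adapter's real parsing): splitting on the first `:` would truncate `[::1]:8080` to `[`, so bracketed literals must be handled before the port strip.

```rust
/// Host extraction that keeps IPv6 literals intact: a bracketed authority
/// is cut at the closing `]`, everything else is cut at the first `:`.
fn extract_host(url: &str) -> Option<&str> {
    let rest = url.split_once("://")?.1;
    let authority = rest.split(['/', '?', '#']).next()?;
    if authority.starts_with('[') {
        // IPv6 literal: keep up to and including the closing bracket.
        let end = authority.find(']')?;
        Some(&authority[..=end])
    } else {
        Some(authority.split(':').next().unwrap_or(authority))
    }
}

fn main() {
    assert_eq!(extract_host("http://[::1]:8080/file"), Some("[::1]"));
    assert_eq!(extract_host("https://foo.com:443/x"), Some("foo.com"));
    assert_eq!(extract_host("not a url"), None);
    println!("ok");
}
```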

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 170-172: The current query built in
history::Entity::find().filter(condition).order_by(column, order) only orders by
the requested column which can make pagination unstable; update the query to
append a deterministic secondary sort key (e.g. history::Column::Id) so ties are
broken consistently — modify the call site that builds base_query (used by
list() and history_export) to add a second .order_by(history::Column::Id,
Order::Asc) (or matching order direction) after the primary .order_by(column,
order) so pagination is stable and rows won’t be skipped or duplicated across
pages.
- Around line 189-223: The code currently accumulates all hostname-matching
HistoryEntry objects into matches up to want_total (effective_offset +
effective_limit), causing O(offset) memory; change the loop to track a separate
counter seen_matches (u64) that increments whenever
matches_hostname_filter(&entry.url, &host) is true, but only push the entry into
matches (Vec<HistoryEntry>) once seen_matches >= effective_offset and until
matches.len() reaches effective_limit; update break conditions to stop when
matches.len() >= effective_limit or when fetched < page_size, and remove
reliance on want_total for pushing—use seen_matches to skip the first
effective_offset matches and only store up to effective_limit entries, keeping
other symbols (base_query, sql_offset, page_size, model_to_entry) unchanged.
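The skipped-counter rewrite can be sketched independently of sea-orm (a plain slice stands in for the paged SQL rows, a closure for the hostname predicate):

```rust
/// O(limit)-memory offset handling: matches inside the offset window are
/// counted, not buffered, so caller-supplied offsets no longer grow memory.
fn filtered_page(
    rows: &[u32],
    is_match: impl Fn(u32) -> bool,
    offset: usize,
    limit: usize,
) -> Vec<u32> {
    let mut skipped = 0;
    let mut out = Vec::with_capacity(limit);
    for &row in rows {
        if !is_match(row) {
            continue;
        }
        if skipped < offset {
            skipped += 1; // inside the offset window: count, don't store
            continue;
        }
        out.push(row);
        if out.len() >= limit {
            break; // never retain more than `limit` entries
        }
    }
    out
}

fn main() {
    let rows: Vec<u32> = (0..100).collect();
    // Even rows only, skip the first 10 matches, take 5.
    assert_eq!(filtered_page(&rows, |r| r % 2 == 0, 10, 5), vec![20, 22, 24, 26, 28]);
    println!("ok");
}
```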


📥 Commits

Reviewing files that changed from the base of the PR and between 5b45b4d and 1a48e28.

📒 Files selected for processing (4)
  • src-tauri/src/adapters/driven/sqlite/history_repo.rs
  • src-tauri/src/application/commands/export_history.rs
  • src-tauri/src/application/test_support.rs
  • src-tauri/src/domain/ports/driven/history_repository.rs
🚧 Files skipped from review as they are similar to previous changes (3)
  • src-tauri/src/application/commands/export_history.rs
  • src-tauri/src/domain/ports/driven/history_repository.rs
  • src-tauri/src/application/test_support.rs

Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 4 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/adapters/driven/sqlite/history_repo.rs">

<violation number="1" location="src-tauri/src/adapters/driven/sqlite/history_repo.rs:189">
P2: The hostname-filter branch collects up to `offset + limit` matching entries into `matches`, then discards the first `offset` at the end. Since `offset` comes from the IPC caller and is not capped, this retains O(offset) entries in memory even though only `limit` rows are returned. Track a `skipped` counter instead and only start pushing entries once the offset window is consumed.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs Outdated
- Append id DESC as secondary sort in list() so pagination stays stable
  when the primary sort key has ties (e.g. many entries sharing the same
  completed_at). Without it, OFFSET could skip or duplicate rows across
  successive pages, which the paginated export loop amplifies.
- Hostname-filter branch tracks a `skipped` counter and only retains up
  to `limit` matches in memory. Previously it collected
  `offset + limit` rows and discarded the offset window at the end,
  growing memory linearly with caller-supplied offsets.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src-tauri/src/adapters/driven/sqlite/history_repo.rs (2)

47-53: Consider pre-computing lowercase hostname outside the hot loop.

In list(), matches_hostname_filter is called for every row in potentially many SQL pages, yet wanted.to_ascii_lowercase() always produces the same result since host is constant for the entire loop. Pre-computing this before the loop avoids repeated allocations.

♻️ Suggested refactor
-fn matches_hostname_filter(entry_url: &str, wanted: &str) -> bool {
-    let wanted = wanted.to_ascii_lowercase();
+fn matches_hostname_filter(entry_url: &str, wanted_lower: &str) -> bool {
     match extract_host(entry_url) {
-        Some(host) => host.to_ascii_lowercase() == wanted,
+        Some(host) => host.eq_ignore_ascii_case(wanted_lower),
         None => false,
     }
 }

Then in list(), pre-compute before the loop:

let host_lower = host.to_ascii_lowercase();
// ... in loop:
if !matches_hostname_filter(&entry.url, &host_lower) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs` around lines 47 - 53,
matches_hostname_filter currently calls wanted.to_ascii_lowercase() on every row
which allocates repeatedly; precompute the lowercase wanted once in list() and
pass that pre-lowercased string into matches_hostname_filter (or rename param to
make intent clear). Specifically, in list() compute let host_lower =
host.to_ascii_lowercase() before iterating SQL rows and call
matches_hostname_filter(&entry.url, &host_lower), and update
matches_hostname_filter to treat its wanted parameter as already-lowercased
(remove the to_ascii_lowercase() call and compare host.to_ascii_lowercase() ==
wanted).

584-584: Use i % 4 == 0 instead of is_multiple_of(4) for broader compatibility.

The is_multiple_of method on integer types requires Rust 1.86.0. No MSRV is explicitly defined in Cargo.toml, so if the project intends to support older versions, use the traditional idiom i % 4 == 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs` at line 584, Replace
the use of the unstable/newer integer method is_multiple_of(4) with the
traditional modulus check to maintain MSRV compatibility: change the conditional
in the block that sets e.url (the expression using i.is_multiple_of(4)) to use i
% 4 == 0 instead, keeping the rest of the assignment logic in the same place
(the code where e.url is assigned based on i).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 236-256: In the search method, avoid the unnecessary DB round-trip
when the query is empty by checking query.is_empty() (or needle.is_empty())
before calling block_on/history::Entity::find(); if empty return Ok(Vec::new())
immediately; move the empty-query guard above the block_on call and keep the
rest of the logic that maps models via model_to_entry and filters with
matches_search and map_db_err unchanged (MAX_HISTORY_SEARCH_RESULTS can remain
as-is).

---

Nitpick comments:
In `@src-tauri/src/adapters/driven/sqlite/history_repo.rs`:
- Around line 47-53: matches_hostname_filter currently calls
wanted.to_ascii_lowercase() on every row which allocates repeatedly; precompute
the lowercase wanted once in list() and pass that pre-lowercased string into
matches_hostname_filter (or rename param to make intent clear). Specifically, in
list() compute let host_lower = host.to_ascii_lowercase() before iterating SQL
rows and call matches_hostname_filter(&entry.url, &host_lower), and update
matches_hostname_filter to treat its wanted parameter as already-lowercased
(remove the to_ascii_lowercase() call and compare host.to_ascii_lowercase() ==
wanted).
- Line 584: Replace the use of the unstable/newer integer method
is_multiple_of(4) with the traditional modulus check to maintain MSRV
compatibility: change the conditional in the block that sets e.url (the
expression using i.is_multiple_of(4)) to use i % 4 == 0 instead, keeping the
rest of the assignment logic in the same place (the code where e.url is assigned
based on i).


📥 Commits

Reviewing files that changed from the base of the PR and between 1a48e28 and 9fc587d.

📒 Files selected for processing (1)
  • src-tauri/src/adapters/driven/sqlite/history_repo.rs

Comment thread src-tauri/src/adapters/driven/sqlite/history_repo.rs
Move the empty-query short-circuit in search() above the sqlite fetch so
a blank needle skips the round-trip entirely instead of pulling up to
500 rows and discarding them.
@mpiton mpiton merged commit 4d095c8 into main Apr 23, 2026
8 checks passed

Labels

documentation (Improvements or additions to documentation), rust

1 participant