feat(settings): add configurable history retention with daily purge worker#108
Conversation
Implements PRD-v2 P0.14 (task 14).

Adds `history_retention_days: i64` to AppConfig (default 30, presets 7/30/90/365/0=unlimited), exposed in the General Settings tab as a Select dropdown. The backend ships a Clock domain port with a SystemClock adapter under adapters/driven/scheduler/ and a HistoryPurgeWorker daemon spawned from Tauri setup that hard-deletes history rows older than the configured retention window.

The worker persists its last run as a Unix-epoch timestamp inside <app_data_dir>/.history_purge_state. On startup it skips if the sentinel is younger than 24h, then re-fires every 24h via tokio::time::interval with MissedTickBehavior::Skip. retention_days <= 0 is a no-op that does not write the sentinel, so the next run re-fires the moment the user re-enables retention; corrupt sentinels fall back to "never ran". The domain helper normalize_history_retention_days clamps negatives to 0 for hand-edited config.toml safety.

Migration safety: existing config.toml files without the field hydrate to the AppConfig::default value (30), not serde's i64 zero, because ConfigDto::default delegates to AppConfig::default.

Coverage: 12 worker tests (run_once cutoff, zero/negative retention, sentinel write, run_if_due no-state/recent/expired/corrupt, mock clock daily cycle, helper smoke), 5 config field tests, 2 TOML roundtrip + migration tests, 2 frontend dropdown tests. Total suite: 837 Rust + 572 vitest, clippy clean, oxlint clean.
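A minimal sketch of the preset constant and normalization helper described above, under the assumption that the helper simply clamps at zero; the actual bodies in domain/model/config.rs may differ.

```rust
/// Preset choices surfaced in the Settings dropdown; 0 means "Never"
/// (unlimited retention). Per the frontend note in this PR, the Rust
/// ordering puts 0 first; the exact shape here is an assumption.
pub const HISTORY_RETENTION_PRESETS_DAYS: [i64; 5] = [0, 7, 30, 90, 365];

/// Clamp negative values (e.g. from a hand-edited config.toml) to 0,
/// which the purge worker treats as "unlimited retention" and skips.
pub fn normalize_history_retention_days(days: i64) -> i64 {
    days.max(0)
}

// Usage: a hand-edited `history_retention_days = -5` hydrates as 0 (unlimited).
// assert_eq!(normalize_history_retention_days(-5), 0);
```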
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info: Configuration used: Organization UI · Review profile: CHILL · Plan: Pro · 📒 Files selected for processing (6)
✅ Files skipped from review due to trivial changes (6)
📝 Walkthrough
Sequence Diagram(s)
sequenceDiagram
participant App as App Startup
participant Worker as HistoryPurgeWorker
participant Config as ConfigStore
participant Clock as SystemClock
participant Repo as HistoryRepository
participant State as State File
App->>Worker: spawn(config_store, clock, history_repo, state_path)
activate Worker
loop daily
Worker->>State: read_sentinel()
State-->>Worker: last_run_timestamp / error
alt sentinel missing/corrupt or ≥24h elapsed
Worker->>Clock: now_unix_secs()
Clock-->>Worker: current_time
Worker->>Config: load_history_retention_days()
Config-->>Worker: retention_days
alt retention_days > 0
Worker->>Repo: delete_older_than(now - retention_days*86400)
Repo-->>Worker: deleted_count
Worker->>State: write_sentinel(current_time)
State-->>Worker: written
else retention_days ≤ 0
Worker-->>Worker: no-op (unlimited retention)
end
else Sentinel recent
Worker-->>Worker: skip run
end
end
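The branch labelled "sentinel missing/corrupt or ≥24h elapsed" above boils down to a small gate. A sketch under the assumption that timestamps are plain Unix seconds; the name `is_purge_due` is illustrative and the real `run_if_due` in history_purge_worker.rs may differ.

```rust
const SECS_PER_DAY: u64 = 86_400;

/// Decide whether a purge is due. A missing or unparsable sentinel is
/// treated as "never ran", so the purge fires immediately; otherwise it
/// fires only once at least 24h have elapsed. saturating_sub keeps a
/// backwards-stepping clock from underflowing.
fn is_purge_due(last_run: Option<u64>, now_unix_secs: u64) -> bool {
    match last_run {
        None => true, // no sentinel, or sentinel failed to parse
        Some(ts) => now_unix_secs.saturating_sub(ts) >= SECS_PER_DAY,
    }
}
```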
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Greptile Summary
This PR adds configurable history retention (7/30/90/365 days or unlimited) with a daily background purge worker.
Confidence Score: 5/5
Safe to merge — no P0 or P1 issues found; implementation is robust with thorough test coverage. All normalization boundaries are covered (apply_patch, From<ConfigDto>, worker). Migration safety is tested. Sentinel fault-tolerance (corrupt/missing) is verified. Clock skew is handled via saturating arithmetic. The previously flagged negative-value bug in apply_patch is now fixed. The only remaining UX concern (blank Select for non-preset TOML values) was already noted in a prior review comment. No files require special attention.
| Filename | Overview |
|---|---|
| src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs | New 536-line file: implements daily purge worker with startup alignment, 24h interval, sentinel state file, and thorough test coverage for edge cases (corrupt sentinel, clock skew, zero retention). |
| src-tauri/src/adapters/driven/config/toml_config_store.rs | Adds history_retention_days: i64 to ConfigDto with #[serde(default)] at struct level ensuring migration safety; From<ConfigDto> for AppConfig normalizes negatives via normalize_history_retention_days. |
| src-tauri/src/domain/model/config.rs | Adds history_retention_days field (default 30), normalize_history_retention_days helper, HISTORY_RETENTION_PRESETS_DAYS constant, and normalizes negatives in apply_patch; well-tested. |
| src-tauri/src/lib.rs | Clones config_store and history_repo Arcs before CommandBus takes ownership, then wires HistoryPurgeWorker with SystemClock and the app-data-dir sentinel path; spawns correctly. |
| src/views/SettingsView/GeneralSection.tsx | Adds history retention Select dropdown wired to historyRetentionDays; SelectValue has no placeholder, so a non-preset stored value renders as a blank trigger (noted in a prior review comment). |
| src/types/settings.ts | Adds historyRetentionDays: number to AppConfig and HISTORY_RETENTION_PRESETS constant mirroring the Rust backend presets; order differs (Never last here, first in Rust) but no functional impact. |
| src-tauri/src/domain/ports/driven/clock.rs | Clean Clock port trait (Send + Sync) that allows deterministic time injection in tests; SystemClock adapter backed by std::time::SystemTime with fallback-to-zero on pre-epoch systems. |
| src-tauri/src/adapters/driving/tauri_ipc.rs | Propagates history_retention_days through SettingsDto and ConfigPatchDto; conversion from ConfigPatchDto passes raw value to ConfigPatch, with normalization handled downstream in apply_patch. |
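The migration-safety behaviour called out for toml_config_store.rs in the table above comes from delegating the DTO's defaults to the domain default. A self-contained sketch of that pattern, trimmed to the one relevant field (the real structs carry many more):

```rust
use serde::{Deserialize, Serialize};

/// Trimmed-down domain config; the real AppConfig has many more fields.
struct AppConfig {
    history_retention_days: i64,
}

impl Default for AppConfig {
    fn default() -> Self {
        Self { history_retention_days: 30 }
    }
}

/// Persisted TOML shape. Container-level `#[serde(default)]` fills any
/// field missing from an existing config.toml from ConfigDto::default(),
/// which delegates to AppConfig::default() (30), not serde's i64 zero.
#[derive(Serialize, Deserialize)]
#[serde(default)]
struct ConfigDto {
    history_retention_days: i64,
}

impl Default for ConfigDto {
    fn default() -> Self {
        let app = AppConfig::default();
        Self { history_retention_days: app.history_retention_days }
    }
}
```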
Sequence Diagram
sequenceDiagram
participant App as lib.rs (startup)
participant W as HistoryPurgeWorker
participant SF as .history_purge_state
participant Clk as SystemClock
participant HR as HistoryRepository
participant CS as ConfigStore
App->>W: spawn(Arc<Self>)
W->>SF: read_last_purge()
SF-->>W: Option<last_ts>
W->>Clk: now_unix_secs()
Clk-->>W: now
alt last run < 24h ago
W->>W: sleep(remaining_secs)
end
W->>CS: get_config()
CS-->>W: retention_days
alt retention_days == 0
W-->>W: no-op (sentinel not updated)
else retention_days > 0
W->>Clk: now_unix_secs()
Clk-->>W: now
W->>HR: delete_older_than(now - retention*86400)
HR-->>W: rows_deleted
W->>SF: write(now)
end
loop every 24h (MissedTickBehavior::Skip)
W->>CS: get_config()
CS-->>W: retention_days
W->>HR: delete_older_than(cutoff)
HR-->>W: rows_deleted
W->>SF: write(now)
end
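The `delete_older_than(now - retention*86400)` step in both diagrams is where the saturating arithmetic mentioned in the Greptile summary matters. A hedged sketch of that cutoff computation; the function name and Option-based signature are illustrative, not the worker's actual API.

```rust
/// Compute the hard-delete cutoff: entries older than
/// `now - retention_days * 86_400` seconds are purged. Returns None for
/// retention_days <= 0 (unlimited retention, worker is a no-op), and uses
/// saturating ops so a skewed or near-epoch clock cannot underflow.
fn purge_cutoff_unix_secs(now: i64, retention_days: i64) -> Option<i64> {
    if retention_days <= 0 {
        return None;
    }
    Some(now.saturating_sub(retention_days.saturating_mul(86_400)))
}
```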
Reviews (3): Last reviewed commit: "fix(tests): add historyRetentionDays to ..."
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/adapters/driven/config/toml_config_store.rs`:
- Around line 252-253: In the From<ConfigDto> for AppConfig conversion, don't
assign d.history_retention_days directly; instead normalize it to valid bounds
by clamping it into the allowed range (e.g., at least 0 and at most your
configured upper bound) before storing into AppConfig.history_retention_days;
update the impl to compute a clamped value (use
.max(0).min(MAX_HISTORY_RETENTION_DAYS) or an equivalent) and, if needed, add a
MAX_HISTORY_RETENTION_DAYS constant to hold the upper bound so persisted
negative values cannot propagate to the UI.
In `@src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs`:
- Around line 103-117: The scheduler currently anchors its first post-startup
wait to process start, causing drift; update the startup logic in
history_purge_worker (use read_last_purge(&self.state_path), SECS_PER_DAY,
run_if_due and run_once_blocking) to compute an initial delay based on the
persisted last-run: call read_last_purge(&self.state_path) and if
missing/corrupt or last_run is older than SECS_PER_DAY invoke run_once_blocking
immediately, if last_run is recent compute sleep = SECS_PER_DAY - (now -
last_run) and await tokio::time::sleep for that duration (rather than a full
day), then start the recurring tokio::time::interval of SECS_PER_DAY and
continue using run_once_blocking in the loop; ensure you still skip the extra
immediate ticker tick so you don’t double-run.
In `@src-tauri/src/adapters/driven/scheduler/system_clock.rs`:
- Around line 30-45: Replace the two brittle tests
(test_system_clock_returns_post_2026_timestamp and
test_system_clock_is_monotonic_within_call) to use relative timing: for each
test, capture SystemTime::now() (or equivalent) before and after calling
SystemClock.now_unix_secs(), convert those bounds to unix seconds, and assert
the returned value is >= before_seconds && <= after_seconds; remove the absolute
2026 threshold and the monotonicity assumption since SystemTime can step
backwards.
In `@src-tauri/src/domain/model/config.rs`:
- Around line 271-274: The patch currently assigns patch.history_retention_days
directly into config.history_retention_days, which can persist invalid values;
update this assignment to call
normalize_history_retention_days(patch.history_retention_days) (or
normalize_history_retention_days(*v) if needed) and store the normalized result
into config.history_retention_days so stored config always uses the
validated/normalized value; keep the rest of the conditional (if let Some(v) =
patch.history_retention_days) unchanged and only replace the direct assignment
with a call to normalize_history_retention_days().
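For the system_clock.rs item above, a sketch of the relative-window test the reviewers suggest. The `SystemClock` stand-in here only mirrors what the clock.rs overview describes (fallback to zero on pre-epoch systems) and is not the project's actual adapter; the test name is illustrative.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Stand-in adapter so the test sketch is self-contained.
struct SystemClock;

impl SystemClock {
    fn now_unix_secs(&self) -> u64 {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_secs())
            .unwrap_or(0) // pre-epoch systems fall back to zero
    }
}

#[test]
fn now_unix_secs_falls_within_a_relative_window() {
    // Bound the reading with two reference samples instead of asserting an
    // absolute year threshold or monotonicity across calls.
    let before = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let observed = SystemClock.now_unix_secs();
    let after = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    assert!(observed >= before && observed <= after);
}
```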
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 94dd9a32-5be6-45ac-a930-7d623b0fe95a
📒 Files selected for processing (16)
- CHANGELOG.md
- src-tauri/src/adapters/driven/config/toml_config_store.rs
- src-tauri/src/adapters/driven/mod.rs
- src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs
- src-tauri/src/adapters/driven/scheduler/mod.rs
- src-tauri/src/adapters/driven/scheduler/system_clock.rs
- src-tauri/src/adapters/driving/tauri_ipc.rs
- src-tauri/src/domain/model/config.rs
- src-tauri/src/domain/ports/driven/clock.rs
- src-tauri/src/domain/ports/driven/mod.rs
- src-tauri/src/lib.rs
- src/i18n/locales/en.json
- src/i18n/locales/fr.json
- src/types/settings.ts
- src/views/SettingsView/GeneralSection.tsx
- src/views/SettingsView/__tests__/Sections.test.tsx
2 issues found across 16 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/adapters/driven/scheduler/system_clock.rs">
<violation number="1" location="src-tauri/src/adapters/driven/scheduler/system_clock.rs:44">
P2: This assertion is flaky because `SystemTime` is not monotonic; `b` can be less than `a` if the system clock is adjusted between calls.</violation>
</file>
<file name="src-tauri/src/domain/model/config.rs">
<violation number="1" location="src-tauri/src/domain/model/config.rs:273">
P2: `history_retention_days` is applied without normalization, so negative values can be persisted and effectively disable auto-purge.</violation>
</file>
- apply_patch + From<ConfigDto> now route history_retention_days through normalize_history_retention_days, so a crafted settings_update payload or a hand-edited config.toml can no longer persist negative values.
- HistoryPurgeWorker.spawn now anchors its first post-startup run to the persisted last-purge timestamp instead of process start; recent sentinels sleep for SECS_PER_DAY - elapsed before running, removing the up-to-47h drift after a restart shortly before the daily window.
- system_clock tests use a relative SystemTime window instead of an absolute 2026 threshold and a monotonicity assumption, both of which broke under frozen-clock CI and NTP step-backs.
- Drop the now-unused run_if_due_blocking; spawn computes the gate inline via the new pure helper initial_delay_secs.
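A sketch of what the pure helper `initial_delay_secs` mentioned above could look like, assuming Unix-second timestamps: a missing or stale sentinel yields zero (run immediately), a recent one defers by the remainder of the 24h window. The body is an assumption, not the committed implementation.

```rust
const SECS_PER_DAY: u64 = 86_400;

/// How long the worker should sleep before its first post-startup run.
fn initial_delay_secs(last_purge: Option<u64>, now_unix_secs: u64) -> u64 {
    match last_purge {
        // No sentinel (or it failed to parse): purge right away.
        None => 0,
        Some(ts) => {
            let elapsed = now_unix_secs.saturating_sub(ts);
            // A stale sentinel (>= 24h old) saturates to 0; a recent one
            // waits out the rest of the daily window.
            SECS_PER_DAY.saturating_sub(elapsed)
        }
    }
}
```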
♻️ Duplicate comments (1)
src-tauri/src/adapters/driven/config/toml_config_store.rs (1)
126-134: ⚠️ Potential issue | 🟠 Major: Normalize retention in `update_config` before write/return.

At Line 132, `apply_patch` can set `history_retention_days` to an invalid negative value, and at Line 133 it is persisted as-is. Normalization currently happens on reload (Line 254), so this path can still emit/store invalid state.

🔧 Suggested fix

```diff
 fn update_config(&self, patch: ConfigPatch) -> Result<AppConfig, DomainError> {
     let _guard = self
         .lock
         .lock()
         .map_err(|e| DomainError::StorageError(format!("config lock poisoned: {e}")))?;
     let mut config = self.read_or_default()?;
     apply_patch(&mut config, &patch);
+    config.history_retention_days =
+        normalize_history_retention_days(config.history_retention_days);
     self.write_config(&config)?;
     Ok(config)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/config/toml_config_store.rs` around lines 126 - 134, update_config currently applies the incoming ConfigPatch via apply_patch and writes it out without normalizing fields, allowing history_retention_days to be set to a negative value; ensure you normalize the mutated config before persisting and returning by calling the existing normalization routine (e.g. normalize_config or the same logic used in read_or_default/read path) after apply_patch and before self.write_config(&config) and Ok(config), so update_config always enforces valid history_retention_days and other invariants.
🧹 Nitpick comments (1)
src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs (1)
54-60: Clarify re-enable behavior in `run_once` docs.

Line 57-Line 59 says the next "due check" fires as soon as retention is re-enabled, but the daemon loop (Line 124-Line 133) runs on a 24h cadence and does not call `run_if_due`. The wording is slightly misleading.

📝 Suggested doc tweak

```diff
-    /// sentinel — leaving the next "due" check to fire as soon as the
-    /// user re-enables retention.
+    /// sentinel. The daemon still wakes on its daily cadence; once
+    /// retention is re-enabled, the next scheduled run can purge.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs` around lines 54 - 60, The doc comment for run_once is misleading about when the "next due" check fires; clarify that run_once returns 0 and does not write the sentinel when retention_days <= 0, and that re-enabling retention does not cause an immediate purge unless run_if_due is invoked — the background daemon loop runs on a 24h cadence (not continuously) so the next purge will occur either when run_if_due is called or when the daemon loop next runs on its schedule. Update the comment on pub fn run_once(&self) to mention run_if_due and the daemon loop timing (24h) so callers understand re-enable behavior and where to trigger an immediate purge.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src-tauri/src/adapters/driven/config/toml_config_store.rs`:
- Around line 126-134: update_config currently applies the incoming ConfigPatch
via apply_patch and writes it out without normalizing fields, allowing
history_retention_days to be set to a negative value; ensure you normalize the
mutated config before persisting and returning by calling the existing
normalization routine (e.g. normalize_config or the same logic used in
read_or_default/read path) after apply_patch and before
self.write_config(&config) and Ok(config), so update_config always enforces
valid history_retention_days and other invariants.
---
Nitpick comments:
In `@src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs`:
- Around line 54-60: The doc comment for run_once is misleading about when the
"next due" check fires; clarify that run_once returns 0 and does not write the
sentinel when retention_days <= 0, and that re-enabling retention does not cause
an immediate purge unless run_if_due is invoked — the background daemon loop
runs on a 24h cadence (not continuously) so the next purge will occur either
when run_if_due is called or when the daemon loop next runs on its schedule.
Update the comment on pub fn run_once(&self) to mention run_if_due and the
daemon loop timing (24h) so callers understand re-enable behavior and where to
trigger an immediate purge.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a272b9c9-f7f9-41bf-98e6-6920a36c3981
📒 Files selected for processing (5)
- CHANGELOG.md
- src-tauri/src/adapters/driven/config/toml_config_store.rs
- src-tauri/src/adapters/driven/scheduler/history_purge_worker.rs
- src-tauri/src/adapters/driven/scheduler/system_clock.rs
- src-tauri/src/domain/model/config.rs
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (2)
- src-tauri/src/adapters/driven/scheduler/system_clock.rs
- src-tauri/src/domain/model/config.rs
Required field landed in src/types/settings.ts but six existing test mocks still constructed AppConfig literals without it, so CI's `tsc -b` step caught TS2741 across:

- src/components/__tests__/ClipboardIndicator.test.tsx
- src/hooks/__tests__/useAppEffects.test.ts
- src/layouts/__tests__/AppLayout.test.tsx
- src/stores/__tests__/settingsStore.test.ts
- src/views/LinkGrabberView/__tests__/LinkGrabberView.test.tsx
- src/views/SettingsView/__tests__/SettingsView.test.tsx

Each mock now carries `historyRetentionDays: 30` (matches the AppConfig default), so the literals satisfy the type without changing any behaviour the tests assert on.
Summary
• Add configurable history retention (7/30/90/365 days, unlimited)
• Background worker for daily auto-purge (hard delete, non-recoverable)
• Clock trait abstraction for testability
• Settings UI dropdown with i18n support (en/fr)
• Startup check skips the purge if the last run was less than 24h ago, keeping runs at least 24h apart
Type
feat
Summary by cubic
Adds configurable history retention with a daily background purge; default is 30 days and “Never” disables purging, with the first run aligned to the last successful purge to keep a 24h cadence. Also fixes TypeScript tests by adding `historyRetentionDays` to AppConfig mocks. Implements Linear task‑14.

New Features
- `history_retention_days` (presets 7/30/90/365/0) in Settings → General with en/fr i18n.
- HistoryPurgeWorker: runs at startup (aligned to last run) and every 24h; hard-deletes entries older than the retention window; persists the last run in `<app_data_dir>/.history_purge_state`; `0` is a no-op; picks up setting changes without restart.
- `Clock` port and `SystemClock`. Negative values are normalized to `0` in `apply_patch`, `From<ConfigDto>`, and the worker.

Migration
- `config.toml` files use the 30‑day default; no action needed.
- Set retention to “Never” (`0`) to keep history indefinitely. Purges are permanent.

Written for commit f5e1891. Summary will update on new commits.