
feat(account): command handlers for account management (task 21)#125

Merged
mpiton merged 9 commits into main from
feat/task-21-commands-accounts
Apr 28, 2026

Conversation

@mpiton (Owner) commented Apr 28, 2026

Summary

• Implement 6 command handlers: add, update, delete, validate, import, export
• Add domain ports: AccountValidator, AccountCredentialStore, PassphraseCodec
• Implement KeyringAccountStore and AesGcmPbkdf2Codec adapters
• Add 7 domain events for account operations
• Full test coverage: 91-97% handlers, 95% codec

Type

feat


Summary by cubic

Adds six account command handlers and crypto/keyring plumbing for full account management and encrypted import/export. Implements task 21 with atomic, cross‑platform‑safe exports and stronger rollback/cleanup on keyring failures.

  • New Features

    • Handlers: add, update, delete, validate, export, import.
    • Ports: AccountValidator (+ ValidationOutcome), AccountCredentialStore, PassphraseCodec.
    • Adapters: KeyringAccountStore (per-account passwords), AesGcmPbkdf2Codec (AES‑256‑GCM + PBKDF2; header as AAD).
    • Events via Tauri: account-added/updated/deleted/validated/validation-failed/accounts-imported/accounts-exported.
    • CommandBus wired with builders for the new ports; handlers validate inputs and keep DB/keyring in sync.
    • Dependencies: aes-gcm, pbkdf2, hmac, rand.
  • Bug Fixes

    • add_account: on keyring write failure, roll back row, best‑effort delete partial secret, and log rollback errors.
    • update_account: on rotation failure, roll back row and restore the previous password (or delete if none).
    • delete_account: keyring delete is best‑effort so AccountDeleted always fires.
    • export_accounts: write to a temp file, fsync, then perform a single atomic rename to replace the destination; always clean up the temp file.
    • import/validate hardening: cap PBKDF2 iterations; trim inputs and skip duplicates; atomically roll back the whole batch on any entry failure (including a failed keyring write); emit AccountValidationFailed for transport/storage errors.
    • Preserve created_at across export/import so list ordering remains stable.
    • Tests: rename the test passphrase fixture to avoid secret-scanner false positives (no behavior change).
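
The atomic export described above (temp file, fsync, single rename, temp-file cleanup) can be sketched with std only; `write_atomic` and the `.tmp` suffix are illustrative names, not the PR's actual API:

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Atomic replace: write to a sibling temp file, fsync, then rename over the
// destination. On any failure the temp file is removed, never the target.
fn write_atomic(dest: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = dest.with_extension("tmp");
    let result: std::io::Result<()> = (|| {
        let mut f = File::create(&tmp)?;
        f.write_all(bytes)?;
        f.sync_all()?;                 // flush to disk before the rename
        fs::rename(&tmp, dest)         // single atomic replacement
    })();
    if result.is_err() {
        let _ = fs::remove_file(&tmp); // best-effort cleanup of the temp file
    }
    result
}
```

`std::fs::rename` replaces an existing destination on both Unix and Windows, which is what makes the single-rename approach cross-platform-safe on the same filesystem.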

Written for commit c0a8154.

Summary by CodeRabbit

  • New Features

    • Account management: add/update/delete accounts with per-account OS keyring credential storage and automatic rollback on secret-store failures
    • Account validation: test credentials, persist validation metadata (timings/traffic/valid-until) and emit success/failure events
    • Encrypted bundles: export/import accounts to an authenticated passphrase format (AES‑GCM + PBKDF2), with pre-validation and duplicate skipping
  • Events

    • UI-facing account lifecycle and import/export events emitted
  • Documentation

    • CHANGELOG expanded with account, credential, and export/import details

…k 21)

Implements the six application-layer command handlers required by the
Vue Accounts view (task 23). Each handler is wired through the existing
CommandBus via builder-style setters so older fixtures don't have to
construct the new ports.

New domain ports:
- AccountValidator (+ ValidationOutcome) — abstracts the
  hoster/debrid plugin validate_account() call.
- AccountCredentialStore — per-account keyring wrapper keyed by
  AccountId so add/delete/rotate write a single secret.
- PassphraseCodec — authenticated encryption used by export/import.

New driven adapters:
- KeyringAccountStore stores `vortex-account-{id}` entries through
  keyring-rs with a redacting error-sanitiser identical to the policy
  used by KeyringCredentialStore.
- AesGcmPbkdf2Codec implements the bundle format
  `VORTACC | version | iters | salt | nonce | ct||tag` with
  PBKDF2-HMAC-SHA256 (200 000 rounds) + AES-256-GCM and binds the
  header as AAD so a downgrade attack swapping `version` cannot pass
  authentication.
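
The bundle header described above can be sketched with std-only Rust; the field sizes here (16-byte salt, 12-byte GCM nonce) are assumptions for illustration, and the real layout lives in the AesGcmPbkdf2Codec adapter:

```rust
// Hypothetical sketch of the VORTACC header layout. Because this whole
// header is fed to AES-GCM as AAD, flipping `version` or `iters` (a
// downgrade attempt) invalidates the authentication tag.
const MAGIC: &[u8] = b"VORTACC";

fn encode_header(version: u8, iters: u32, salt: &[u8; 16], nonce: &[u8; 12]) -> Vec<u8> {
    let mut h = Vec::with_capacity(7 + 1 + 4 + 16 + 12);
    h.extend_from_slice(MAGIC);
    h.push(version);
    h.extend_from_slice(&iters.to_be_bytes()); // big-endian iteration count
    h.extend_from_slice(salt);
    h.extend_from_slice(nonce);
    h
}

fn decode_iters(header: &[u8]) -> Option<u32> {
    if header.len() < 12 || &header[..7] != MAGIC {
        return None; // wrong magic or truncated header
    }
    let mut b = [0u8; 4];
    b.copy_from_slice(&header[8..12]);
    Some(u32::from_be_bytes(b))
}
```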

Handler highlights:
- add_account validates input, persists the row, then writes the
  password — rolling the SQLite row back if the keyring write fails so
  the keyring never holds orphaned credentials.
- update_account allows partial mutation (username / password /
  account_type / enabled), preserving created_at.
- delete_account is idempotent on both SQLite and keyring.
- validate_account stamps `last_validated`, copies fresh traffic /
  expiry into the row on success, emits AccountValidated /
  AccountValidationFailed.
- export_accounts pulls plaintext passwords from the keyring,
  serialises to JSON, encrypts with the codec, writes the opaque
  bundle to disk.
- import_accounts validates every entry up-front, decrypts with the
  passphrase, skips `(service, username)` pairs already present, and
  rolls back the SQLite row when the keyring write fails — wrong
  passphrase yields ValidationError without inserting a single row.
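
The add_account rollback highlighted above is a write-then-compensate pattern; a minimal std-only sketch with in-memory stand-ins (the types here are illustrative, not the project's real AccountRepository / AccountCredentialStore ports):

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the real driven ports.
struct Repo(HashMap<String, String>); // id -> username row
struct Keyring {
    fail: bool,
    secrets: HashMap<String, String>,
}

impl Keyring {
    fn store_password(&mut self, id: &str, pw: &str) -> Result<(), String> {
        if self.fail {
            return Err("keyring unavailable".into());
        }
        self.secrets.insert(id.into(), pw.into());
        Ok(())
    }
}

fn add_account(
    repo: &mut Repo,
    store: &mut Keyring,
    id: &str,
    user: &str,
    pw: &str,
) -> Result<(), String> {
    repo.0.insert(id.into(), user.into());         // 1. persist the row
    if let Err(e) = store.store_password(id, pw) { // 2. write the secret
        repo.0.remove(id);                         // 3. roll the row back so
        return Err(e);                             //    no orphan remains
    }
    Ok(())
}
```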

Domain events AccountAdded / AccountUpdated / AccountDeleted /
AccountValidated / AccountValidationFailed / AccountsImported /
AccountsExported are forwarded by the Tauri bridge as `account-*`
browser events for the upcoming React view.

Test coverage on the new files: handlers 91-97% lines, codec 95%,
keyring sanitiser 33% (real-keyring path runs under `--ignored`).
1095 unit tests pass; clippy + fmt clean; vitest 591 tests still
green.
@github-actions bot added labels documentation (Improvements or additions to documentation), dependencies, and rust on Apr 28, 2026
@coderabbitai

coderabbitai Bot commented Apr 28, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds account management commands and driven ports plus concrete adapters: a per-account keyring credential store and an AES‑GCM+PBKDF2 passphrase codec. Implements add/update/delete/validate/import/export command handlers, domain events for account lifecycle and batch ops, CommandBus wiring, tests, fixtures, and CHANGELOG updates.

Changes

  • Build deps (src-tauri/Cargo.toml): Added crypto crates: aes-gcm, pbkdf2, hmac, rand.
  • Credential adapter (src-tauri/src/adapters/driven/credential/keyring_account_store.rs, src-tauri/src/adapters/driven/credential/mod.rs): New KeyringAccountStore implementing AccountCredentialStore with store/get/delete semantics, error sanitization, unit tests and an ignored integration test; exported from the credential mod.
  • Passphrase codec & crypto module (src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs, src-tauri/src/adapters/driven/crypto/mod.rs, src-tauri/src/adapters/driven/mod.rs): New AesGcmPbkdf2Codec implementing PassphraseCodec with a versioned bundle format (magic, version, iterations, salt, nonce, ciphertext+tag) and the header bound as AAD.
  • Event bridges / logging (src-tauri/src/adapters/driven/event/tauri_bridge.rs, src-tauri/src/adapters/driven/logging/download_log_bridge.rs): Tauri bridge maps new account DomainEvent variants to frontend event names/payloads; download-log bridge ignores account events.
  • CommandBus wiring (src-tauri/src/application/command_bus.rs): Added optional driven ports and accessors: AccountRepository, AccountCredentialStore, AccountValidator, PassphraseCodec, plus builder-style with_* setters and getters.
  • Account commands, handlers & DTOs (src-tauri/src/application/commands/mod.rs, .../add_account.rs, .../update_account.rs, .../delete_account.rs, .../validate_account.rs, .../export_accounts.rs, .../import_accounts.rs, .../tests_support.rs): Implemented add/update/delete/validate/export/import handlers with validation, repo+credential-store coordination, rollback semantics on credential-store failures, event publishing (AccountAdded/Updated/Deleted/Validated/ValidationFailed/AccountsImported/Exported), DTOs, fixtures, and comprehensive tests.
  • Domain events & ports (src-tauri/src/domain/event.rs, src-tauri/src/domain/ports/driven/account_credential_store.rs, .../account_validator.rs, .../passphrase_codec.rs, .../mod.rs): Added account lifecycle and import/export DomainEvent variants; introduced driven-port traits AccountCredentialStore, AccountValidator (+ ValidationOutcome), and PassphraseCodec, and re-exported them.
  • Public API exports (src-tauri/src/lib.rs): Re-exported new concrete adapters KeyringAccountStore and AesGcmPbkdf2Codec.
  • Docs/Changelog (CHANGELOG.md): Updated Unreleased section: documents account commands, keyring-backed credential model, AES‑GCM+PBKDF2 export format, and transactional/persistence behavior.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Frontend
  participant CommandBus
  participant Repo as AccountRepository
  participant Keyring as AccountCredentialStore
  participant EventBus
  Frontend->>CommandBus: AddAccount{service, username, password}
  CommandBus->>CommandBus: validate inputs, gen AccountId
  CommandBus->>Repo: save Account metadata (row)
  alt repo save OK
    CommandBus->>Keyring: store_password(AccountId, password)
    alt keyring OK
      CommandBus->>EventBus: publish AccountAdded{id, service}
      CommandBus-->>Frontend: Ok(AccountId)
    else keyring ERR
      CommandBus->>Repo: delete Account row (rollback)
      CommandBus-->>Frontend: Err(storage)
    end
  else repo ERR
    CommandBus-->>Frontend: Err(repo)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

"I hopped through keys and bytes with care,
I stored each secret in a tiny lair,
Bundled bundles with salt and nonce,
I sang of rollbacks, steady and fair,
New events hop out — hooray, declare! 🐇"

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — docstring coverage is 46.05%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check ✅ — the title clearly summarizes the primary change: implementing account management command handlers. It is specific, concise, and directly reflects the main objective of the pull request.
  • Linked Issues check ✅ — skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ — skipped because no linked issues were found for this pull request.
  • Description check ✅ — skipped because CodeRabbit's high-level summary is enabled.



Comment @coderabbitai help to get the list of available commands and usage tips.


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d87b3ffddf

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +64 to +67
repo.save(&next)?;

if let Some(pw) = cmd.patch.password {
    store.store_password(&cmd.id, &pw)?;


P1 Badge Make account updates atomic with keyring writes

The handler persists the updated account row before attempting password rotation, and there is no rollback if store.store_password fails. In a patch that changes metadata (e.g. username/enabled/type) and password together, a transient keyring error will return an error but still commit the SQLite mutation, leaving partially applied state and no AccountUpdated event. Write the password first or restore the previous row on keyring failure so failed updates are not partially committed.


Comment on lines +79 to +83
let existing = repo.list()?;
let mut imported = 0u32;
let mut skipped = 0u32;

for (entry, kind) in prepared {


P2 Badge Track duplicates during import, not only pre-existing rows

Duplicate detection is based on a single repo.list() snapshot taken before the loop. If the bundle itself contains two entries with the same (service_name, username) that are not already in the DB, the first insert succeeds, the second is not marked duplicate, and repo.save then fails with a unique-constraint error after partial import work has already happened. Keep a mutable seen set (seeded from existing rows) so in-bundle duplicates are counted in skipped_duplicates instead of aborting mid-run.
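
The suggested seen-set approach can be sketched with std types (function and field names here are illustrative, not the PR's code):

```rust
use std::collections::HashSet;

// Seed the seen set from rows already in the DB, then extend it while the
// bundle is processed so in-bundle duplicates are skipped rather than saved.
fn partition_imports(
    existing: &[(String, String)],     // (service_name, username) from repo.list()
    bundle: Vec<(String, String)>,
) -> (Vec<(String, String)>, u32) {
    let mut seen: HashSet<(String, String)> = existing.iter().cloned().collect();
    let mut skipped = 0u32;
    let mut to_import = Vec::new();
    for entry in bundle {
        if seen.insert(entry.clone()) {
            to_import.push(entry);     // first sighting: import it
        } else {
            skipped += 1;              // DB duplicate or in-bundle duplicate
        }
    }
    (to_import, skipped)
}
```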


Comment on lines +138 to +142
let iterations = u32::from_be_bytes(iter_bytes);
if iterations < 1_000 {
    return Err(DomainError::ValidationError(
        "export header has implausibly low PBKDF2 iteration count".into(),
    ));

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Badge Reject unbounded PBKDF2 iteration counts in import headers

The decrypt path only enforces a minimum iteration count and accepts arbitrarily large values from the file header. A crafted export file can set iterations near u32::MAX, forcing PBKDF2 to run for an extremely long time and effectively hanging account import on CPU. Add a reasonable upper bound check and fail fast with ValidationError before key derivation.
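
A bounds check along these lines would address it; the MAX_ITERATIONS value below is an assumption for illustration, not the cap the PR actually chose:

```rust
const MIN_ITERATIONS: u32 = 1_000;
const MAX_ITERATIONS: u32 = 10_000_000; // assumed cap; reject absurd headers fast

fn check_iterations(iterations: u32) -> Result<u32, String> {
    if iterations < MIN_ITERATIONS {
        return Err("export header has implausibly low PBKDF2 iteration count".into());
    }
    if iterations > MAX_ITERATIONS {
        // A crafted header near u32::MAX would otherwise pin the CPU in PBKDF2.
        return Err("export header has implausibly high PBKDF2 iteration count".into());
    }
    Ok(iterations)
}
```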



@coderabbitai (bot) left a comment


Actionable comments posted: 8

🧹 Nitpick comments (2)
src-tauri/src/adapters/driven/event/tauri_bridge.rs (1)

64-70: Please add unit tests for the new account-* bridge mappings.

The mappings look correct, but this frontend contract should be explicitly tested (event name + camelCase payload keys) for each newly added account event variant.

Also applies to: 174-198

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/event/tauri_bridge.rs` around lines 64 - 70,
Add unit tests that exercise the bridge mapping for each DomainEvent::Account*
variant (AccountAdded, AccountUpdated, AccountDeleted, AccountValidated,
AccountValidationFailed, AccountsImported, AccountsExported) and for the other
group around lines 174-198; each test should construct the DomainEvent variant,
run it through the bridge mapping function (the function that produces the event
name and payload used by the frontend), assert the returned event name equals
the expected string (e.g., "account-added"), and assert the serialized payload
uses camelCase keys (use serde_json to serialize the payload and check key
names). Locate the mapping via the DomainEvent -> bridge event conversion
function/impl in tauri_bridge.rs and add tests in the corresponding #[cfg(test)]
module verifying both name and camelCase payload for each variant.
src-tauri/src/adapters/driven/logging/download_log_bridge.rs (1)

133-140: Add regression tests for ignored account events.

The new no-op variants are correct, but they’re currently untested in this file. A small table-driven test for these account events would lock in the “do not log” contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/logging/download_log_bridge.rs` around lines
133 - 140, Add a table-driven unit test in the same module as
download_log_bridge.rs that iterates over the no-op DomainEvent variants
(DomainEvent::SettingsUpdated, AccountAdded, AccountUpdated, AccountDeleted,
AccountValidated, AccountValidationFailed, AccountsImported, AccountsExported)
and asserts that invoking the bridge's event handler (the function that
processes DomainEvent in this file) produces no log entries or side-effects; for
each variant create the event instance, call the handler (e.g., the download log
bridge's process/handle function), and verify the logger/collector remains empty
so the “do not log” contract is locked in.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CHANGELOG.md`:
- Line 13: Update the changelog wording: don't claim Account::credential_ref()
is the key used to retrieve the OS keychain secret; instead state that
Account::credential_ref() returns the credential URI
(keyring://{service}/{username}) but the new account flow stores secrets via the
AccountCredentialStore keyed by AccountId (e.g. vortex-account-{id} in
KeyringAccountStore), so lookups use the AccountCredentialStore key rather than
the credential_ref alone.

In `@src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs`:
- Around line 136-156: The code currently reads iterations from the header into
the iterations variable and passes it directly into Self::derive_key (PBKDF2);
add an upper bound check before calling Self::derive_key to reject or clamp
implausibly large counts (e.g. ensure iterations is between a MIN_ITERATIONS
(already checks >=1_000) and a new MAX_ITERATIONS constant like 10_000_000), and
return a DomainError::ValidationError with a clear message if iterations >
MAX_ITERATIONS; perform this validation right after computing iterations and
before deriving the key so a malicious export cannot force excessive CPU work.

In `@src-tauri/src/application/commands/add_account.rs`:
- Around line 50-52: When store.store_password(&id, &cmd.password) fails, the
current code calls repo.delete(&id) but swallows any error from that rollback
and returns only the keyring error; change this so rollback failures are
surfaced too: after a failed store_password capture its error (e), then attempt
repo.delete(&id) and if that returns Err(delete_err) return an error that
contains both errors (or wrap using anyhow::Context or a custom error variant)
so callers can see the original keyring error and the rollback failure; update
the error path in add_account.rs (the block around store.store_password and
repo.delete) to return a combined/paired error rather than discarding
delete_err.

In `@src-tauri/src/application/commands/delete_account.rs`:
- Around line 24-25: The current sequence calls repo.delete(&cmd.id) before
store.delete_password(&cmd.id), causing partial deletes if keyring removal
fails; change the order to call store.delete_password(&cmd.id)? first and only
call repo.delete(&cmd.id)? after that succeeds (or, if your repo supports it,
perform both operations in a transaction/atomic operation). Update any error
handling to return immediately on store.delete_password failure so the DB row is
not removed, and add/update tests around delete_account to cover keyring failure
scenarios.

In `@src-tauri/src/application/commands/import_accounts.rs`:
- Around line 56-90: The import loop currently trims only for emptiness and uses
raw values for duplicates and saving; fix by normalizing (trim) service_name and
username once during prevalidation and use those normalized strings everywhere:
create normalized_service = entry.service_name.trim().to_string() and
normalized_username = entry.username.trim().to_string() before calling
entry.parse_account_type() and push a tuple (normalized_service,
normalized_username, entry, kind) into prepared; build an existing_set from
repo.list() using the same normalized service/username comparison; maintain an
in-memory HashSet seen_pairs during prevalidation to detect and increment
skipped for payload-level duplicates before any repo.save; ensure repo.save uses
the normalized service/username values when writing so whitespace variants don’t
create distinct accounts.

In `@src-tauri/src/application/commands/update_account.rs`:
- Around line 64-68: The update currently calls repo.save(&next) before rotating
the password, which leaves the DB updated if store.store_password(&cmd.id, &pw)
later fails; change the order so that when cmd.patch.password is Some(pw) you
call store.store_password(&cmd.id, &pw) first, then call repo.save(&next); if
repo.save fails after the password was stored, compensate by removing the newly
stored credential via store.remove_password(&cmd.id) (or restore the previous
password if available), handle and surface any rollback errors (log or include
in the returned error) and ensure the same control-flow paths cover both the old
account value (prev/old) and the new next value so the operation is atomic from
the caller’s perspective.

In `@src-tauri/src/application/commands/validate_account.rs`:
- Around line 43-55: The handler currently publishes
DomainEvent::AccountValidationFailed only for DomainError::NotFound but returns
early for Err(other) without emitting the failure event; update the match arm
for Err(other) (the result of validator.validate in validate_account.rs) to call
self.event_bus().publish(DomainEvent::AccountValidationFailed { id:
cmd.id.clone(), error: other.to_string() }) before returning Err(other.into()),
ensuring upstream consumers see validation failures for transport/outage errors
as well.

In `@src-tauri/src/domain/event.rs`:
- Around line 261-266: The event variant AccountValidationFailed (with fields
id: AccountId and error: String) is documented as emitted when upstream is
unreachable but the handler handle_validate_account currently returns validator
failures directly instead of publishing this event; update
handle_validate_account to publish/send the AccountValidationFailed event
whenever credential validation fails due to upstream/unreachable or other
non-success validator errors (include the failure/error text in the error
String) and ensure other validator-returned failures still result in the
appropriate success/failure event flow so frontend subscribers relying on
account-validation-failed receive outage/failure notifications from
AccountValidationFailed.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 78707a71-7a5d-485c-8f17-422c019b7863

📥 Commits

Reviewing files that changed from the base of the PR and between ccfa4b7 and d87b3ff.

⛔ Files ignored due to path filters (1)
  • src-tauri/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (24)
  • CHANGELOG.md
  • src-tauri/Cargo.toml
  • src-tauri/src/adapters/driven/credential/keyring_account_store.rs
  • src-tauri/src/adapters/driven/credential/mod.rs
  • src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs
  • src-tauri/src/adapters/driven/crypto/mod.rs
  • src-tauri/src/adapters/driven/event/tauri_bridge.rs
  • src-tauri/src/adapters/driven/logging/download_log_bridge.rs
  • src-tauri/src/adapters/driven/mod.rs
  • src-tauri/src/application/command_bus.rs
  • src-tauri/src/application/commands/add_account.rs
  • src-tauri/src/application/commands/delete_account.rs
  • src-tauri/src/application/commands/export_accounts.rs
  • src-tauri/src/application/commands/import_accounts.rs
  • src-tauri/src/application/commands/mod.rs
  • src-tauri/src/application/commands/tests_support.rs
  • src-tauri/src/application/commands/update_account.rs
  • src-tauri/src/application/commands/validate_account.rs
  • src-tauri/src/domain/event.rs
  • src-tauri/src/domain/ports/driven/account_credential_store.rs
  • src-tauri/src/domain/ports/driven/account_validator.rs
  • src-tauri/src/domain/ports/driven/mod.rs
  • src-tauri/src/domain/ports/driven/passphrase_codec.rs
  • src-tauri/src/lib.rs

Comment thread CHANGELOG.md Outdated
Comment thread src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs
Comment on lines +50 to +52
if let Err(e) = store.store_password(&id, &cmd.password) {
    let _ = repo.delete(&id);
    return Err(e.into());


⚠️ Potential issue | 🟠 Major

Don't swallow rollback failures after a keyring error.

If store_password fails and repo.delete(&id) also fails, this returns only the keyring error while leaving the metadata row behind—the exact orphaned state this handler is trying to avoid. Please surface the rollback failure as well so partial writes are observable and testable.

🩹 Suggested fix
-        if let Err(e) = store.store_password(&id, &cmd.password) {
-            let _ = repo.delete(&id);
-            return Err(e.into());
-        }
+        if let Err(store_err) = store.store_password(&id, &cmd.password) {
+            if let Err(rollback_err) = repo.delete(&id) {
+                return Err(AppError::Domain(crate::domain::error::DomainError::StorageError(
+                    format!(
+                        "storing password for account {} failed and rollback delete also failed: {rollback_err}",
+                        id.as_str()
+                    ),
+                )));
+            }
+            return Err(store_err.into());
+        }

Comment on lines +24 to +25
repo.delete(&cmd.id)?;
store.delete_password(&cmd.id)?;

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Avoid partial delete on keyring failure.

At Line 24 you delete the DB row before deleting the keyring secret. If Line 25 fails, the command returns an error after data is already partially deleted, and credentials may be left orphaned.

Proposed minimal fix
-        repo.delete(&cmd.id)?;
-        store.delete_password(&cmd.id)?;
+        // Delete secret first so a keyring failure does not leave a
+        // partially-deleted account state in storage.
+        store.delete_password(&cmd.id)?;
+        repo.delete(&cmd.id)?;

Comment thread src-tauri/src/application/commands/import_accounts.rs
Comment thread src-tauri/src/application/commands/update_account.rs
Comment thread src-tauri/src/application/commands/validate_account.rs
Comment on lines +261 to +266
/// Emitted by `validate_account` when the credentials are rejected
/// or the upstream service is unreachable.
AccountValidationFailed {
    id: AccountId,
    error: String,
},

⚠️ Potential issue | 🟡 Minor

Keep the AccountValidationFailed contract aligned with the handler.

The comment says this event is emitted when the upstream service is unreachable, but handle_validate_account currently returns generic validator failures directly instead of publishing AccountValidationFailed. That makes this comment misleading for frontend subscribers that rely on account-validation-failed to cover outage cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/domain/event.rs` around lines 261 - 266, the variant
AccountValidationFailed (fields id: AccountId, error: String) is documented
as covering the unreachable-upstream case, but handle_validate_account
currently returns validator failures directly instead of publishing it.
Update handle_validate_account to publish AccountValidationFailed whenever
validation fails because the upstream is unreachable or the validator
returns any other non-success error (carrying the failure text in the
error String), while keeping the existing success/failure event flow, so
frontend subscribers of account-validation-failed also receive outage
notifications.
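The requested event flow can be sketched with simplified stand-in types — `ValidatorError`, `Event`, and `handle_validate` here are illustrative shapes, not the real domain types:

```rust
#[derive(Debug)]
enum Event {
    AccountValidated { id: String },
    AccountValidationFailed { id: String, error: String },
}

enum ValidatorError {
    NotFound,
    Transport(String), // upstream unreachable, timeouts, etc.
}

fn handle_validate(
    id: &str,
    outcome: Result<bool, ValidatorError>,
    events: &mut Vec<Event>,
) -> Result<(), String> {
    match outcome {
        Ok(true) => {
            events.push(Event::AccountValidated { id: id.into() });
            Ok(())
        }
        Ok(false) => {
            events.push(Event::AccountValidationFailed {
                id: id.into(),
                error: "credentials rejected".into(),
            });
            Ok(())
        }
        // Unknown account: no event, the caller gets a plain error.
        Err(ValidatorError::NotFound) => Err(format!("account {id} not found")),
        // Outages also surface as a domain event so the UI's
        // account-validation-failed subscription covers them.
        Err(ValidatorError::Transport(e)) => {
            events.push(Event::AccountValidationFailed {
                id: id.into(),
                error: e.clone(),
            });
            Err(e)
        }
    }
}
```

The key point is the `Transport` arm: the handler both publishes the failure event and propagates the error, instead of returning early with no event.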


@cubic-dev-ai cubic-dev-ai Bot left a comment


6 issues found across 25 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/application/commands/import_accounts.rs">

<violation number="1" location="src-tauri/src/application/commands/import_accounts.rs:84">
P2: Duplicate checks use a stale pre-import snapshot, so duplicates inside the same bundle are not skipped and can fail the import mid-batch.</violation>
</file>

<file name="src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs">

<violation number="1" location="src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs:139">
P1: Reject excessively high PBKDF2 iteration counts from import headers to prevent CPU-exhaustion during decryption.</violation>
</file>

<file name="src-tauri/src/application/commands/update_account.rs">

<violation number="1" location="src-tauri/src/application/commands/update_account.rs:64">
P1: Keyring write failure can leave account metadata updated even though the command returns an error. Add rollback to avoid partial updates.</violation>
</file>

<file name="src-tauri/src/application/commands/validate_account.rs">

<violation number="1" location="src-tauri/src/application/commands/validate_account.rs:54">
P2: Publish `AccountValidationFailed` before returning on non-`NotFound` validator errors; otherwise unreachable/network failures produce no domain event.</violation>
</file>

<file name="src-tauri/src/application/commands/add_account.rs">

<violation number="1" location="src-tauri/src/application/commands/add_account.rs:51">
P1: Rollback delete errors are ignored, so a failed credential write can still leave metadata persisted without credentials.</violation>
</file>

<file name="src-tauri/src/application/commands/delete_account.rs">

<violation number="1" location="src-tauri/src/application/commands/delete_account.rs:25">
P2: A keyring deletion error prevents `AccountDeleted` from being published after the DB row is already deleted, causing partial delete side effects without the expected deletion event.</violation>
</file>


@gitguardian

gitguardian Bot commented Apr 28, 2026

⚠️ GitGuardian has uncovered 2 secrets following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secrets in your pull request

GitGuardian id | Status    | Secret           | Commit  | Filename
32273582       | Triggered | Generic Password | d87b3ff | src-tauri/src/application/commands/import_accounts.rs
32273582       | Triggered | Generic Password | d87b3ff | src-tauri/src/application/commands/import_accounts.rs
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secrets safely, following best practices.
  3. Revoke and rotate these secrets.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.




- add_account: log row-rollback failures when keyring write fails
  instead of silently swallowing them (tracing::warn).
- update_account: roll the row back to the original state when a
  password rotation fails so the row never claims to have rotated
  while the keyring still holds the previous secret.
- delete_account: keep the existing row-then-keyring order but treat
  the keyring step as best-effort; log and continue instead of
  returning an error so AccountDeleted always fires when the row is
  gone.
- aes_gcm_codec: bound the iteration count read from the import
  header (`MAX_PBKDF2_ITERATIONS = 10_000_000`) to prevent a crafted
  bundle from pinning the CPU during decryption.
- import_accounts: trim service / username before validation, then
  dedup with a HashSet seeded from the existing repo so duplicates
  inside the same bundle are skipped (instead of failing midway on
  the UNIQUE constraint).
- validate_account: emit AccountValidationFailed for non-NotFound
  validator errors too so upstream outages and transport failures
  are visible to the UI, not only credential rejections.
- CHANGELOG: clarify that `Account::credential_ref()` is a logical
  reference; actual keyring storage is keyed by AccountId via
  AccountCredentialStore (introduced in task 21).

Tests added:
- test_open_rejects_excessive_iteration_header (codec).
- test_import_accounts_skips_in_bundle_duplicates (import).
- test_validate_account_storage_error_emits_validation_failed_event
  (validate).
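The header bound described above ("bound the iteration count read from the import header") amounts to a guard that runs before any key derivation. A sketch, with the constant taken from the commit message and the function name purely illustrative:

```rust
/// Upper bound for the PBKDF2 iteration count accepted from an import
/// header (value from the fix commit). Anything above this lets a
/// crafted bundle pin the CPU during decryption.
const MAX_PBKDF2_ITERATIONS: u32 = 10_000_000;

/// Validate the iteration count parsed from a bundle header before
/// deriving the key. Name is illustrative, not the real codec API.
fn check_header_iterations(iterations: u32) -> Result<u32, String> {
    if iterations == 0 {
        return Err("iteration count must be non-zero".into());
    }
    if iterations > MAX_PBKDF2_ITERATIONS {
        return Err(format!(
            "iteration count {iterations} exceeds maximum {MAX_PBKDF2_ITERATIONS}"
        ));
    }
    Ok(iterations)
}
```

Because the header is also authenticated as AAD, a tampered iteration count would additionally fail the GCM tag check, but rejecting it before PBKDF2 runs is what prevents the CPU-exhaustion path.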

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 1602695b10

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +122 to +125
repo.save(&account)?;
if let Err(e) = store.store_password(&new_id, &entry.password) {
let _ = repo.delete(&new_id);
return Err(e.into());

P2: Make import failures roll back prior successful entries

The import loop can return an error after committing earlier accounts, so callers see a failed import while part of the bundle has already been persisted. In particular, a later repo.save error (via ?) or keyring write failure only rolls back the current row, not entries imported in previous iterations, and no AccountsImported event is emitted in that path. This leaves hidden partial state and makes retries non-deterministic because some entries will now be treated as duplicates.



@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (3)
src-tauri/src/application/commands/validate_account.rs (1)

108-121: Consider deriving Clone on Account.

This helper manually reconstructs an Account from all its fields, which is essentially what Clone does. If Account derived Clone, this function could be replaced with account.clone().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/validate_account.rs` around lines 108 -
121, The helper clone_account is manually reconstructing Account via
Account::reconstruct instead of using a derived Clone; update the Account type
to derive Clone (or implement Clone) and replace usages of clone_account with
account.clone(), or remove clone_account and call account.clone() directly;
specifically modify the Account struct to add #[derive(Clone)] (or impl Clone
for Account) and change the code that calls clone_account to use Account::clone
via account.clone().
src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs (1)

38-43: Consider increasing PBKDF2 iteration count closer to OWASP minimum.

The current 200,000 iterations is documented as a tradeoff, but it's 3x below the OWASP 2024 minimum of 600,000 for PBKDF2-HMAC-SHA256. Given that import/export is an infrequent operation, the UX impact of a ~1-2 second delay may be acceptable for stronger security. The backward-compatible versioning makes this easy to bump later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs` around lines 38 - 43,
Increase the PBKDF2 iteration count constant to align with OWASP 2024 guidance:
replace the current PBKDF2_ITERATIONS value (symbol PBKDF2_ITERATIONS in
aes_gcm_codec.rs) from 200_000 to a value closer to or at the OWASP minimum
(e.g., 600_000), and update the accompanying comment to reflect the rationale
and new iteration count so versioned ciphertext remains backward-compatible.
src-tauri/src/application/commands/import_accounts.rs (1)

122-126: Consider logging rollback failures for consistency.

Unlike add_account.rs which logs rollback failures via tracing::warn, this handler silently discards delete errors with let _ = repo.delete(...). For observability parity, consider logging the rollback failure:

♻️ Proposed fix
             repo.save(&account)?;
             if let Err(e) = store.store_password(&new_id, &entry.password) {
-                let _ = repo.delete(&new_id);
+                if let Err(rollback_err) = repo.delete(&new_id) {
+                    tracing::warn!(
+                        account_id = %new_id.as_str(),
+                        keyring_error = %e,
+                        rollback_error = %rollback_err,
+                        "keyring write failed during import and account row rollback also failed; metadata is orphaned"
+                    );
+                }
                 return Err(e.into());
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/import_accounts.rs` around lines 122 -
126, When store.store_password(&new_id, &entry.password) fails, the current
rollback call uses let _ = repo.delete(&new_id) and drops any delete error;
update that error handling to log rollback failures with tracing::warn (similar
to add_account.rs) so deletion errors are observable — replace the silent
discard around repo.delete(&new_id) with a match or if-let that calls
tracing::warn! including the new_id and the delete error while still returning
the original error from store_password.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7c0fc7fd-6888-444a-a44f-9a6b5bc15e52

📥 Commits

Reviewing files that changed from the base of the PR and between d87b3ff and 1602695.

📒 Files selected for processing (7)
  • CHANGELOG.md
  • src-tauri/src/adapters/driven/crypto/aes_gcm_codec.rs
  • src-tauri/src/application/commands/add_account.rs
  • src-tauri/src/application/commands/delete_account.rs
  • src-tauri/src/application/commands/import_accounts.rs
  • src-tauri/src/application/commands/update_account.rs
  • src-tauri/src/application/commands/validate_account.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src-tauri/src/application/commands/delete_account.rs

When `import_accounts` failed mid-loop (a later `repo.save` UNIQUE
violation or a keyring write failure for the n-th entry), every entry
already imported in the same call stayed persisted. Callers saw a
failed import while half of the bundle had landed, retries became
non-deterministic because the partial entries were now treated as
duplicates, and no `AccountsImported` event was emitted to inform the
read side.

Track every successfully imported `AccountId` in a vec; on any failure
during the per-entry loop, roll the whole batch back (delete the row
and the keyring entry for each id, log per-id failures). The vec stays
local to the call so concurrent imports don't interfere.

Adds `FakeAccountCredentialStore::failing_after(n)` so the new test can
trigger the rollback path by failing the keyring write on entry 2 of
a three-entry bundle.
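The batch rollback described above can be sketched with in-memory maps standing in for the repository and keyring ports (all names here are illustrative):

```rust
use std::collections::HashMap;

/// Best-effort rollback of a partially applied import batch: delete
/// the row and the keyring entry for every tracked id, logging per-id
/// failures instead of aborting so cleanup covers as much as possible.
fn rollback_imports(
    repo: &mut HashMap<String, ()>,
    keyring: &mut HashMap<String, String>,
    imported_ids: &[String],
) {
    for id in imported_ids {
        if repo.remove(id).is_none() {
            eprintln!("warn: rollback found no row for {id}");
        }
        if keyring.remove(id).is_none() {
            eprintln!("warn: rollback found no keyring entry for {id}");
        }
    }
}
```

Keeping the id list local to the handler call, as the commit notes, is what makes concurrent imports safe: each invocation only ever rolls back rows it created itself.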

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src-tauri/src/application/commands/import_accounts.rs (1)

90-94: ⚠️ Potential issue | 🟡 Minor

Normalize existing repo pairs before seeding seen.

Imported entries are trimmed, but the repo snapshot is inserted into seen verbatim. A pre-existing "alice " row will not collide with imported "alice", so this can still admit a logical duplicate.

Small fix
         let mut seen: HashSet<(String, String)> = repo
             .list()?
             .into_iter()
-            .map(|a| (a.service_name().to_string(), a.username().to_string()))
+            .map(|a| {
+                (
+                    a.service_name().trim().to_string(),
+                    a.username().trim().to_string(),
+                )
+            })
             .collect();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/import_accounts.rs` around lines 90 - 94,
When building the initial seen HashSet from repo.list(), normalize the existing
pairs the same way imported entries are normalized — trim whitespace (and any
other normalization used by the import flow) on service_name() and username()
before inserting into seen; update the mapping that creates seen to call .trim()
on a.service_name() and a.username() (and apply any other normalization rules
your import uses) so "alice " and "alice" are treated as duplicates by seen.
🧹 Nitpick comments (1)
src-tauri/src/application/commands/tests_support.rs (1)

157-177: Add a post-write failure mode to the fake keyring.

This fake only errors before mutating entries, so the rollback tests never exercise a backend that persists the secret and then returns Err. That leaves the import rollback coverage blind to orphaned-credential leaks on the in-flight account.

Possible fixture extension
 pub(crate) struct FakeAccountCredentialStore {
     entries: Mutex<HashMap<AccountId, String>>,
     fail_on_write: bool,
+    fail_after_insert: bool,
     /// When `Some(n)`, the first `n` writes succeed and every write
     /// after that fails — used by the partial-import rollback test.
     fail_after: Option<usize>,
     write_count: Mutex<usize>,
 }
 
 impl FakeAccountCredentialStore {
     pub(crate) fn new() -> Self {
         Self {
             entries: Mutex::new(HashMap::new()),
             fail_on_write: false,
+            fail_after_insert: false,
             fail_after: None,
             write_count: Mutex::new(0),
         }
     }
+
+    pub(crate) fn failing_after_insert(mut self) -> Self {
+        self.fail_after_insert = true;
+        self
+    }
 }
 
 impl AccountCredentialStore for FakeAccountCredentialStore {
     fn store_password(&self, account_id: &AccountId, password: &str) -> Result<(), DomainError> {
         if self.fail_on_write {
             return Err(DomainError::StorageError(
                 "fake keyring write failure".into(),
             ));
         }
         if let Some(limit) = self.fail_after {
             let mut count = self.write_count.lock().unwrap();
             if *count >= limit {
                 return Err(DomainError::StorageError(
                     "fake keyring write failure (past fail_after limit)".into(),
                 ));
             }
             *count += 1;
         }
         self.entries
             .lock()
             .unwrap()
             .insert(account_id.clone(), password.to_string());
+        if self.fail_after_insert {
+            return Err(DomainError::StorageError(
+                "fake keyring write failure after insert".into(),
+            ));
+        }
         Ok(())
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/tests_support.rs` around lines 157 - 177,
The fake currently only errors before mutating entries; update the
FakeAccountCredentialStore::store_password implementation to support a
post-write failure mode: add a new field like fail_after_post_write (or
fail_on_write_post) to FakeAccountCredentialStore and, inside store_password,
perform the entries.lock().unwrap().insert(...) and increment write_count before
checking that post-write failure flag/limit, returning
Err(DomainError::StorageError(...)) when the post-write condition is met so the
entry remains persisted despite the error; keep the existing pre-write
fail_on_write behavior intact and reference the existing fields write_count,
fail_after, fail_on_write, entries and the store_password method when making
this change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9e48ea6c-847b-46d3-bd27-eab579c074f0

📥 Commits

Reviewing files that changed from the base of the PR and between 1602695 and a8459c1.

📒 Files selected for processing (2)
  • src-tauri/src/application/commands/import_accounts.rs
  • src-tauri/src/application/commands/tests_support.rs

Comment on lines +129 to +144
if let Err(e) = repo.save(&account) {
rollback_imports(repo, store, &imported_ids);
return Err(e.into());
}
if let Err(e) = store.store_password(&new_id, &entry.password) {
if let Err(rb) = repo.delete(&new_id) {
tracing::warn!(
account_id = %new_id.as_str(),
error = %rb,
"failed to roll back current import row after keyring write failure"
);
}
rollback_imports(repo, store, &imported_ids);
return Err(e.into());
}
imported_ids.push(new_id);

⚠️ Potential issue | 🟠 Major

Rollback should include the account whose password write just failed.

AccountCredentialStore::store_password does not guarantee “no side effects on Err”. If a backend writes the secret and then fails, this branch removes the row but never best-effort deletes the current keyring entry, because new_id is only tracked after success. The current fake store does not catch this because it fails pre-write.

Safer rollback shape
             if let Err(e) = repo.save(&account) {
                 rollback_imports(repo, store, &imported_ids);
                 return Err(e.into());
             }
+            imported_ids.push(new_id.clone());
             if let Err(e) = store.store_password(&new_id, &entry.password) {
-                if let Err(rb) = repo.delete(&new_id) {
-                    tracing::warn!(
-                        account_id = %new_id.as_str(),
-                        error = %rb,
-                        "failed to roll back current import row after keyring write failure"
-                    );
-                }
                 rollback_imports(repo, store, &imported_ids);
                 return Err(e.into());
             }
-            imported_ids.push(new_id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/import_accounts.rs` around lines 129 -
144, When store.store_password(&new_id, ...) fails, ensure the just-created
account is included in the best-effort rollback and its keyring entry is
explicitly cleaned: first attempt a best-effort store.delete_password(&new_id)
(or equivalent method) and log on error, then add new_id to imported_ids (so
rollback_imports(repo, store, &imported_ids) will include this account) before
calling rollback_imports and returning the error; use the existing symbols
new_id, store.store_password, store.delete_password (or the store's removal
API), imported_ids, and rollback_imports to locate and implement this change.

`AccountCredentialStore::store_password` does not promise "no side
effects on Err" — a backend that partially writes the secret before
failing would leave a stale keyring entry behind because `new_id`
only landed in `imported_ids` after a successful write.

Track the id before attempting the keyring write so `rollback_imports`
covers the failed-write entry too. The fake store fails pre-write so
existing tests still pass; this is a defensive change for real
keyring-rs backends whose error semantics aren't guaranteed.

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src-tauri/src/application/commands/import_accounts.rs (1)

90-94: ⚠️ Potential issue | 🟡 Minor

Normalize existing repo pairs when seeding seen.

At Line 90–94, imported entries are compared using trimmed (service_name, username), but existing rows are inserted into seen without trimming. This can miss duplicates against legacy whitespace-padded rows and allow semantic duplicates to be imported.

Suggested patch
         let mut seen: HashSet<(String, String)> = repo
             .list()?
             .into_iter()
-            .map(|a| (a.service_name().to_string(), a.username().to_string()))
+            .map(|a| {
+                (
+                    a.service_name().trim().to_string(),
+                    a.username().trim().to_string(),
+                )
+            })
             .collect();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/import_accounts.rs` around lines 90 - 94,
The existing repo rows inserted into the HashSet seen are not normalized, so
compare pairs can miss duplicates with legacy whitespace; update the collection
that builds seen (the let mut seen: HashSet<(String, String)> = repo.list()? ...
) to trim/normalize both service_name and username (e.g., call trim() on
a.service_name() and a.username() before converting to String) so that existing
pairs match the trimmed comparisons used later in import_accounts.rs.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1783e308-77d1-43e7-9d11-4873972ed41f

📥 Commits

Reviewing files that changed from the base of the PR and between a8459c1 and ed0727b.

📒 Files selected for processing (1)
  • src-tauri/src/application/commands/import_accounts.rs


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ed0727b098


Comment on lines +110 to +113
let new_id = AccountId::new(Uuid::new_v4().to_string());
let mut account =
Account::new(new_id.clone(), service_name, username, kind, cmd.now_ms);
if !entry.enabled {

P2: Preserve original created_at during account import

The import path reconstructs every account with cmd.now_ms as created_at, so an export/import round-trip rewrites creation timestamps instead of preserving them. Because AccountRepository::list() is defined to sort by created_at, restored accounts can reorder unpredictably and lose original metadata chronology after backup restore. ExportEntry currently omits created_at (in export_accounts.rs), so the importer has no way to keep the source value.


`ExportEntry` did not carry the source `created_at`, so importing a
bundle stamped every restored account with `cmd.now_ms`. Since
`AccountRepository::list()` is defined to sort by `created_at`,
restored backups reordered unpredictably and lost the original
chronology.

Add an optional `created_at` field to `ExportEntry`, populate it from
the source account, and make the importer use it when present (falling
back to `cmd.now_ms` for bundles produced by older versions). The
existing round-trip test now also asserts that the timestamp survives
an export/import round-trip.
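The fallback described in the commit message can be sketched stand-alone (a simplified model; the field set of the real `ExportEntry` in export_accounts.rs is assumed here, and the serde wiring is omitted):

```rust
// Hypothetical, trimmed-down ExportEntry; the real struct lives in
// export_accounts.rs and is (de)serialized with serde.
#[derive(Debug, Clone)]
struct ExportEntry {
    service_name: String,
    username: String,
    enabled: bool,
    // Optional so bundles written by older versions (which omit the
    // field) still deserialize and import.
    created_at: Option<u64>,
}

// Prefer the entry's own timestamp; fall back to "now" for old bundles.
fn effective_created_at(entry: &ExportEntry, now_ms: u64) -> u64 {
    entry.created_at.unwrap_or(now_ms)
}

fn main() {
    let old_bundle = ExportEntry {
        service_name: "mail".into(),
        username: "alice".into(),
        enabled: true,
        created_at: None,
    };
    let new_bundle = ExportEntry {
        created_at: Some(1_700_000_000_000),
        ..old_bundle.clone()
    };
    // Old bundles get cmd.now_ms; new bundles keep their chronology.
    assert_eq!(effective_created_at(&old_bundle, 42), 42);
    assert_eq!(effective_created_at(&new_bundle, 42), 1_700_000_000_000);
}
```

This keeps `AccountRepository::list()`'s `created_at` ordering stable across a backup/restore cycle while staying backward-compatible with pre-existing bundles.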

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src-tauri/src/application/commands/import_accounts.rs (1)

90-94: ⚠️ Potential issue | 🟡 Minor

Normalize existing repo pairs with the same trim logic used for imports.

Lines 90–94 seed `seen` with raw repository values, while imported entries are trimmed at lines 65–66. That can miss duplicates for legacy rows containing surrounding whitespace.

💡 Proposed fix (consistent normalization)
         let mut seen: HashSet<(String, String)> = repo
             .list()?
             .into_iter()
-            .map(|a| (a.service_name().to_string(), a.username().to_string()))
+            .map(|a| {
+                (
+                    a.service_name().trim().to_string(),
+                    a.username().trim().to_string(),
+                )
+            })
             .collect();
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

In `@src-tauri/src/application/commands/export_accounts.rs`:
- Around lines 112–117: the export write inside the
  tokio::task::spawn_blocking closure calls std::fs::write(&path, &bytes),
  which can leave the target corrupted if the write fails partway. Perform
  an atomic replace instead: write the ciphertext to a temporary file in
  the same directory (e.g., a temp name derived from path with a random
  suffix), then fs::rename(temp_path, path) to atomically replace the
  destination. Keep this logic inside the same spawn_blocking closure and
  propagate the same AppError::Storage errors if the temp write or rename
  fails.

ℹ️ Review info

Run ID: ec7360e3-2ae5-4dc3-82d9-bff343b551d5

📥 Commits

Reviewing files that changed from the base of the PR and between ed0727b and 4b98aa2.

📒 Files selected for processing (2)
  • src-tauri/src/application/commands/export_accounts.rs
  • src-tauri/src/application/commands/import_accounts.rs

Comment thread src-tauri/src/application/commands/export_accounts.rs Outdated

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 4b98aa22f1


Comment on lines +50 to +52
if let Err(e) = store.store_password(&id, &cmd.password) {
if let Err(rollback_err) = repo.delete(&id) {
tracing::warn!(

P2 Badge Remove orphan secret on failed account creation

When store.store_password fails, this path only rolls back the SQLite row and immediately returns. The AccountCredentialStore contract does not guarantee that store_password is side-effect-free on Err (your import handler explicitly treats failed writes as potentially partial), so a backend can leave a keyring secret behind even though account creation reports failure. That leaks stale credentials in the OS keyring and makes retries non-deterministic; perform a best-effort delete_password for the new id before returning.


Comment on lines +70 to +73
if let Some(pw) = cmd.patch.password
&& let Err(e) = store.store_password(&cmd.id, &pw)
{
if let Err(rollback_err) = repo.save(&account) {

P2 Badge Revert keyring password when update write errors

On password rotation failure, the handler restores only the account row and returns the keyring error. If store_password partially succeeds before returning Err (again allowed by the trait semantics you already account for in import rollback), the command reports failure but the stored credential may already be changed, leaving row state and caller expectations out of sync. Capture the previous password and restore (or delete) it on this error path so failed updates are actually atomic.


- add_account: best-effort `delete_password` after the row rollback
  when the keyring write fails. The trait does not promise "no side
  effects on Err", so a backend that partially wrote the secret would
  leak a stale credential and make retries non-deterministic.
- update_account: capture the previous password before rotation; on a
  failed rotation, restore it (or delete the entry if the account had
  no prior password) after rolling the row back. Prevents a half-
  rotated keyring entry coexisting with a row that says "no rotation
  happened".
- export_accounts: write the bundle to a sibling `<path>.vortexacc-tmp`
  and `rename` it into place so a mid-flight write/truncate failure
  cannot corrupt an existing valid bundle.
- tests_support: `FakeAccountCredentialStore` now records every
  `store_password` attempt so tests can assert that a rollback path
  re-issued the previous secret. New test covers the restore path on
  update rotation failure.
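The rollback sequence described above can be sketched with simplified stand-ins (the `Repo`/`Store` types here are hypothetical; the real handler goes through the AccountRepository and AccountCredentialStore ports and logs rollback errors via tracing):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the SQLite repository and OS keyring.
#[derive(Default)]
struct Repo(HashMap<String, String>); // id -> account row

#[derive(Default)]
struct Store {
    secrets: HashMap<String, String>,
    fail_writes: bool, // test knob to simulate a failing keyring backend
}

impl Store {
    fn store_password(&mut self, id: &str, pw: &str) -> Result<(), String> {
        // Model a backend that may leave a partial secret behind on Err:
        // the trait does not promise "no side effects on failure".
        self.secrets.insert(id.to_string(), pw.to_string());
        if self.fail_writes { Err("keyring write failed".into()) } else { Ok(()) }
    }
    fn delete_password(&mut self, id: &str) {
        self.secrets.remove(id);
    }
}

fn add_account(repo: &mut Repo, store: &mut Store, id: &str, pw: &str) -> Result<(), String> {
    repo.0.insert(id.to_string(), "row".into());
    if let Err(e) = store.store_password(id, pw) {
        repo.0.remove(id);         // roll back the row
        store.delete_password(id); // best-effort: drop any partial secret
        return Err(e);
    }
    Ok(())
}

fn main() {
    let mut repo = Repo::default();
    let mut store = Store { fail_writes: true, ..Default::default() };
    assert!(add_account(&mut repo, &mut store, "a1", "pw").is_err());
    // Neither the row nor an orphan secret survives the failed create.
    assert!(repo.0.is_empty() && store.secrets.is_empty());
}
```

The update path follows the same shape, except the "undo" step re-issues the previously captured password (or deletes the entry if none existed) instead of removing the row.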

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e2bc09a35a


tokio::task::spawn_blocking(move || -> std::io::Result<()> {
let tmp_path = path.with_extension("vortexacc-tmp");
std::fs::write(&tmp_path, &bytes)?;
std::fs::rename(&tmp_path, &path)?;

P2 Badge Allow overwrite when exporting to an existing file

The export writer stages bytes to a temp file and then calls std::fs::rename(&tmp_path, &path), but on Windows rename does not replace an existing destination. If the user exports accounts to the same filename twice (a normal backup workflow), the second export fails with a storage error instead of replacing the old bundle. This is a platform-specific regression for repeat exports; use a replace-safe flow (e.g., remove/replace destination explicitly before rename).



@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 4 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/application/commands/export_accounts.rs">

<violation number="1" location="src-tauri/src/application/commands/export_accounts.rs:120">
P2: Missing `sync_all()` before rename can corrupt data on system crash, and failed operations leak the temp file.</violation>
</file>


Comment thread src-tauri/src/application/commands/export_accounts.rs Outdated
Three issues with the previous tmp+rename flow:
- No `sync_all` before rename — a system crash right after the call
  could leave the destination pointing at a truncated file because
  the bytes were still in the page cache.
- `std::fs::rename` does not always replace an existing destination on
  Windows, so the second export to the same path failed instead of
  overwriting the previous bundle (a normal backup workflow).
- A failed rename leaked the `*.vortexacc-tmp` file on disk.

Now: open the temp file, write_all, fsync, drop, rename. If rename
fails AND the destination exists, remove it and retry once. Any
failure in the closure deletes the temp file before returning so the
filesystem stays clean.

Adds `test_export_accounts_overwrites_existing_destination` to catch
the platform-specific regression.

@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src-tauri/src/application/commands/export_accounts.rs">

<violation number="1" location="src-tauri/src/application/commands/export_accounts.rs:143">
P1: The overwrite fallback is non-atomic and can delete the last valid bundle if the second rename fails.</violation>
</file>


Comment thread src-tauri/src/application/commands/export_accounts.rs Outdated
The previous fallback removed the existing destination before
retrying the rename. If the retry also failed, the user lost their
last valid bundle — strictly worse than just letting the original
rename error surface, since the original would at least leave the
prior bundle intact.

`std::fs::rename` is an atomic replace on POSIX and on Windows in
Rust ≥ 1.81 (the project MSRV is 1.95), so the fallback was both
unnecessary and dangerous. Drop it and rely on the single rename
call.
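The final write path (temp file, write_all, fsync, single atomic rename, temp cleanup on failure) can be sketched stand-alone as follows; the mapping to AppError::Storage and the spawn_blocking wrapper from the real handler are omitted:

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

fn write_atomically(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp_path = path.with_extension("vortexacc-tmp");
    let result: std::io::Result<()> = (|| {
        let mut f = File::create(&tmp_path)?;
        f.write_all(bytes)?;
        f.sync_all()?; // flush the page cache before the rename is visible
        drop(f);
        // Single rename: atomic replace on POSIX and on modern Windows,
        // so an existing valid bundle is never left half-written.
        fs::rename(&tmp_path, path)
    })();
    if result.is_err() {
        // Best-effort cleanup so *.vortexacc-tmp never leaks on disk.
        let _ = fs::remove_file(&tmp_path);
    }
    result
}

fn main() -> std::io::Result<()> {
    let dest = std::env::temp_dir()
        .join(format!("vortexacc-demo-{}.bin", std::process::id()));
    write_atomically(&dest, b"first")?;
    write_atomically(&dest, b"second")?; // overwrite must succeed
    assert_eq!(fs::read(&dest)?, b"second");
    fs::remove_file(&dest)
}
```

Note there is deliberately no remove-and-retry fallback: if the rename fails, the previous bundle at the destination stays intact.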

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 33c395a82a


// so `*.vortexacc-tmp` never leaks on disk.
tokio::task::spawn_blocking(move || -> std::io::Result<()> {
use std::io::Write;
let tmp_path = path.with_extension("vortexacc-tmp");

P2 Badge Generate a unique temp path for each export

tmp_path is derived deterministically from path (with_extension("vortexacc-tmp")), so concurrent exports that share the same stem (for example accounts.bin and accounts.json) both write to the same temp file. That race can make one call fail during rename or move the wrong ciphertext into a destination, which can silently produce an invalid backup. Use a per-invocation unique temp filename in the same directory (e.g., random/UUID suffix or tempfile) before the final atomic rename.

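A stdlib-only sketch of the suggested per-invocation temp name (the pid + nanosecond suffix is one possible scheme; the real fix could equally use a random/UUID suffix or the tempfile crate, as the comment suggests):

```rust
use std::path::{Path, PathBuf};
use std::time::{SystemTime, UNIX_EPOCH};

// Derive a temp path that is unique per destination *and* per call, so
// accounts.bin and accounts.json no longer collide on the same
// accounts.vortexacc-tmp file during concurrent exports.
fn unique_tmp_path(dest: &Path) -> PathBuf {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_nanos())
        .unwrap_or(0);
    let name = format!(
        "{}.{}-{}.vortexacc-tmp",
        dest.file_name().and_then(|n| n.to_str()).unwrap_or("export"),
        std::process::id(),
        nanos
    );
    // Stay in the destination's directory so the final rename remains
    // a same-filesystem atomic replace.
    dest.with_file_name(name)
}

fn main() {
    let a = unique_tmp_path(Path::new("/backups/accounts.bin"));
    let b = unique_tmp_path(Path::new("/backups/accounts.json"));
    assert_ne!(a, b); // shared stems no longer share a temp file
    assert_eq!(a.parent(), Some(Path::new("/backups")));
}
```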

GitGuardian flagged the test fixture string `"passw0rd"` as a
hard-coded generic password (a false positive: it's a passphrase used
in unit tests under `#[cfg(test)]`, never compiled into release
binaries or shipped anywhere).

Rename to `"unit-test-passphrase"` so the secret scanner stops flagging
the PR. No behaviour change.
@mpiton mpiton merged commit f83264a into main Apr 28, 2026
8 checks passed

Labels

dependencies · documentation · rust
