
Order flow in message and synch message at boot#48

Merged
arkanoider merged 25 commits into main from synch-message-boot
Apr 11, 2026

Conversation

Collaborator

@arkanoider arkanoider commented Apr 2, 2026

Summary

This PR bundles work from 0964e55 (feat: new ui for messages start commit) through current HEAD on synch-message-boot (12 commits after merge base d6502b4 / PR #47).

Highlights:

  • Messages tab: buy/sell flow, timeline steps (StepLabel), rating, navigation and key handlers.
  • DM router: single listener, wait_for_dm / TrackOrder, GiftWrap filters (LiveOnly uses .limit(0)), parse_dm_events_single, DmListenerStartupReplay.
  • Startup: hydrate active orders from DB, fetch_events replay so Messages repopulate after restart, last_seen_dm_ts cursor, StartupDmHydration::empty() fallbacks.
  • Network: offline overlay, relay reachability, reconnect / key-reload paths aligned with DM listener respawn.
  • Shared helpers: filter_giftwrap_to_recipient, docs index (README.md, docs/README.md), related protocol/DM docs.
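
The startup-hydration path in the highlights above can be sketched roughly as follows. The struct and field names (`StartupDmHydration`, `last_seen_dm_ts`, `active_order_trade_indices`) follow this PR's description, but the exact shapes in the code may differ; this is an illustrative sketch, not the actual implementation.

```rust
use std::collections::HashMap;

/// Per-order DM state loaded from the local DB at boot
/// (sketch; field names follow the PR description).
#[derive(Debug, Default, PartialEq)]
pub struct StartupDmHydration {
    /// trade_index per active (non-terminal) order id
    pub active_order_trade_indices: HashMap<String, i64>,
    /// last seen DM timestamp (unix seconds) per order id
    pub order_last_seen_dm_ts: HashMap<String, i64>,
}

impl StartupDmHydration {
    /// Fallback used when the DB query fails: start with no known
    /// orders so the listener simply subscribes from "now".
    pub fn empty() -> Self {
        Self::default()
    }
}

/// Build hydration state from (order_id, trade_index, last_seen_dm_ts) rows.
pub fn hydrate_from_rows(rows: &[(&str, i64, i64)]) -> StartupDmHydration {
    let mut h = StartupDmHydration::empty();
    for (id, trade_index, last_seen) in rows {
        h.active_order_trade_indices.insert((*id).to_string(), *trade_index);
        h.order_last_seen_dm_ts.insert((*id).to_string(), *last_seen);
    }
    h
}
```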

Summary by CodeRabbit

  • New Features

    • Offline network detection with a non-blocking retry overlay and automatic reconnect
    • In-app 1–5 star rating overlay (triggerable from Messages) and rating send flow
    • Redesigned Messages tab: sidebar + detail panel with a 6-step trade timeline and richer message details
  • Improvements

    • Silent startup replay to rebuild Messages UI and preserve read/sync cursors
    • Orders now persist trade index and DM cursor for more reliable resume/sync
    • Safer relay/connect and automatic runtime reload on reconnect
  • Documentation

    • New/expanded guides: startup/config, DM listener/router, message flow/protocol, buy/sell flows

Contributor

coderabbitai Bot commented Apr 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds network reachability monitoring and reconnect/reload flows, startup DM hydration and bounded replay for order DMs, DB schema and model changes for trade indexing and DM cursors, a messages timeline and rating UI flow, DM/filter helpers, panic-hook/fatal-restart wrappers, and extensive documentation updates.

Changes

Cohort / File(s) Summary
Documentation
README.md, docs/README.md, docs/CODING_STANDARDS.md, docs/DATABASE.md, docs/DM_LISTENER_FLOW.md, docs/MESSAGE_FLOW_AND_PROTOCOL.md, docs/STARTUP_AND_CONFIG.md, docs/TUI_INTERFACE.md, docs/buy order flow.md, docs/sell order flow.md
Added/reorganized docs: DM listener flow, buy/sell flows, DB/migration notes, startup/replay behavior, UI timelines, coding-standards re-export notes, and quick links.
Repo metadata
.gitignore
Added a comment line above the existing .cursor/ entry; no ignore behavior changed.
DB schema & models
src/db.rs, src/models.rs, tests/common/mod.rs, tests/db_tests.rs
Added trade_index and last_seen_dm_ts to orders, conditional migrations, updated Order struct (non-null is_mine, new optional fields), startup query types, last-seen update API, and tests enforcing positive trade_index.
Main & runtime wiring
src/main.rs, src/util/mod.rs, src/util/network.rs, src/util/fatal.rs
Relay trimming/reachability probe, connect_client_safely, background panic hook and fatal-restart wrapper, and re-exports; main integrates network-status channel and offline/reconnect handling.
Network monitor & offline UI
src/ui/network_status.rs, src/ui/offline_overlay.rs, src/ui/mod.rs, src/ui/app_state.rs
Added 5s network monitor, NetworkStatus enum, offline overlay renderer, AppState.offline_overlay_message, and related AppState fields/exports.
Reconnect & session reload
src/ui/key_handler/async_tasks.rs, src/ui/key_handler/mod.rs
Added RuntimeReconnectContext, reload_runtime_session_after_reconnect, fetch-scheduler reload/apply routines, extended AppChannels with network channels, and Mostro info refresh API updates.
DM router, parsing & startup hydration
src/util/dm_utils/mod.rs, src/util/dm_utils/dm_helpers.rs, src/util/filters.rs, src/util/db_utils.rs, src/util/chat_utils.rs
Added hydrate_startup_active_order_dm_state/StartupDmHydration, startup replay with notify gating, StartupSince subscription mode, filter_giftwrap_to_recipient, seed_admin_chat_last_seen, parse refactor, last-seen persistence, and related subscription/filter changes.
UI: messages, timeline, rating
src/ui/tabs/message_flow_tab.rs, src/ui/tabs/mod.rs, src/ui/tabs/tab_content.rs, src/ui/draw.rs, src/ui/orders.rs, src/ui/constants.rs, src/ui/state.rs
New message_flow_tab with 6-step timeline stepper and labels, StepLabel constants, enriched OrderMessage metadata, RatingOrderState, rating popup render, and MostroInfoFetchResult::Applied.
UI key handlers & navigation
src/ui/key_handler/enter_handlers.rs, src/ui/key_handler/message_handlers.rs, src/ui/key_handler/esc_handlers.rs, src/ui/key_handler/navigation.rs
Added UiMode::RatingOrder flow (enter/esc/left-right rating adjustment), handle_enter_rating_order calling execute_rate_user, message-selection clamping, and modal/tab behaviors.
Order utilities & rate flow
src/util/order_utils/execute_send_msg.rs, src/util/order_utils/execute_add_invoice.rs, src/util/order_utils/helper.rs, src/util/order_utils/mod.rs, src/util/order_utils/fetch_scheduler.rs
Added execute_rate_user, rating validation, extended status mappings, treat HoldInvoicePaymentAccepted as success, validate_range_amount, and wrapped background tasks with panic-catch; changed some subscription filters to .since(now).
DM subscription routing & re-exports
src/util/dm_utils/mod.rs, src/util/mod.rs
Expanded dm re-exports (hydrate_startup_active_order_dm_state, seed_admin_chat_last_seen, StartupDmHydration), exported filter_giftwrap_to_recipient, any_relay_reachable, connect_client_safely, and fatal helpers.
UI helper & admin startup
src/ui/helpers.rs, src/ui/key_handler/admin_handlers.rs, src/ui/key_handler/settings.rs
Added hydrate_app_admin_keys_from_privkey, load_admin_disputes_at_startup, admin_chat_keys_clone_for_role, and set reload flags when admin keys/role change.
Tests
tests/*
Test DB schema and integration tests updated for trade_index/last_seen_dm_ts; added test requiring positive trade_index.

Sequence Diagram(s)

sequenceDiagram
    participant Main as Main Loop
    participant NetMon as Network Monitor
    participant Relay as Relay/Network
    participant Reloader as Session Reloader
    participant DmListener as DM Listener
    participant DB as SQLite

    Main->>NetMon: spawn_network_status_monitor(initial_relays)
    loop every 5s
        NetMon->>Relay: TCP reachability probe
        Relay-->>NetMon: reachable/unreachable
        alt offline detected
            NetMon->>Main: NetworkStatus::Offline(msg)
            Main->>Main: set offline_overlay_message
        else online detected
            NetMon->>Main: NetworkStatus::Online(msg)
            Main->>Reloader: reload_runtime_session_after_reconnect(ctx)
            Reloader->>DB: hydrate_startup_active_order_dm_state(pool)
            DB-->>Reloader: StartupDmHydration (trade indices + last_seen_dm_ts)
            Reloader->>DmListener: respawn listener with hydrated state
            Reloader-->>Main: Ok / Err
            Main->>Main: clear or set reconnect-failed overlay
        end
    end
sequenceDiagram
    participant Startup as Startup
    participant DB as SQLite
    participant Hydrator as Hydrator
    participant DmListener as DM Listener
    participant Relay as Relay
    participant Replay as Replay Task

    Startup->>Hydrator: hydrate_startup_active_order_dm_state(pool)
    Hydrator->>DB: query non-terminal orders (trade_index, last_seen_dm_ts)
    DB-->>Hydrator: rows
    Hydrator-->>Startup: StartupDmHydration
    Startup->>DmListener: listen_for_order_messages(..., order_last_seen_dm_ts)
    Startup->>Replay: fetch_and_replay_startup_trade_dms(...)
    Replay->>Relay: fetch_events(trade_keys, since=last_seen)
    Relay-->>Replay: events
    Replay->>DmListener: dispatch parsed events (notify=false)
    DmListener->>DB: update_last_seen_dm_ts per order
    DmListener->>Startup: switch to live subscriptions (notify=true)
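
The replay loop in the diagram above boils down to cursor maintenance: only events newer than the stored `last_seen_dm_ts` are dispatched, and the cursor advances to the newest event seen. A minimal sketch under that assumption (timestamps as plain unix seconds; the real code operates on parsed nostr DM events):

```rust
/// Given replayed event timestamps and the stored cursor, return the
/// events to dispatch (strictly newer than the cursor) and the new
/// cursor value. Sketch only: real code works on parsed DM events.
pub fn replay_since(event_ts: &[i64], last_seen: i64) -> (Vec<i64>, i64) {
    let mut new_cursor = last_seen;
    let mut to_dispatch = Vec::new();
    for &ts in event_ts {
        if ts > last_seen {
            to_dispatch.push(ts);
            new_cursor = new_cursor.max(ts);
        }
    }
    (to_dispatch, new_cursor)
}
```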

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~110 minutes

Possibly related PRs

Suggested reviewers

  • grunch
  • Catrya
  • AndreaDiazCorreia

Poem

🐰 A rabbit hopped across the relay stream,
Counting trade indices and chasing a dream,
It replayed the DMs, kept last-seen in view,
Gave five-star hops when the rating flew,
Now the network hums — hop, replay, renew.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

Title check — ❓ Inconclusive: The title "Order flow in message and synch message at boot" is vague and grammatically awkward, making it difficult to understand the primary change without reading the detailed summary. Resolution: revise to a clearer, more concise title such as "Add order flow UI, DM listener startup hydration, and network resilience" or "Implement message flow steps, DM replay, and reconnect handling."
✅ Passed checks (2 passed)
Description check — ✅ Passed: check skipped; CodeRabbit’s high-level summary is enabled.
Docstring coverage — ✅ Passed: docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch synch-message-boot

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@arkanoider changed the title from Synch-message-boot to Order flow in message and synch message at boot on Apr 2, 2026
@arkanoider
Collaborator Author

@coderabbitai review!

Contributor

coderabbitai Bot commented Apr 2, 2026

@arkanoider: Sure! I'll start reviewing the PR right away.

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/util/dm_utils/mod.rs (1)

544-564: ⚠️ Potential issue | 🟠 Major

The stale-event check still lets stale events replace the stored row.

is_new_message only gates notifications. The code still rebuilds and stores a new OrderMessage below even when the incoming event is an older same-action duplicate, so out-of-order delivery can roll the row timestamp/content back to stale data.

Also applies to: 624-639

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/util/dm_utils/mod.rs` around lines 544 - 564, The stale-event check only
prevents notifications but still allows rebuilding and persisting a stale
OrderMessage; update the logic so that after computing is_new_message (in the
block using existing_message_data and variables like prior_sat_amount,
prior_invoice, prior_auto_popup_shown) you skip rebuilding/storing the
OrderMessage when is_new_message is false (i.e., when action equals
existing_action and timestamp <= existing_timestamp). Concretely, wrap or gate
the code that constructs and writes the OrderMessage (the same pattern found
around lines 624-639) behind an if is_new_message { ... } (or return early) so
out-of-order/older same-action events do not overwrite the stored row. Ensure
both places that rebuild/store the message use this guard.
🧹 Nitpick comments (7)
src/db.rs (1)

179-179: Update the migration log message to reflect orders table changes.

The log message only mentions "admin_disputes table" but the migration now also adds columns to the orders table (trade_index, last_seen_dm_ts). Consider updating it to be more accurate.

📝 Suggested fix
-        log::info!("Running migration: Adding missing columns to admin_disputes table");
+        log::info!("Running migration: Adding missing columns to admin_disputes and orders tables");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/db.rs` at line 179, Update the migration log message emitted by the
log::info! call so it accurately describes both schema changes: mention that the
migration adds missing columns to the admin_disputes table and also updates the
orders table by adding trade_index and last_seen_dm_ts; locate the existing
log::info! invocation (the string "Running migration: Adding missing columns to
admin_disputes table") and replace it with a message that references
admin_disputes and orders and the new column names (trade_index,
last_seen_dm_ts).
src/ui/key_handler/message_handlers.rs (1)

150-154: Consider using a more semantically appropriate waiting mode for rating submission.

UserMode::WaitingAddInvoice is reused here for the rating flow, but its name implies invoice-related waiting. While this works functionally, a dedicated WaitingRating mode would improve code clarity.

💡 Alternative approach

If adding a new mode isn't warranted now, consider adding a TODO comment to clarify the reuse:

     let default_mode = match app.user_role {
+        // Note: Reusing WaitingAddInvoice as a generic "waiting for Mostro response" state
         UserRole::User => UiMode::UserMode(UserMode::WaitingAddInvoice),
         UserRole::Admin => UiMode::AdminMode(AdminMode::Normal),
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/key_handler/message_handlers.rs` around lines 150 - 154, The code sets
app.mode to UserMode::WaitingAddInvoice for the rating flow which is
semantically misleading; update the UserMode enum to add a dedicated variant
(e.g., UserMode::WaitingRating) and set app.mode =
UiMode::UserMode(UserMode::WaitingRating) where the rating flow initializes, or
if you prefer not to add a new variant yet, replace the current assignment with
a clear TODO comment referencing UserMode::WaitingAddInvoice to explain it's
intentionally reused for ratings; modify the enum and the assignment sites (look
for UserMode, UiMode, and app.mode in message_handlers.rs) so the intent is
unambiguous.
src/ui/offline_overlay.rs (1)

40-45: Consider parameterizing the retry interval display.

The hard-coded "Retrying every 5s" string may become stale if the actual retry interval changes elsewhere in the codebase.

💡 Suggested approach

If the retry interval is configurable, consider passing it as a parameter to the function and formatting the string dynamically:

-pub fn render_offline_overlay(f: &mut ratatui::Frame, message: &str) {
+pub fn render_offline_overlay(f: &mut ratatui::Frame, message: &str, retry_secs: u64) {
     // ...
     Span::styled(
-        "Retrying every 5s",
+        format!("Retrying every {}s", retry_secs),
         // ...
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/offline_overlay.rs` around lines 40 - 45, The hard-coded "Retrying
every 5s" in the Span::styled call risks becoming stale; modify the rendering
code that produces this span (the Span::styled invocation in offline_overlay.rs)
to accept the retry interval as a parameter (e.g., add a retry_interval argument
to the rendering function or OfflineOverlay struct) and build the displayed text
dynamically (format the string using the passed interval, e.g.,
format!("Retrying every {}s", retry_interval)) before calling Span::styled so
the UI always reflects the actual configured retry interval.
src/ui/network_status.rs (1)

21-26: Consider caching settings to reduce disk I/O.

Loading settings from disk every 5 seconds adds unnecessary I/O overhead. Consider caching the relay list and only reloading when settings are explicitly changed (e.g., via a channel notification).

♻️ Alternative approach
 pub fn spawn_network_status_monitor(
     initial_relays: Vec<String>,
     network_status_tx: UnboundedSender<NetworkStatus>,
+    settings_changed_rx: Option<tokio::sync::watch::Receiver<()>>,
 ) {
     tokio::spawn(async move {
         let mut last_reachable: Option<bool> = None;
+        let mut cached_relays = initial_relays.clone();
         let mut ticker = interval(Duration::from_secs(5));
         loop {
             ticker.tick().await;
-            let relays = crate::settings::load_settings_from_disk()
-                .map(|s| s.relays)
-                .unwrap_or_else(|_| initial_relays.clone());
+            // Reload from disk only if settings changed
+            if let Some(ref mut rx) = settings_changed_rx.as_mut() {
+                if rx.has_changed().unwrap_or(false) {
+                    rx.mark_changed();
+                    if let Ok(s) = crate::settings::load_settings_from_disk() {
+                        cached_relays = s.relays;
+                    }
+                }
+            }
+            let relays = &cached_relays;
src/ui/tabs/message_flow_tab.rs (2)

229-229: Variable shadowing reduces clarity.

The parameter current_step: FlowStep is immediately shadowed by let current_step = current_step.step_number();. Consider renaming for clarity.

♻️ Suggested rename
 fn render_trade_stepper(
     f: &mut ratatui::Frame,
     area: Rect,
-    current_step: FlowStep,
+    flow_step: FlowStep,
     steps: &[StepLabel; 6],
 ) {
-    let current_step = current_step.step_number();
+    let current_step_num = flow_step.step_number();

Then use current_step_num in the comparisons at lines 245 and 249.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/tabs/message_flow_tab.rs` at line 229, The parameter named
current_step: FlowStep is being shadowed by let current_step =
current_step.step_number(); rename the shadowed variable to a clearer name
(e.g., current_step_num) and update all subsequent uses (the comparisons that
currently use current_step) to reference current_step_num; ensure you only
change the local integer variable name and leave the original parameter name and
type (current_step: FlowStep) intact so callers and the rest of the function
remain correct.

207-212: Hardcoded step count assumes array length.

The timeline details section hardcodes indices 0-5, assuming step_labels always has exactly 6 elements. While StepLabel is defined as [StepLabel; 6] in the function signature at line 227, this coupling could be fragile if the array size changes.

Consider using iteration for consistency with the stepper rendering:

♻️ Alternative using iteration
-        Line::from(Span::raw(format!("1) {}", step_labels[0].as_single_line()))),
-        Line::from(Span::raw(format!("2) {}", step_labels[1].as_single_line()))),
-        Line::from(Span::raw(format!("3) {}", step_labels[2].as_single_line()))),
-        Line::from(Span::raw(format!("4) {}", step_labels[3].as_single_line()))),
-        Line::from(Span::raw(format!("5) {}", step_labels[4].as_single_line()))),
-        Line::from(Span::raw(format!("6) {}", step_labels[5].as_single_line()))),
+        // Build step lines dynamically
+    ]);
+    let mut lines = vec![
+        Line::from(Span::styled("Current action", Style::default().add_modifier(Modifier::BOLD))),
+        Line::from(Span::raw(action_text)),
+        Line::from(Span::raw("")),
+        Line::from(Span::styled("Trade timeline", Style::default().add_modifier(Modifier::BOLD))),
+    ];
+    for (i, label) in step_labels.iter().enumerate() {
+        lines.push(Line::from(Span::raw(format!("{}) {}", i + 1, label.as_single_line()))));
+    }
+    let details = Paragraph::new(lines)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/tabs/message_flow_tab.rs` around lines 207 - 212, The timeline
rendering currently hardcodes six entries using Line::from(Span::raw(format!("N)
{}", step_labels[i].as_single_line()))) which assumes step_labels has exactly 6
elements; update the code in message_flow_tab.rs to iterate over step_labels
(e.g., for (i, label) in step_labels.iter().enumerate()) and produce the
Line::from(Span::raw(...)) entries dynamically so the rendering adapts to the
array length; reference the StepLabel type, the step_labels variable, and the
existing Line::from/Span::raw/ as_single_line() calls when replacing the
hardcoded lines.
docs/TUI_INTERFACE.md (1)

230-230: Verify source file path in documentation.

The path references src/ui/tab_content.rs, but based on the AI summary and other files in this PR, the Messages tab rendering is in src/ui/tabs/message_flow_tab.rs and the rating overlay is in src/ui/tabs/tab_content.rs. Consider updating to include the full path with the tabs/ subdirectory for consistency.

-**Source**: `src/ui/tab_content.rs` (`render_messages_tab`, `render_rating_order`)
+**Source**: `src/ui/tabs/message_flow_tab.rs` (`render_messages_tab`), `src/ui/tabs/tab_content.rs` (`render_rating_order`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/TUI_INTERFACE.md` at line 230, Documentation references the wrong source
path for the Messages tab and rating overlay; update the path to point to the
actual files under the tabs subdirectory. Change the documented source from
src/ui/tab_content.rs to use src/ui/tabs/message_flow_tab.rs for
render_messages_tab and src/ui/tabs/tab_content.rs (or the exact file containing
render_rating_order) so that the symbols render_messages_tab and
render_rating_order are mapped to their actual locations in the repo.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/STARTUP_AND_CONFIG.md`:
- Around line 181-186: Update the example tokio::select! loop in the TUI section
so it matches the documented control flow by adding the network-status handling
branch: locate the illustrative select! block (the TUI run/loop example that
currently shows Order Results, Message Notifications, User Input via
key_handler.rs, and UI Refresh) and insert a branch that receives from the
network status event channel (e.g. a receiver like network_status_rx or the
handler function handle_network_status/handle_network_event) and invokes the
existing reconnect/workflow logic for offline/online transitions; ensure the new
branch uses the same style as the other arms and references the existing
network-status handling code paths so the example is consistent with the
numbered list.

In `@src/ui/constants.rs`:
- Around line 218-222: The StepLabel for the taker in
SELL_ORDER_FLOW_STEPS_TAKER is wrong: it shows the taker chatting with "Buyer"
when the taker is the buyer and should see "Chat with Seller". Locate the
StepLabel instance inside the SELL_ORDER_FLOW_STEPS_TAKER constant and change
the bottom field from "Buyer" to "Seller" so the label reads "Chat with Seller";
ensure you update only that StepLabel entry.

In `@src/ui/key_handler/async_tasks.rs`:
- Around line 251-264: The reconnect currently aborts session tasks
(ctx.message_listener_handle.abort(), ctx.order_fetch_task.abort(),
ctx.dispute_fetch_task.abort()) and proceeds even if
connect_client_safely(ctx.client).await fails, causing the function to return Ok
and make the app think it reconnected; change the control flow so that you
attempt connect_client_safely first and, if it returns Err, propagate that error
(return Err with the connective error message) instead of continuing, or
alternately keep the current order but immediately return Err when
connect_client_safely fails so the rest of
reload_runtime_session_after_reconnect (respawning tasks / clearing session
state) is not executed unless connect_client_safely succeeds.
- Around line 258-317: The reconnect path aborts the old message listener but
leaves relay subscriptions attached to the same Client (ctx.client), causing
duplicate DMs; before spawning the new listen_for_order_messages task (and
before reusing ctx.client) iterate the Client's active subscriptions and call
client.unsubscribe(...) for each (or create a fresh Client via
connect_client_safely and replace ctx.client) so that old GiftWrap subscriptions
are removed; update code around
ctx.message_listener_handle.abort()/connect_client_safely(...) and before
tokio::spawn to perform the unsubscribe loop (or client replacement) and only
then spawn the new listener.

In `@src/ui/orders.rs`:
- Around line 382-389: When handling Sell listings with unknown role (order_kind
== Sell and is_mine == None) in sell_listing_flow_step, don't call
message_buy_flow_step_fallback because that swaps columns for buys; instead
return a sell-appropriate fallback (e.g., message_sell_flow_step_fallback or a
new neutral/message_unknown_role_flow_step_fallback that preserves seller column
order) so timeline steps remain correct for sell labels. Update the
sell_listing_flow_step function to use that sell/neutral fallback in the
Some(is_mine) else branch, and make the same change in the other sell-handling
block later in the file that mirrors this logic.

In `@src/util/dm_utils/dm_helpers.rs`:
- Around line 117-119: Guard against negative ts when constructing the Timestamp
in the GiftWrapSubscriptionMode::StartupSince arm: instead of blindly casting ts
to u64, validate or convert it safely (e.g., check ts >= 0 or use a
try_from/try_into conversion) before calling Timestamp::from so a negative
last_seen_dm_ts cannot wrap to a huge positive value; update the logic in the
GiftWrapSubscriptionMode::StartupSince branch that calls
filter_giftwrap_to_recipient(pubkey).since(Timestamp::from(...)) to perform the
safe conversion and handle the error/negative case appropriately (e.g., clamp to
zero or return an Err) referencing GiftWrapSubscriptionMode::StartupSince,
filter_giftwrap_to_recipient, and Timestamp::from.
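
The unsafe cast flagged here is the classic i64 → u64 wrap: a negative cursor cast with `as u64` becomes a huge positive value. A safe variant clamps negative cursors to zero before building the timestamp (the real code wraps this in `Timestamp::from`; plain `u64` is used here for illustration):

```rust
/// Clamp a possibly-negative last_seen_dm_ts to a valid u64 seconds
/// value instead of letting `as u64` wrap to a huge positive number.
pub fn safe_since(ts: i64) -> u64 {
    u64::try_from(ts).unwrap_or(0)
}
```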

In `@src/util/dm_utils/mod.rs`:
- Around line 109-147: The current logic in status_str_is_non_terminal causes
orders with Status::Success to be treated as terminal and dropped in
hydrate_startup_active_order_dm_state; change the logic so Success remains
non-terminal for hydration (i.e., allow replay until the rating phase finishes).
Concretely, update status_str_is_non_terminal (or adjust
is_terminal_order_status) to explicitly treat Status::Success as non-terminal
(return true for Success) so hydrate_startup_active_order_dm_state will keep
Success orders in active_order_trade_indices and order_last_seen_dm_ts; keep the
existing parsing via Status::from_str(s).ok() and preserve the existing fallback
for unknown statuses.
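
The requested behavior can be sketched with a local enum standing in for mostro's `Status` (a hypothetical subset; the real code parses the full enum via `FromStr`): `Success` stays non-terminal for hydration so DMs such as the rating exchange keep replaying until the flow truly ends.

```rust
/// Order statuses relevant to hydration (illustrative subset only).
#[derive(Debug, PartialEq)]
pub enum Status {
    Active,
    FiatSent,
    Success,
    Canceled,
    Expired,
}

/// Success is treated as non-terminal so hydration keeps replaying
/// DMs through the rating phase; only truly final states drop out.
pub fn is_non_terminal_for_hydration(status: &Status) -> bool {
    !matches!(status, Status::Canceled | Status::Expired)
}
```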

In `@src/util/order_utils/helper.rs`:
- Around line 112-122: The validation currently defaults order.min_amount and
order.max_amount to 0 which makes any positive amount fail when max_amount is
None; update the check in the function manipulating take_state so you only
enforce the lower bound when take_state.order.min_amount.is_some() and only
enforce the upper bound when take_state.order.max_amount.is_some() (e.g., use if
let Some(min) = ... and if let Some(max) = ... or match on the Options) and set
take_state.validation_error only when a checked bound is violated; leave
validation_error as None when no active bounds reject the amount.
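
The fix described above amounts to enforcing only the bounds that are actually present, rather than defaulting absent bounds to 0. A standalone sketch (the signature is hypothetical; the real `validate_range_amount` works against the order/take-state types):

```rust
/// Range-amount validation that only enforces bounds that exist.
/// Defaulting an absent max_amount to 0 made any positive amount
/// fail; here a None bound simply imposes no constraint.
pub fn validate_range_amount(
    amount: i64,
    min_amount: Option<i64>,
    max_amount: Option<i64>,
) -> Option<String> {
    if let Some(min) = min_amount {
        if amount < min {
            return Some(format!("Amount {amount} is below the minimum {min}"));
        }
    }
    if let Some(max) = max_amount {
        if amount > max {
            return Some(format!("Amount {amount} is above the maximum {max}"));
        }
    }
    None // no active bound rejects the amount
}
```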

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ba1e626d-3188-47fe-9fe3-6088c7dc5a14

📥 Commits

Reviewing files that changed from the base of the PR and between d6502b4 and e106616.

📒 Files selected for processing (46)
  • .gitignore
  • README.md
  • docs/CODING_STANDARDS.md
  • docs/DATABASE.md
  • docs/DM_LISTENER_FLOW.md
  • docs/MESSAGE_FLOW_AND_PROTOCOL.md
  • docs/README.md
  • docs/STARTUP_AND_CONFIG.md
  • docs/TUI_INTERFACE.md
  • docs/buy order flow.md
  • docs/sell order flow.md
  • src/db.rs
  • src/main.rs
  • src/models.rs
  • src/ui/app_state.rs
  • src/ui/constants.rs
  • src/ui/draw.rs
  • src/ui/key_handler/async_tasks.rs
  • src/ui/key_handler/confirmation.rs
  • src/ui/key_handler/enter_handlers.rs
  • src/ui/key_handler/esc_handlers.rs
  • src/ui/key_handler/message_handlers.rs
  • src/ui/key_handler/mod.rs
  • src/ui/key_handler/navigation.rs
  • src/ui/mod.rs
  • src/ui/network_status.rs
  • src/ui/offline_overlay.rs
  • src/ui/orders.rs
  • src/ui/state.rs
  • src/ui/tabs/message_flow_tab.rs
  • src/ui/tabs/mod.rs
  • src/ui/tabs/tab_content.rs
  • src/util/chat_utils.rs
  • src/util/db_utils.rs
  • src/util/dm_utils/dm_helpers.rs
  • src/util/dm_utils/mod.rs
  • src/util/filters.rs
  • src/util/mod.rs
  • src/util/network.rs
  • src/util/order_utils/execute_add_invoice.rs
  • src/util/order_utils/execute_send_msg.rs
  • src/util/order_utils/fetch_scheduler.rs
  • src/util/order_utils/helper.rs
  • src/util/order_utils/mod.rs
  • tests/common/mod.rs
  • tests/db_tests.rs

Comment thread docs/STARTUP_AND_CONFIG.md
Comment thread src/ui/constants.rs
Comment thread src/ui/key_handler/async_tasks.rs
Comment thread src/ui/key_handler/async_tasks.rs
Comment thread src/ui/orders.rs Outdated
Comment thread src/util/dm_utils/dm_helpers.rs
Comment thread src/util/dm_utils/mod.rs Outdated
Comment thread src/util/order_utils/helper.rs Outdated
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main.rs`:
- Around line 158-170: Normalize and deduplicate the relay list once into a
canonical Vec<String> (e.g., normalized_relays) by trimming and filtering out
empty entries, then use that same normalized_relays for client.add_relay calls,
any_relay_reachable(&normalized_relays).await, and any monitor/startup gating
instead of reading settings.relays directly; if the monitor is intended to track
live edits, change the monitor to reload/observe settings rather than holding
the startup snapshot. Ensure references to settings.relays, client.add_relay,
any_relay_reachable, and connect_client_safely are updated to use the canonical
list.

In `@src/util/fatal.rs`:
- Around line 40-60: The global panic hook installed by
install_background_panic_hook currently calls request_fatal_restart for every
panic (making recoverable panics fatal); remove the call to
request_fatal_restart from install_background_panic_hook and change the hook to
only log the panic payload and location (or otherwise record it) so it does not
force a restart, then move the fatal-restart behavior into explicit
task-boundary code that spawns critical background tasks (e.g., the places that
call spawn or spawn_blocking and functions like connect_client_safely or other
startup/reconnect routines) so those callers catch_unwind around task entry and
call request_fatal_restart only when a panic is truly unrecoverable.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a0b4a952-1de0-4ab1-bdc2-64276a6736f1

📥 Commits

Reviewing files that changed from the base of the PR and between e106616 and 15c235d.

📒 Files selected for processing (3)
  • src/main.rs
  • src/util/fatal.rs
  • src/util/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/util/mod.rs

Comment thread src/main.rs Outdated
Comment thread src/util/fatal.rs

@mostronatorcoder mostronatorcoder Bot left a comment


Review

CI passes (build, clippy, fmt, test, MSRV 1.90). The offline/reconnect infrastructure, DM startup hydration, and trade timeline UI are solid work. Three issues need to be addressed before merge — one is a blocker.


🔴 Blocker — request_id column missing from migrate_db

src/models.rs now binds request_id in INSERT (line ~274) and UPDATE (line ~307) statements, and the Order struct includes it as a field. The CREATE TABLE DDL in db.rs includes request_id INTEGER, so new installs are fine. However, migrate_db does not add request_id for existing databases that predate this PR.

Any user upgrading from a prior version will get a SQLite "table orders has no column named request_id" error on the first insert or update — crashing the order flow silently or visibly.

Fix: add has_request_id check + ALTER TABLE orders ADD COLUMN request_id INTEGER; to migrate_db, alongside has_trade_index and has_last_seen_dm_ts.


🟡 Major — reload_runtime_session_after_reconnect does not update current_mostro_pubkey

apply_pending_key_reload correctly updates *current_mostro_pubkey through the Arc<Mutex<PublicKey>> after a key change. reload_runtime_session_after_reconnect (in async_tasks.rs) does not — it calls spawn_fetch_scheduler_loops with Arc::clone(ctx.current_mostro_pubkey) but never writes the new value.

In the reconnect case the pubkey doesn't change, so this is benign today. But the omission is fragile: if settings are reloaded from disk in latest_settings with a new mostro_pubkey (possible in multi-node setups), the fetch loops will use the stale value from before the disconnect.

Fix: after connect_client_safely, mirror the same current_mostro_pubkey update that apply_pending_key_reload does, using ctx.settings.mostro_pubkey.


🟡 Major — get_startup_active_orders has no status filter; hydrates terminated orders

SELECT id, status, trade_index, trade_keys, last_seen_dm_ts
FROM orders
WHERE trade_keys IS NOT NULL AND trade_keys != ''

This returns all orders that ever had trade keys — including Success, Canceled, Expired, etc. The Rust-side status_str_is_non_terminal filter in hydrate_startup_active_order_dm_state drops them afterwards, but the query still does a full table scan and deserializes every row on every startup and reconnect.

For a heavy user with hundreds of historical orders this will be increasingly slow. Push the filter to SQL:

SELECT id, status, trade_index, trade_keys, last_seen_dm_ts
FROM orders
WHERE trade_keys IS NOT NULL
  AND trade_keys != ''
  AND (status IS NULL
       OR status NOT IN ('success','canceled','canceled-by-admin',
                         'settled-by-admin','completed-by-admin',
                         'expired','cooperatively-canceled'))

This does not block merge on its own, but combined with the request_id blocker it's worth fixing in the same pass.


✅ What's good

  • connect_client_safely with AssertUnwindSafe + catch_unwind is the right pattern for the nostr-sdk panic scenario. Clean.
  • install_background_panic_hook routing panics to the fatal UI path is correct and prevents silent crashes.
  • any_relay_reachable TCP-only check (no SDK involved) avoids the very panic it guards against. Good layering.
  • Offline overlay renders as a non-modal overlay (not changing UiMode) — correct; user can still see the current screen.
  • StartupDmHydration::empty() fallback on DB error is safe.
  • Migration transaction wrapping all ALTER TABLE statements is correct.
  • GiftWrapSubscriptionMode::StartupSince(ts) / StartupCatchUp distinction for the last_seen_dm_ts cursor is well-designed.
  • trade_index <= 0 guard in Order::new_from_small_order is a good invariant check.
  • is_mine: Option<bool> → is_mine: bool cleanup is correct; the DB column is INTEGER NOT NULL.
  • Doc files are thorough and accurate.
  • Spaces in filenames (docs/buy order flow.md, docs/sell order flow.md) are functional but unconventional for a Rust project — consider renaming to docs/buy-order-flow.md at some point (not a blocker).

Comment thread src/db.rs
@@ -49,8 +49,10 @@ pub async fn init_db() -> Result<SqlitePool> {
is_mine INTEGER NOT NULL,

Blocker: migrate_db does not add request_id for existing databases.

models.rs now binds self.request_id in INSERT and UPDATE, and the DDL includes request_id INTEGER, but there is no ALTER TABLE orders ADD COLUMN request_id INTEGER in the migration path.

Upgrading users will get table orders has no column named request_id on the first write.

Fix alongside has_trade_index and has_last_seen_dm_ts:

let has_request_id = check_column_exists(pool, "orders", "request_id").await?;
// ...
if !has_request_id {
    sqlx::query("ALTER TABLE orders ADD COLUMN request_id INTEGER;")
        .execute(&mut *tx).await?;
}

@@ -5,12 +5,15 @@ use crate::ui::key_handler::EnterKeyContext;
use crate::ui::FormState;

reload_runtime_session_after_reconnect reads ctx.settings.mostro_pubkey for the fetch loops but never writes it to ctx.current_mostro_pubkey. This is harmless today (the pubkey doesn't change on reconnect) but becomes a bug if a user edits their config and then goes offline and back online.

Mirror what apply_pending_key_reload does after connect_client_safely:

if let Ok(new_pk) = PublicKey::from_str(&ctx.settings.mostro_pubkey) {
    if let Ok(mut guard) = ctx.current_mostro_pubkey.lock() {
        *guard = new_pk;
    }
}


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/ui/orders.rs`:
- Around line 291-303: The current branch returns "N/A" when Payload::Order
exists but order.kind is None, preventing the later inner.action fallback;
update the logic in the getter so that when matching Payload::Order(order) you
only return "BUY"/"SELL" if order.kind is Some(...) and otherwise do not return
immediately so execution falls through to the existing match on inner.action
(use Payload::Order, order.kind, inner.action, Action::TakeSell/TakeBuy to
locate and alter the conditional).
- Around line 449-455: The sell-maker branch in the match currently maps
Action::WaitingSellerToPay to StepLabelsSell::StepBuyerInvoice; change it so
Action::WaitingSellerToPay produces
FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment) instead, making the
mapping consistent with message_sell_flow_step_fallback() and
listing_step_from_status(Status::WaitingPayment); update the match arm in the
is_maker block that handles Action::WaitingSellerToPay so it returns
StepSellerPayment.
- Around line 539-546: In message_buy_flow_step_fallback, adjust the match arm
handling Action::HoldInvoicePaymentAccepted so it returns
FlowStep::BuyFlowStep(StepLabelsBuy::StepChatWithSeller) instead of
StepSellerPayment; currently both role-aware buy branches map
HoldInvoicePaymentAccepted to StepChatWithSeller but the fallback sends it to
StepSellerPayment causing replayed buys to jump back—update the
Action::HoldInvoicePaymentAccepted branch in message_buy_flow_step_fallback to
match the role-aware behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 257daac1-61b6-412c-a585-8e07c0ff7ce1

📥 Commits

Reviewing files that changed from the base of the PR and between 15c235d and c87195c.

📒 Files selected for processing (3)
  • src/ui/constants.rs
  • src/ui/key_handler/async_tasks.rs
  • src/ui/orders.rs
✅ Files skipped from review due to trivial changes (2)
  • src/ui/constants.rs
  • src/ui/key_handler/async_tasks.rs

Comment thread src/ui/orders.rs
Comment thread src/ui/orders.rs
Comment thread src/ui/orders.rs

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (2)
src/ui/orders.rs (1)

449-457: ⚠️ Potential issue | 🟠 Major

Map sell-maker payment requests to the seller-payment step.

For maker-side sell trades, PayInvoice / WaitingSellerToPay still highlight StepBuyerInvoice. That disagrees with listing_step_from_status(Status::WaitingPayment), the taker branch, and message_sell_flow_step_fallback(), so replayed sell-maker messages land in the wrong column. The test at Lines 663-673 should flip with the same change.

🛠️ Possible fix
     if is_maker {
         match action {
-            Action::WaitingSellerToPay | Action::PayInvoice => {
-                FlowStep::SellFlowStep(StepLabelsSell::StepBuyerInvoice)
-            }
             Action::AddInvoice | Action::WaitingBuyerInvoice => {
                 FlowStep::SellFlowStep(StepLabelsSell::StepBuyerInvoice)
             }
+            Action::WaitingSellerToPay | Action::PayInvoice => {
+                FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment)
+            }
             Action::HoldInvoicePaymentAccepted => {
                 FlowStep::SellFlowStep(StepLabelsSell::StepChatWithSeller)
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/orders.rs` around lines 449 - 457, The maker-side sell actions
PayInvoice and WaitingSellerToPay are incorrectly mapped to StepBuyerInvoice in
sell_listing_flow_step_from_action; change the mapping so these actions return
the seller-payment step (match the behavior of
listing_step_from_status(Status::WaitingPayment) and
message_sell_flow_step_fallback). Update sell_listing_flow_step_from_action to
return FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment) for
Action::PayInvoice and Action::WaitingSellerToPay, and flip the corresponding
unit test expectations that assert the flow step for maker-side sell messages to
the seller-payment step.
src/ui/key_handler/async_tasks.rs (1)

289-296: ⚠️ Potential issue | 🟠 Major

Don't tear down the live scheduler until reconnect succeeds.

This aborts the current DM/fetch tasks and unsubscribes everything, then only logs a connect_client_safely() failure. A transient connect error leaves the old session already gone and the replacement loops running on a disconnected client.

🛠️ Possible fix
-    message_listener_handle.abort();
-    order_fetch_task.abort();
-    dispute_fetch_task.abort();
-    client.unsubscribe_all().await;
-
-    if let Err(e) = connect_client_safely(client).await {
-        log::warn!("Fetch scheduler reload: failed to reconnect Nostr client: {e}");
-    }
+    connect_client_safely(client)
+        .await
+        .map_err(|e| format!("Fetch scheduler reload: failed to reconnect Nostr client: {e}"))?;
+
+    message_listener_handle.abort();
+    order_fetch_task.abort();
+    dispute_fetch_task.abort();
+    client.unsubscribe_all().await;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/key_handler/async_tasks.rs` around lines 289 - 296, Don't tear down
the running scheduler before a successful reconnect: move the calls that abort
the existing tasks and unsubscribe the client (message_listener_handle.abort(),
order_fetch_task.abort(), dispute_fetch_task.abort(),
client.unsubscribe_all().await) so they only execute after
connect_client_safely(client).await returns Ok; instead attempt reconnect first,
and only on success replace/stop the old session by aborting the handles and
unsubscribing, otherwise keep the current tasks alive and just log the failure.
Ensure you reference the same client/handles (message_listener_handle,
order_fetch_task, dispute_fetch_task, client) so the old session isn't torn down
on transient connect errors.
🧹 Nitpick comments (1)
src/models.rs (1)

479-507: Consider extracting terminal statuses to a shared constant.

The hardcoded list of terminal statuses in the SQL query could drift out of sync with mostro_core::order::Status. While the comment helpfully references the source, consider defining a constant array of terminal status strings (e.g., TERMINAL_DM_STATUSES) that can be reused if this filtering logic appears elsewhere in the codebase.

This is a minor maintainability suggestion; the current implementation is correct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/models.rs` around lines 479 - 507, Extract the hardcoded terminal DM
statuses used in the SQL WHERE clause of get_startup_active_orders into a shared
constant (e.g., TERMINAL_DM_STATUSES) and reference that constant when building
the query so the list is maintained in one place; update any other places that
apply the same filtering (or import the constant) to avoid drift from
mostro_core::order::Status and ensure the SQL string is constructed or
parameterized from that single constant.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main.rs`:
- Around line 255-258: The current code calls
load_admin_disputes_at_startup(...) and admin_chat_keys_for_role(...) once at
startup using app.user_role, which leaves admin_disputes_in_progress and
admin_chat_keys unset if the app later switches to admin; to fix, persist the
parsed admin_keys in state (e.g., keep admin_keys in AppState) and stop deriving
admin-only view only at startup — either (A) compute admin_chat_keys lazily
wherever you use it by calling admin_chat_keys_for_role(app.user_role,
admin_keys) at access time, or (B) invoke the existing bootstrap when entering
admin role from switch_role (call load_admin_disputes_at_startup(&pool, &mut
app, admin_keys.as_ref()) and recompute admin_chat_keys there); update
switch_role to trigger the rehydration or ensure consumers read admin_chat_keys
dynamically so admin_disputes_in_progress and admin_chat_keys are populated when
role becomes admin.

In `@src/ui/orders.rs`:
- Around line 264-279: The compact sidebar label helper
message_action_compact_label is missing a match arm for the Action::RateReceived
variant, so rated trades fall through to the default "Message" label; update
message_action_compact_label to include Action::RateReceived => "Rate
Counterparty" (or the same label used for Action::Rate) alongside the existing
Action::Rate arm so the compact sidebar shows the rating phase correctly.

In `@src/util/dm_utils/mod.rs`:
- Around line 795-839: In fetch_and_replay_startup_trade_dms, use the stored
per-order DM cursor instead of always using now -
STARTUP_TRADE_DM_LOOKBACK_SECS: look up the last-seen timestamp for this
subscription/order (use the existing mapping structures such as
subscription_to_order or any order_last_seen_dm_ts field carried in
DmListenerStartupReplay) and set filter.since(Timestamp::from(last_seen_ts))
when present, falling back to the computed since_ts only if no cursor exists;
update the filter construction around filter_giftwrap_to_recipient(pubkey) to
apply that per-order cursor.

In `@src/util/order_utils/helper.rs`:
- Around line 104-116: The parsed f64 value from take_state.amount_input can be
NaN/Inf which parse::<f64>() considers Ok, so before computing
below_min/above_max you must reject non-finite values: after let amount = ...
add a check using amount.is_finite() and if it is false set
take_state.validation_error = Some("Invalid number format".to_string()) (or a
more specific "Non-finite number") and return; keep the rest of the range checks
that use take_state.order.min_amount and max_amount (and the below_min/above_max
logic) unchanged.

---

Duplicate comments:
In `@src/ui/key_handler/async_tasks.rs`:
- Around line 289-296: Don't tear down the running scheduler before a successful
reconnect: move the calls that abort the existing tasks and unsubscribe the
client (message_listener_handle.abort(), order_fetch_task.abort(),
dispute_fetch_task.abort(), client.unsubscribe_all().await) so they only execute
after connect_client_safely(client).await returns Ok; instead attempt reconnect
first, and only on success replace/stop the old session by aborting the handles
and unsubscribing, otherwise keep the current tasks alive and just log the
failure. Ensure you reference the same client/handles (message_listener_handle,
order_fetch_task, dispute_fetch_task, client) so the old session isn't torn down
on transient connect errors.

In `@src/ui/orders.rs`:
- Around line 449-457: The maker-side sell actions PayInvoice and
WaitingSellerToPay are incorrectly mapped to StepBuyerInvoice in
sell_listing_flow_step_from_action; change the mapping so these actions return
the seller-payment step (match the behavior of
listing_step_from_status(Status::WaitingPayment) and
message_sell_flow_step_fallback). Update sell_listing_flow_step_from_action to
return FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment) for
Action::PayInvoice and Action::WaitingSellerToPay, and flip the corresponding
unit test expectations that assert the flow step for maker-side sell messages to
the seller-payment step.

---

Nitpick comments:
In `@src/models.rs`:
- Around line 479-507: Extract the hardcoded terminal DM statuses used in the
SQL WHERE clause of get_startup_active_orders into a shared constant (e.g.,
TERMINAL_DM_STATUSES) and reference that constant when building the query so the
list is maintained in one place; update any other places that apply the same
filtering (or import the constant) to avoid drift from
mostro_core::order::Status and ensure the SQL string is constructed or
parameterized from that single constant.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: fc75e2cf-d082-4156-9146-ad682284fc3b

📥 Commits

Reviewing files that changed from the base of the PR and between c87195c and 686981e.

📒 Files selected for processing (17)
  • src/db.rs
  • src/main.rs
  • src/models.rs
  • src/ui/app_state.rs
  • src/ui/helpers.rs
  • src/ui/key_handler/async_tasks.rs
  • src/ui/key_handler/confirmation.rs
  • src/ui/key_handler/enter_handlers.rs
  • src/ui/key_handler/mod.rs
  • src/ui/network_status.rs
  • src/ui/orders.rs
  • src/util/dm_utils/dm_helpers.rs
  • src/util/dm_utils/mod.rs
  • src/util/fatal.rs
  • src/util/mod.rs
  • src/util/order_utils/fetch_scheduler.rs
  • src/util/order_utils/helper.rs
✅ Files skipped from review due to trivial changes (1)
  • src/db.rs
🚧 Files skipped from review as they are similar to previous changes (7)
  • src/ui/key_handler/confirmation.rs
  • src/util/order_utils/fetch_scheduler.rs
  • src/ui/network_status.rs
  • src/util/fatal.rs
  • src/ui/key_handler/mod.rs
  • src/util/mod.rs
  • src/util/dm_utils/dm_helpers.rs

Comment thread src/main.rs Outdated
Comment thread src/ui/orders.rs
Comment thread src/util/dm_utils/mod.rs
Comment thread src/util/order_utils/helper.rs

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src/ui/orders.rs (1)

452-458: ⚠️ Potential issue | 🟡 Minor

Inconsistent mapping for WaitingSellerToPay in sell maker branch.

For is_maker = true (sell maker is the seller), Action::WaitingSellerToPay | Action::PayInvoice maps to StepBuyerInvoice (column 1), but:

  • The taker branch (line 477-478) maps these to StepSellerPayment
  • message_sell_flow_step_fallback (line 571-572) maps to StepSellerPayment
  • listing_step_from_status(Status::WaitingPayment) (line 427-428) maps to StepSellerPayment

When the action is "waiting for seller to pay", we're in the seller payment phase (column 2), not the buyer invoice phase (column 1).

🐛 Proposed fix
 if is_maker {
     match action {
-        Action::WaitingSellerToPay | Action::PayInvoice => {
-            FlowStep::SellFlowStep(StepLabelsSell::StepBuyerInvoice)
-        }
+        Action::WaitingSellerToPay | Action::PayInvoice => {
+            FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment)
+        }
         Action::AddInvoice | Action::WaitingBuyerInvoice => {
             FlowStep::SellFlowStep(StepLabelsSell::StepBuyerInvoice)
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/orders.rs` around lines 452 - 458, The sell-maker branch in the match
on action incorrectly maps Action::WaitingSellerToPay (and Action::PayInvoice)
to FlowStep::SellFlowStep(StepLabelsSell::StepBuyerInvoice); change this mapping
so WaitingSellerToPay/PayInvoice return
FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment) to match the taker
branch, message_sell_flow_step_fallback, and
listing_step_from_status(Status::WaitingPayment). Update the match arm in the
sell maker branch (the code handling is_maker true) to use StepSellerPayment for
those Action variants.
🧹 Nitpick comments (2)
src/util/dm_utils/dm_helpers.rs (1)

162-183: Minor: Avoid cloning disputes.

The function clones app.admin_disputes_in_progress only to iterate over it. Consider borrowing directly.

♻️ Suggested simplification
 pub fn seed_admin_chat_last_seen(app: &mut AppState) {
-    let disputes = app.admin_disputes_in_progress.clone();
-
-    for dispute in &disputes {
+    for dispute in &app.admin_disputes_in_progress {

Note: This change is valid because app.admin_chat_last_seen is a separate field from admin_disputes_in_progress, so the borrow checker allows mutable access to one while borrowing the other immutably.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/util/dm_utils/dm_helpers.rs` around lines 162 - 183, The function
currently clones app.admin_disputes_in_progress into disputes just to iterate;
instead borrow it directly to avoid the clone by iterating over
&app.admin_disputes_in_progress (e.g., change the loop to for dispute in
&app.admin_disputes_in_progress) and keep the same insert logic into
app.admin_chat_last_seen inside seed_admin_chat_last_seen so the mutable access
to admin_chat_last_seen and the immutable borrow of admin_disputes_in_progress
coexist.
src/util/dm_utils/mod.rs (1)

539-566: Consider extracting the 8-element tuple to a named struct for clarity.

The tuple destructuring with 8 positional elements (e.g., (_, _, _, _, _, _, _, st) on line 566) is becoming hard to follow. A small struct like ExistingMessageData { timestamp, action, sat_amount, ... } would improve readability and reduce the risk of positional mistakes during future maintenance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/util/dm_utils/mod.rs` around lines 539 - 566, Extract the 8-tuple carried
by existing_message_data into a named struct (e.g., ExistingMessageData {
timestamp, action, sat_amount, invoice, auto_popup_shown, order_kind, is_mine,
order_status }) and update pattern matches and access sites (the match arm that
computes should_replace and the let bindings that produce prior_sat_amount,
prior_invoice, prior_auto_popup_shown, prior_order_kind, prior_is_mine,
prior_order_status) to use field names instead of positional tuple indexing;
adjust the creation/return of existing_message_data to construct the struct and
update any as_ref()/and_then()/map() uses to reference fields (e.g.,
existing.as_ref().map(|d| d.auto_popup_shown && d.action == action)).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/ui/orders.rs`:
- Around line 452-458: The sell-maker branch in the match on action incorrectly
maps Action::WaitingSellerToPay (and Action::PayInvoice) to
FlowStep::SellFlowStep(StepLabelsSell::StepBuyerInvoice); change this mapping so
WaitingSellerToPay/PayInvoice return
FlowStep::SellFlowStep(StepLabelsSell::StepSellerPayment) to match the taker
branch, message_sell_flow_step_fallback, and
listing_step_from_status(Status::WaitingPayment). Update the match arm in the
sell maker branch (the code handling is_maker true) to use StepSellerPayment for
those Action variants.

---

Nitpick comments:
In `@src/util/dm_utils/dm_helpers.rs`:
- Around line 162-183: The function currently clones
app.admin_disputes_in_progress into disputes just to iterate; instead borrow it
directly to avoid the clone by iterating over &app.admin_disputes_in_progress
(e.g., change the loop to for dispute in &app.admin_disputes_in_progress) and
keep the same insert logic into app.admin_chat_last_seen inside
seed_admin_chat_last_seen so the mutable access to admin_chat_last_seen and the
immutable borrow of admin_disputes_in_progress coexist.

In `@src/util/dm_utils/mod.rs`:
- Around line 539-566: Extract the 8-tuple carried by existing_message_data into
a named struct (e.g., ExistingMessageData { timestamp, action, sat_amount,
invoice, auto_popup_shown, order_kind, is_mine, order_status }) and update
pattern matches and access sites (the match arm that computes should_replace and
the let bindings that produce prior_sat_amount, prior_invoice,
prior_auto_popup_shown, prior_order_kind, prior_is_mine, prior_order_status) to
use field names instead of positional tuple indexing; adjust the creation/return
of existing_message_data to construct the struct and update any
as_ref()/and_then()/map() uses to reference fields (e.g.,
existing.as_ref().map(|d| d.auto_popup_shown && d.action == action)).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 89727964-da1d-45b3-9a79-4496f09b458d

📥 Commits

Reviewing files that changed from the base of the PR and between 686981e and c05aa75.

📒 Files selected for processing (12)
  • src/main.rs
  • src/models.rs
  • src/ui/app_state.rs
  • src/ui/helpers.rs
  • src/ui/key_handler/admin_handlers.rs
  • src/ui/key_handler/async_tasks.rs
  • src/ui/key_handler/confirmation.rs
  • src/ui/key_handler/settings.rs
  • src/ui/orders.rs
  • src/util/dm_utils/dm_helpers.rs
  • src/util/dm_utils/mod.rs
  • src/util/order_utils/helper.rs
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/ui/helpers.rs
  • src/models.rs

@arkanoider
Collaborator Author

@coderabbitai please review!

@coderabbitai
Contributor

coderabbitai Bot commented Apr 10, 2026

@arkanoider: Sure! I'll kick off a review of PR #48 right away.


🧠 Learnings used
Learnt from: arkanoider
Repo: MostroP2P/mostrix PR: 31
File: src/ui/key_handler/enter_handlers.rs:0-0
Timestamp: 2026-03-13T21:08:38.400Z
Learning: In mostrix (MostroP2P/mostrix), the `order_result_tx` channel is semantically reserved for order/dispute operation results (`OperationResult` variants). Do not suggest reusing it for unrelated flows such as Mostro instance info refresh. A proper background-task approach for Mostro info would require a separate channel (e.g. `mostro_info_result_tx`) or a new `OperationResult::MostroInfoLoaded(MostroInstanceInfo)` variant plus a dedicated handler in the main loop.
✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

♻️ Duplicate comments (2)
src/util/dm_utils/dm_helpers.rs (1)

117-120: ⚠️ Potential issue | 🟠 Major

Don't fall back to now() for an invalid persisted DM cursor.

If ts is negative/corrupt here, unwrap_or(Timestamp::now().as_u64()) jumps the replay window to “now minus skew”, which drops historical DMs instead of replaying from a safe floor. This is a regression from the earlier clamp-to-zero fix in this same area; failed conversions should still degrade to 0 (or return an error), not to current time.

Possible fix
-            let ts = u64::try_from(ts).unwrap_or(Timestamp::now().as_u64());
+            let ts = u64::try_from(ts).unwrap_or(0);
             let since_ts = ts.saturating_sub(super::STARTUP_GIFTWRAP_ENVELOPE_SKEW_SECS);
             filter_giftwrap_to_recipient(pubkey).since(Timestamp::from(since_ts))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/util/dm_utils/dm_helpers.rs` around lines 117 - 120, The conversion
fallback for GiftWrapSubscriptionMode::StartupSince currently uses
Timestamp::now().as_u64() which moves the replay window to "now" on
corrupt/negative persisted cursors; change the fallback to a safe floor (0)
instead of now so failed TryFrom conversions degrade to zero. In the block
handling GiftWrapSubscriptionMode::StartupSince, replace the
unwrap_or(Timestamp::now().as_u64()) behavior in the u64 conversion of ts with a
zero fallback (or explicitly clamp negative values to 0) before computing
since_ts with STARTUP_GIFTWRAP_ENVELOPE_SKEW_SECS and calling
filter_giftwrap_to_recipient(pubkey).since(Timestamp::from(...)).
src/util/dm_utils/mod.rs (1)

98-110: ⚠️ Potential issue | 🟠 Major

Keep Success non-terminal until the rating phase is done.

Startup hydration now preserves success orders, but this helper still classifies Status::Success as terminal. That means startup replay/live dispatch will immediately unsubscribe and drop tracking as soon as the last replayed DM is success, so post-success rating DMs still disappear after a restart.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/util/dm_utils/mod.rs` around lines 98 - 110, The helper is incorrectly
treating Status::Success as terminal, causing orders to be dropped before
post-success rating DMs complete; in is_terminal_order_status remove
Status::Success from the matches (i.e., only match Canceled, CanceledByAdmin,
SettledByAdmin, CompletedByAdmin, Expired, CooperativelyCanceled) so Success is
considered non-terminal until the rating phase finishes and startup
hydration/dispatch will keep tracking those orders.
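The suggested classification can be sketched with a stand-in `Status` enum (the real one comes from mostro_core):

```rust
#[allow(dead_code)]
enum Status {
    Pending,
    Success,
    Canceled,
    CanceledByAdmin,
    SettledByAdmin,
    CompletedByAdmin,
    Expired,
    CooperativelyCanceled,
}

fn is_terminal_order_status(status: &Status) -> bool {
    // Success intentionally NOT matched: the order stays tracked until the
    // post-success rating exchange has finished.
    matches!(
        status,
        Status::Canceled
            | Status::CanceledByAdmin
            | Status::SettledByAdmin
            | Status::CompletedByAdmin
            | Status::Expired
            | Status::CooperativelyCanceled
    )
}

fn main() {
    assert!(!is_terminal_order_status(&Status::Success));
    assert!(is_terminal_order_status(&Status::Expired));
    println!("ok");
}
```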
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/DATABASE.md`:
- Line 108: The documentation note for the orders table is missing the newly
added orders.request_id; update the sentence that lists added columns to include
request_id alongside trade_index and last_seen_dm_ts, and make sure the callout
references the actual migration function migrate_db (src/db.rs) so the docs
match the code where migrate_db adds orders.request_id, trade_index and
last_seen_dm_ts.

In `@docs/DM_LISTENER_FLOW.md`:
- Around line 62-69: Update the startup replay doc to reflect current behavior:
instead of fetching, sorting, parsing and dispatching the full fetched window,
fetch_and_replay_startup_trade_dms now selects only the freshest parsed rumor
per order and replays that single message (with notify: false), so change the
description and the “where to look” guidance to reference that it picks the
newest parsed rumor rather than dispatching a sorted batch; mention
DmListenerStartupReplay and the notify parameter on handle_trade_dm_for_order /
dispatch_giftwrap_batch for clarity.
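The newest-rumor-per-order selection that the doc should describe can be sketched as below; `ParsedRumor` and its fields are hypothetical stand-ins for the real parsed-event type:

```rust
use std::collections::HashMap;

struct ParsedRumor {
    order_id: String,
    created_at: u64,
    payload: String,
}

// For each order, keep only the freshest parsed rumor; that single message
// is what gets replayed (with notify: false) at startup.
fn freshest_per_order(rumors: Vec<ParsedRumor>) -> HashMap<String, ParsedRumor> {
    let mut newest: HashMap<String, ParsedRumor> = HashMap::new();
    for r in rumors {
        let replace = newest
            .get(&r.order_id)
            .map(|existing| r.created_at > existing.created_at)
            .unwrap_or(true);
        if replace {
            newest.insert(r.order_id.clone(), r);
        }
    }
    newest
}

fn main() {
    let rumors = vec![
        ParsedRumor { order_id: "o1".into(), created_at: 10, payload: "older".into() },
        ParsedRumor { order_id: "o1".into(), created_at: 20, payload: "newer".into() },
    ];
    let newest = freshest_per_order(rumors);
    assert_eq!(newest["o1"].payload, "newer");
    println!("ok");
}
```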

In `@src/db.rs`:
- Around line 310-314: The DROP COLUMN migration using ALTER TABLE in the block
that checks check_column_exists(pool, "orders", "suppress_next_new_order_dm") is
unsafe on older SQLite (pre-3.35) and can crash startup; modify the migration to
detect SQLite runtime version or fall back to a safe table-copy migration: query
sqlite_version() (or PRAGMA compile_options / PRAGMA user_version) at startup
and if version < 3.35.0 perform the standard copy-and-replace pattern (CREATE
TABLE orders_new without suppress_next_new_order_dm, INSERT INTO orders_new
SELECT <columns>, DROP TABLE orders, ALTER TABLE orders_new RENAME TO orders)
instead of running ALTER TABLE ... DROP COLUMN, or alternatively switch to the
bundled sqlite feature; update the code around check_column_exists and the ALTER
TABLE orders DROP COLUMN suppress_next_new_order_dm to use this guarded path.
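One way to structure the guarded path, sketched with a version check over the `SELECT sqlite_version()` result; the SQL strings are illustrative and the real migration runs through the sqlx pool:

```rust
// ALTER TABLE ... DROP COLUMN requires SQLite >= 3.35.0.
fn supports_drop_column(version: &str) -> bool {
    let mut parts = version.split('.').map(|p| p.parse::<u32>().unwrap_or(0));
    let major = parts.next().unwrap_or(0);
    let minor = parts.next().unwrap_or(0);
    (major, minor) >= (3, 35)
}

fn migration_sql(version: &str) -> Vec<&'static str> {
    if supports_drop_column(version) {
        vec!["ALTER TABLE orders DROP COLUMN suppress_next_new_order_dm;"]
    } else {
        // Standard SQLite copy-and-replace pattern for older runtimes.
        vec![
            "CREATE TABLE orders_new (/* all columns except the dropped one */);",
            "INSERT INTO orders_new SELECT /* matching columns */ FROM orders;",
            "DROP TABLE orders;",
            "ALTER TABLE orders_new RENAME TO orders;",
        ]
    }
}

fn main() {
    assert!(supports_drop_column("3.45.1"));
    assert!(!supports_drop_column("3.31.1"));
    assert_eq!(migration_sql("3.31.1").len(), 4);
    println!("ok");
}
```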

In `@src/ui/key_handler/admin_handlers.rs`:
- Around line 217-221: The code currently calls
hydrate_app_admin_keys_from_privkey and sets app.pending_admin_disputes_reload
whenever selected_button is true, even if saving failed; change the flow so you
first call save_admin_key_to_settings (as invoked by handle_confirmation_enter)
and check its result, and only if it returns success/Ok do you call
hydrate_app_admin_keys_from_privkey(&key_string) and, if app.user_role ==
UserRole::Admin, set app.pending_admin_disputes_reload = true; if save fails, do
not mutate app.admin_keys or pending_admin_disputes_reload and surface or log
the error as appropriate.
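The save-then-hydrate ordering can be sketched like this; every type and helper below is a stub standing in for the real app state and settings code:

```rust
#[allow(dead_code)]
#[derive(PartialEq)]
enum UserRole { Admin, User }

struct App {
    user_role: UserRole,
    pending_admin_disputes_reload: bool,
    last_error: Option<String>,
}

// Stubs for the real settings/key helpers.
fn save_admin_key_to_settings(key: &str) -> Result<(), String> {
    if key.is_empty() { Err("empty key".into()) } else { Ok(()) }
}
fn hydrate_app_admin_keys_from_privkey(_app: &mut App, _key: &str) {}

fn on_confirm(app: &mut App, key_string: &str) {
    match save_admin_key_to_settings(key_string) {
        Ok(()) => {
            // Only mutate app state once the save is known to have succeeded.
            hydrate_app_admin_keys_from_privkey(app, key_string);
            if app.user_role == UserRole::Admin {
                app.pending_admin_disputes_reload = true;
            }
        }
        Err(e) => {
            // Keys and reload flag stay untouched; surface the error instead.
            app.last_error = Some(format!("Failed to save admin key: {e}"));
        }
    }
}

fn main() {
    let mut app = App {
        user_role: UserRole::Admin,
        pending_admin_disputes_reload: false,
        last_error: None,
    };
    on_confirm(&mut app, "");
    assert!(!app.pending_admin_disputes_reload);
    assert!(app.last_error.is_some());
    on_confirm(&mut app, "valid-key");
    assert!(app.pending_admin_disputes_reload);
    println!("ok");
}
```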

In `@src/ui/key_handler/async_tasks.rs`:
- Around line 292-299: The reload currently aborts message_listener_handle,
order_fetch_task, dispute_fetch_task and calls client.unsubscribe_all().await
then ignores connect_client_safely(client).await failure; change the flow so
that if connect_client_safely(client).await returns Err(e) you return that error
(or propagate it as a failure) instead of proceeding to respawn tasks—i.e.,
check the Result from connect_client_safely and bail out (propagate Err)
immediately so the fetch-scheduler reload fails when reconnection fails, leaving
higher-level code to handle rollback or retry.
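A synchronous stand-in for the suggested control flow (the real functions are async and take the Nostr client):

```rust
fn connect_client_safely(reachable: bool) -> Result<(), String> {
    if reachable { Ok(()) } else { Err("relay unreachable".into()) }
}

fn reload_fetch_scheduler(reachable: bool) -> Result<(), String> {
    // abort message_listener_handle / order_fetch_task / dispute_fetch_task,
    // then unsubscribe_all() ... (elided)

    // Was: `let _ = connect_client_safely(...)` -- the `?` propagates the
    // error so we never respawn tasks against a client that failed to connect.
    connect_client_safely(reachable)?;

    // Reached only after a successful reconnect: respawn tasks here.
    Ok(())
}

fn main() {
    assert!(reload_fetch_scheduler(true).is_ok());
    assert!(reload_fetch_scheduler(false).is_err());
    println!("ok");
}
```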

In `@src/ui/key_handler/message_handlers.rs`:
- Around line 75-96: The code currently sends OperationResult::TradeClosed
regardless of whether update_order_status(&pool_clone, &order_id.to_string(),
Status::CooperativelyCanceled).await succeeded; change the logic so the
OperationResult::TradeClosed is only created when update_order_status returns
Ok, and when it returns Err you both log the error (already present) and send
back a failure/result that surfaces the DB error and keeps the trade open
locally (e.g., send an error variant or an Info string describing "Failed to
mark cooperatively canceled: {e}" instead of OperationResult::TradeClosed).
Update the branch around update_order_status, refer to update_order_status,
Status::CooperativelyCanceled, OperationResult::TradeClosed, result_tx.send and
order_id to implement this conditional-outcome behavior.
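The conditional-outcome behavior can be sketched as follows, with stand-ins for the real `OperationResult` variants and the DB call:

```rust
#[derive(Debug, PartialEq)]
enum OperationResult {
    TradeClosed(String),
    Info(String),
}

// Stub for update_order_status(&pool, order_id, Status::CooperativelyCanceled).
fn update_order_status(db_ok: bool) -> Result<(), String> {
    if db_ok { Ok(()) } else { Err("db locked".into()) }
}

fn close_trade(order_id: &str, db_ok: bool) -> OperationResult {
    match update_order_status(db_ok) {
        // Report closure only when the status row was actually updated.
        Ok(()) => OperationResult::TradeClosed(order_id.to_string()),
        // On failure, keep the trade open locally and surface the DB error.
        Err(e) => OperationResult::Info(format!(
            "Failed to mark cooperatively canceled: {e}"
        )),
    }
}

fn main() {
    assert_eq!(
        close_trade("abc", true),
        OperationResult::TradeClosed("abc".to_string())
    );
    assert!(matches!(close_trade("abc", false), OperationResult::Info(_)));
    println!("ok");
}
```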

In `@src/ui/orders.rs`:
- Around line 329-343: The current branch collapses multiple terminal statuses
into "Trade Completed"; update the status mapping so msg.order_status is matched
against specific terminal values and returns distinct labels for
cancellation/expiration (e.g., return "Canceled" or "Expired" for
mostro_core::order::Status::Canceled | CanceledByAdmin | CooperativelyCanceled |
Expired) while leaving true success statuses
(mostro_core::order::Status::Success | SettledByAdmin | CompletedByAdmin)
returning "Trade Completed"; keep the fallback to
message_action_compact_label(&msg.message.get_inner_message_kind().action) when
msg.order_status is None or not terminal.
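The suggested mapping might look like this, using a stand-in `Status` enum; returning `None` lets the caller fall back to `message_action_compact_label(...)`:

```rust
#[allow(dead_code)]
enum Status {
    Pending,
    Success,
    SettledByAdmin,
    CompletedByAdmin,
    Canceled,
    CanceledByAdmin,
    CooperativelyCanceled,
    Expired,
}

fn terminal_status_label(status: &Status) -> Option<&'static str> {
    match status {
        // True success statuses keep the completed label.
        Status::Success | Status::SettledByAdmin | Status::CompletedByAdmin => {
            Some("Trade Completed")
        }
        // Cancellation/expiration get distinct labels instead of "Completed".
        Status::Canceled | Status::CanceledByAdmin | Status::CooperativelyCanceled => {
            Some("Canceled")
        }
        Status::Expired => Some("Expired"),
        // Non-terminal: caller falls back to the action-based compact label.
        _ => None,
    }
}

fn main() {
    assert_eq!(terminal_status_label(&Status::Canceled), Some("Canceled"));
    assert_eq!(terminal_status_label(&Status::Success), Some("Trade Completed"));
    assert_eq!(terminal_status_label(&Status::Pending), None);
    println!("ok");
}
```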

In `@src/ui/tabs/message_flow_tab.rs`:
- Around line 179-188: The current banner logic uses
message_timeline_warning(selected_action) only, which misses terminal order
statuses stored on the message; change the logic to first derive warning from
selected_msg.order_status via
message_timeline_warning(selected_msg.order_status) and fall back to
message_timeline_warning(selected_action) only if the message status is
None/empty, then use that resulting warning to set warning (instead of calling
message_timeline_warning(selected_action)) and determine warning_style by
testing whether the resolved warning came from a terminal status (i.e., whether
the message-derived warning is Some) so terminal-state warnings
(canceled/admin-canceled/cooperatively canceled) render correctly; update
references to selected_action and warning_style accordingly.
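The status-first fallback can be sketched with string stand-ins for the real status/action enums and warning helpers; the returned bool is what would drive `warning_style`:

```rust
// Stand-in for the status-derived branch of message_timeline_warning.
fn warning_for_status(status: Option<&str>) -> Option<&'static str> {
    match status {
        Some("canceled") | Some("canceled-by-admin") | Some("cooperatively-canceled") => {
            Some("This trade was canceled")
        }
        _ => None,
    }
}

// Stand-in for the action-derived branch.
fn warning_for_action(action: &str) -> Option<&'static str> {
    if action == "dispute" { Some("A dispute is open") } else { None }
}

// Prefer the terminal status stored on the message; fall back to the action.
// The bool reports whether the warning came from a terminal status.
fn resolve_warning(order_status: Option<&str>, action: &str) -> (Option<&'static str>, bool) {
    match warning_for_status(order_status) {
        Some(w) => (Some(w), true),
        None => (warning_for_action(action), false),
    }
}

fn main() {
    assert_eq!(
        resolve_warning(Some("canceled"), "fiat-sent"),
        (Some("This trade was canceled"), true)
    );
    assert_eq!(
        resolve_warning(None, "dispute"),
        (Some("A dispute is open"), false)
    );
    println!("ok");
}
```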

In `@src/util/order_utils/helper.rs`:
- Around line 107-116: The current match arms in helper.rs assign a concrete
rank (Some(1)) when kind is None for actor-dependent phases
(Status::WaitingPayment and Status::WaitingBuyerInvoice |
Status::SettledHoldInvoice), which makes actor-dependent transitions ambiguous;
change those arms so they return None when kind is None (i.e., do not assign a
default rank) so should_apply_status_transition() will preserve the existing
status if kind recovery failed; apply the same change to the analogous block at
the later occurrence (the block referenced by the reviewer around lines 158-165)
and keep the Some(...) return values for explicit
Some(mostro_core::order::Kind::Buy/Sell) cases.
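The kind-gated ranking can be sketched as below; the enum variants mirror mostro_core but the rank values themselves are illustrative:

```rust
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum Kind { Buy, Sell }

#[allow(dead_code)]
enum Status { WaitingPayment, WaitingBuyerInvoice, SettledHoldInvoice, Active }

// Actor-dependent phases only get a rank when the order kind is known; with
// kind == None the transition stays ambiguous and we return None so that
// should_apply_status_transition() preserves the existing status.
fn status_rank(status: &Status, kind: Option<Kind>) -> Option<u8> {
    match (status, kind) {
        (Status::WaitingPayment, Some(_)) => Some(1),
        (Status::WaitingBuyerInvoice | Status::SettledHoldInvoice, Some(_)) => Some(2),
        // Previously these arms fell through to Some(1) even with kind == None.
        (Status::WaitingPayment, None)
        | (Status::WaitingBuyerInvoice | Status::SettledHoldInvoice, None) => None,
        (Status::Active, _) => Some(3),
    }
}

fn main() {
    assert_eq!(status_rank(&Status::WaitingPayment, None), None);
    assert_eq!(status_rank(&Status::WaitingPayment, Some(Kind::Sell)), Some(1));
    assert_eq!(status_rank(&Status::Active, None), Some(3));
    println!("ok");
}
```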


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9f0ddac6-c216-4e41-8290-bb72f6ea53cf

📥 Commits

Reviewing files that changed from the base of the PR and between 686981e and 9d3fab3.

📒 Files selected for processing (32)
  • docs/DATABASE.md
  • docs/DM_LISTENER_FLOW.md
  • docs/MESSAGE_FLOW_AND_PROTOCOL.md
  • docs/README.md
  • docs/TUI_INTERFACE.md
  • src/db.rs
  • src/main.rs
  • src/models.rs
  • src/ui/app_state.rs
  • src/ui/constants.rs
  • src/ui/draw.rs
  • src/ui/helpers.rs
  • src/ui/key_handler/admin_handlers.rs
  • src/ui/key_handler/async_tasks.rs
  • src/ui/key_handler/confirmation.rs
  • src/ui/key_handler/enter_handlers.rs
  • src/ui/key_handler/message_handlers.rs
  • src/ui/key_handler/navigation.rs
  • src/ui/key_handler/settings.rs
  • src/ui/operation_result.rs
  • src/ui/orders.rs
  • src/ui/tabs/message_flow_tab.rs
  • src/ui/tabs/tab_content.rs
  • src/util/db_utils.rs
  • src/util/dm_utils/dm_helpers.rs
  • src/util/dm_utils/mod.rs
  • src/util/dm_utils/notifications_ch_mng.rs
  • src/util/dm_utils/order_ch_mng.rs
  • src/util/order_utils/execute_send_msg.rs
  • src/util/order_utils/helper.rs
  • src/util/order_utils/mod.rs
  • src/util/order_utils/send_new_order.rs
✅ Files skipped from review due to trivial changes (2)
  • src/ui/constants.rs
  • docs/MESSAGE_FLOW_AND_PROTOCOL.md
🚧 Files skipped from review as they are similar to previous changes (11)
  • src/util/db_utils.rs
  • src/ui/draw.rs
  • src/util/order_utils/mod.rs
  • src/util/order_utils/execute_send_msg.rs
  • src/ui/app_state.rs
  • docs/TUI_INTERFACE.md
  • src/ui/helpers.rs
  • src/ui/key_handler/confirmation.rs
  • src/models.rs
  • src/main.rs
  • src/ui/key_handler/settings.rs

…persisted `orders` in the same DB transaction as user replacement (`replace_all_atomic` and async key rotation). Update the generate-keys confirmation copy so users know local order rows are cleared.
Contributor

@mostronatorcoder mostronatorcoder Bot left a comment


I reviewed the latest state of this PR end to end. The previously blocking migration gap is fixed because migrate_db now adds request_id for existing databases, and the reconnect path now keeps current_mostro_pubkey synchronized with settings. The startup active order hydration query also applies the terminal status filter at SQL level, which avoids loading closed trades unnecessarily. I also validated the branch locally with cargo fmt --all -- --check, cargo clippy --all-targets --all-features -- -D warnings, and cargo test, all passing. From my side this is ready to merge.

Contributor

@mostronatorcoder mostronatorcoder Bot left a comment


Second pass review on the latest update completed. I reviewed the new head commit 8a6d4db and confirmed the change in message_action_compact_label_for_message improves terminal status clarity by distinguishing Success, SettledByAdmin, and CompletedByAdmin with explicit labels. I reran local validation on the current PR head with cargo fmt --all -- --check, cargo clippy --all-targets --all-features -- -D warnings, and cargo test, all passing. No new issues found in this update.

@arkanoider arkanoider merged commit d2a213e into main Apr 11, 2026
11 checks passed
@arkanoider arkanoider deleted the synch-message-boot branch April 11, 2026 06:50
