fix(accounts): reset transient webview status on rehydrate #1536
Embedded-app tabs (Slack, Discord, WhatsApp, Telegram, ...) could open straight onto the "`<provider>` is taking longer than expected" overlay right after reopening the desktop app. One verified root cause is a redux-persist hydration leak: `Account.status` is part of the persisted accounts blob, so a `timeout` (or `loading` / `pending` / `open` / `error`) set in the previous session was replayed on boot before any new webview spawn had started, even though every CEF browser is destroyed at shutdown.

Treat any non-`closed` status as session-local and reset it to `closed` on `REHYDRATE` for the `accounts` persist key, matching the existing pattern in `notificationSlice`. The account directory, order, `activeAccountId`, `lastActiveAccountId`, and provider login sessions (owned by the CEF user-data-dir, not Redux) are untouched.

Adds an `accounts:rehydrate` debug log capturing the reset count + previous statuses so future Sentry/log captures can confirm the hydration path was taken. Covered by a new rehydrate suite that exercises every transient status, multi-account directories, the persisted MRU pointer, and `REHYDRATE` actions targeted at sibling persist keys.

Refs tinyhumansai#1379
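The reset described above can be sketched as follows. This is a minimal model, not the app's actual slice: the status union, `Account` shape, and the helper name `resetTransientStatuses` are assumptions here (only `TRANSIENT_ACCOUNT_STATUSES` is named in the review).

```typescript
// Sketch only: status union, Account shape, and helper name are assumptions.
type AccountStatus = 'closed' | 'open' | 'loading' | 'pending' | 'timeout' | 'error';

// Every status except 'closed' is session-local and must not survive a restart.
const TRANSIENT_ACCOUNT_STATUSES: ReadonlySet<AccountStatus> = new Set<AccountStatus>([
  'open', 'loading', 'pending', 'timeout', 'error',
]);

interface Account {
  id: string;
  status: AccountStatus;
  lastError?: string;
}

// On REHYDRATE for the `accounts` persist key, flip every transient status
// back to 'closed' and clear the stale error; 'closed' accounts pass through.
function resetTransientStatuses(accounts: Account[]): Account[] {
  return accounts.map((account) =>
    TRANSIENT_ACCOUNT_STATUSES.has(account.status)
      ? { ...account, status: 'closed' as const, lastError: undefined }
      : account,
  );
}
```

In the real slice the same reset runs inside the `REHYDRATE` `extraReducer`, mutating the Immer draft in place rather than mapping to a new array.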
graycyrus left a comment
Clean, well-scoped fix — the REHYDRATE handler follows the notificationSlice pattern exactly, TRANSIENT_ACCOUNT_STATUSES is exhaustive against AccountStatus, Immer mutations are correct, debug logging uses the right namespace with no PII, and the 10 test cases cover every stated invariant. Two minor polish suggestions inline — neither is a blocker.
```typescript
builder.addCase(REHYDRATE, (state, action) => {
  const rehydrateAction = action as {
    type: typeof REHYDRATE;
    key: string;
```
[minor] The `rehydrateAction` cast declares `payload?: Partial<AccountsState>` but the implementation never reads it — it works entirely off `state.accounts` (the Immer draft), which is correct because `persistReducer` merges the payload into `state` before this inner reducer fires.

A one-line comment would prevent the next reader from reaching for `payload` by mistake:
```typescript
  key: string;
  if (rehydrateAction.key !== 'accounts') return;
  // `persistReducer` merges `action.payload` into `state` before this
  // inner reducer fires, so `state.accounts` already holds the hydrated
  // data — we reset in-place rather than reading from payload.
  const reset: Array<{ id: string; previous: AccountStatus }> = [];
```
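A toy model of the merge order the comment describes, with simplified state shapes (this is not redux-persist itself, and every name below is an assumption): the persist wrapper folds the rehydration payload into state first, then calls the inner reducer, so the inner reducer can safely ignore `action.payload`.

```typescript
// Toy model of persistReducer's merge order; shapes are assumptions.
const REHYDRATE = 'persist/REHYDRATE';

type Status = 'closed' | 'open' | 'loading' | 'pending' | 'timeout' | 'error';

interface AccountsState {
  accounts: Record<string, { status: Status }>;
}

interface RehydrateAction {
  type: typeof REHYDRATE;
  key: string;
  payload?: AccountsState;
}

// Inner reducer: works entirely off `state`, never off `action.payload`.
function inner(state: AccountsState, action: RehydrateAction): AccountsState {
  if (action.type !== REHYDRATE || action.key !== 'accounts') return state;
  const accounts: AccountsState['accounts'] = {};
  for (const [id, account] of Object.entries(state.accounts)) {
    accounts[id] = account.status === 'closed' ? account : { ...account, status: 'closed' };
  }
  return { accounts };
}

// Persist wrapper: merge the rehydration payload into state *first*, then
// call the inner reducer, so `state` already holds the hydrated data.
function persistWrapped(state: AccountsState, action: RehydrateAction): AccountsState {
  const merged =
    action.type === REHYDRATE && action.payload
      ? { ...state, ...action.payload }
      : state;
  return inner(merged, action);
}
```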
```typescript
return reducer(state, { type: REHYDRATE, key, payload: state } as unknown as {
  type: typeof REHYDRATE;
});
}
```
[minor] This helper passes `state` as both the reducer's first argument and the `payload`. It works because they're the same object in isolation, but in the real app `persistReducer` merges the payload into `initialState` first — so `state` and `payload` would normally differ on a fresh reducer instance.

Not a correctness issue (the reducer reads `state.accounts`, not `payload`), but a comment here would clarify the assumption:
```typescript
// Bypasses persistReducer — state doubles as payload. This is fine
// because the slice reads from the Immer draft, not action.payload.
function rehydrate(state: AccountsState, key = 'accounts') {
```
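Put together, the helper might be exercised like this sketch. The reducer here is a simplified stand-in for the real slice reducer (the actual suite imports the slice and its types from `accountsSlice` and runs under vitest):

```typescript
// Simplified stand-ins; the real test file imports these from accountsSlice.
const REHYDRATE = 'persist/REHYDRATE';

type AccountStatus = 'closed' | 'open' | 'loading' | 'pending' | 'timeout' | 'error';

interface AccountsState {
  accounts: Record<string, { status: AccountStatus }>;
}

interface RehydrateAction {
  type: typeof REHYDRATE;
  key: string;
  payload?: AccountsState;
}

function reducer(state: AccountsState, action: RehydrateAction): AccountsState {
  if (action.type !== REHYDRATE || action.key !== 'accounts') return state;
  const accounts: AccountsState['accounts'] = {};
  for (const [id, account] of Object.entries(state.accounts)) {
    accounts[id] = account.status === 'closed' ? account : { ...account, status: 'closed' };
  }
  return { accounts };
}

// The helper from the review: bypasses persistReducer, passing `state` as
// the payload too. Safe because the slice reads the draft, not action.payload.
function rehydrate(state: AccountsState, key = 'accounts'): AccountsState {
  return reducer(state, { type: REHYDRATE, key, payload: state });
}
```

Each vitest case would then seed a state, call `rehydrate(...)`, and assert on the returned statuses.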
## Summary
Embedded-app tabs (Slack, Discord, WhatsApp, Telegram, ...) could open straight onto the "`<provider>` is taking longer than expected." overlay right after reopening the desktop app, even though every CEF browser is destroyed at shutdown so the timeout had no live session left to be about.
One verified root cause is a Redux-persist hydration leak.
`accountsPersistConfig` whitelists the full `accounts` blob — including `Account.status` — so a transient status (`timeout`/`loading`/`pending`/`open`/`error`) set in the previous session was replayed on the next cold boot before any new webview spawn had started. `WebviewHost` reads `status === 'timeout'` straight from Redux, so the retry overlay rendered before frame 1.

Treat any non-`closed` status as session-local and reset it to `closed` on `REHYDRATE` for the `accounts` persist key, matching the existing pattern in `notificationSlice`. The persisted account directory, ordering, `activeAccountId`, `lastActiveAccountId`, and provider login session (owned by the CEF user-data-dir, not Redux) are untouched, so users still land on their accounts list with logins intact.

## What changed
- `app/src/store/accountsSlice.ts` — new `REHYDRATE` `extraReducer` that flips every account's transient status to `closed`, clears `lastError`, and emits an `accounts:rehydrate` debug log with the reset count + previous statuses for future triage.
- `app/src/store/__tests__/accountsSlice.rehydrate.test.ts` — 10 vitest cases covering every transient status, the closed-stays-closed invariant, multi-account directories, the persisted MRU pointer, and `REHYDRATE` actions targeted at sibling persist keys.

## Acceptance criteria — coverage
- The runtime retry path (`retryWebviewAccountLoad`) is unchanged. If the live load times out the user still gets the retry overlay.
- The `accounts:rehydrate` debug namespace logs the reset count + previous statuses so future Sentry/log captures can confirm the hydration path was taken.

## Scope caveat — not closing the issue
This PR addresses one verified root cause that is clearly wrong on its own (transient session state has no business surviving a process restart). I did not reproduce the original Slack screenshot end-to-end — that needs a real Slack OAuth + dirty-close + cold-reopen cycle with a slow network. Other plausible paths to the same overlay are not touched here:
- `IDLE_BUDGET` (8s).

Keeping #1379 open until the reporter confirms the repro is gone in a release build; if the overlay still appears, the remaining causes above are the next follow-up. Linking with `Refs` instead of `Closes` on purpose.

Refs #1379