fix(auth): collapse fallback duplicates, stabilize dedupe remap, and harden auth tests (#38)
📝 Walkthrough

Adds refresh-token collision detection and account-merge logic into persistence and normalization, expands identity indexes with a global refresh-token map, switches identity resolution to multi-key matching, updates UI account-title formatting, adjusts tests for collapsed/deduplicated behavior, and exposes a test-only storage path setter.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Account Manager
    participant Auth as Authenticator
    participant Resolver as Identity Resolver
    participant Dedupe as Deduplication Engine
    participant Storage as Storage Layer
    Client->>Auth: submit accounts to persist
    Auth->>Resolver: build identity indexes (org-scoped, byRefreshTokenGlobal)
    Resolver->>Dedupe: provide candidate account indices & identity keys
    Dedupe->>Dedupe: detect refresh-token collisions
    Dedupe->>Dedupe: pickNewestAccountIndex (lastUsed/addedAt) & mergeAccountRecords
    Dedupe->>Storage: return deduplicated/merged account set
    Storage->>Storage: normalizeAccountStorage (final pruning + index cleanup)
    Storage->>Client: persist normalized accounts
```
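The `pickNewestAccountIndex (lastUsed/addedAt)` step in the diagram can be sketched as a standalone rule. This is a minimal illustration only; the `AccountLike` shape and `pickNewest` name are assumptions for the sketch, not the plugin's actual implementation:

```typescript
// Illustrative "newest account wins" rule: prefer the higher lastUsed
// timestamp, then fall back to the higher addedAt on a tie.
// AccountLike is a hypothetical shape for this sketch only.
type AccountLike = { lastUsed?: number; addedAt?: number };

function pickNewest(a: AccountLike, b: AccountLike): AccountLike {
  const aUsed = a.lastUsed ?? 0;
  const bUsed = b.lastUsed ?? 0;
  if (aUsed !== bUsed) return aUsed > bUsed ? a : b;
  // Tie on lastUsed: the record added later wins.
  return (b.addedAt ?? 0) >= (a.addedAt ?? 0) ? b : a;
}
```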
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
lib/storage.ts (1)
93-100: ⚠️ Potential issue | 🟠 Major: Preserve org-scoped metadata when merging refresh-token collisions.
When a fallback entry is newer, `mergeAccountRecords` can overwrite the org-scoped `accountLabel`/`accountIdSource`, which are used for identity and UI. That can regress the "org-preferred" intent. Preserve these fields from the target (org) record, similar to the index.ts merge logic.

🛠️ Suggested fix
```diff
 type AccountLike = {
   organizationId?: string;
   accountId?: string;
+  accountIdSource?: AccountMetadataV3["accountIdSource"];
+  accountLabel?: string;
   email?: string;
   refreshToken: string;
   addedAt?: number;
   lastUsed?: number;
 };

 function mergeAccountRecords<T extends AccountLike>(target: T, source: T): T {
   const newest = selectNewestAccount(target, source);
   const older = newest === target ? source : target;
   return {
     ...older,
     ...newest,
     organizationId: target.organizationId ?? source.organizationId,
     accountId: target.accountId ?? source.accountId,
+    accountIdSource: target.accountIdSource ?? source.accountIdSource,
+    accountLabel: target.accountLabel ?? source.accountLabel,
     email: target.email ?? source.email,
   };
 }
```

Also applies to: 292-302
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/storage.ts` around lines 93 - 100, mergeAccountRecords currently allows a newer fallback entry to overwrite org-scoped metadata (like accountLabel and accountIdSource) from the target org record; update mergeAccountRecords to preserve these org-scoped fields from the existing/target record when they are present (i.e., only set accountLabel and accountIdSource from the incoming/source record if the target's value is absent), mirroring the merge behavior used in index.ts; apply the same preservation logic to the related merge code block around the other occurrence (the block referenced at 292-302) and reference the AccountLike type and field names (accountLabel, accountIdSource, accountId) when making the change.

index.ts (1)
654-742: ⚠️ Potential issue | 🟠 Major: Preserve the active selection after collision pruning.
Pruning can delete the currently active fallback entry, and the current clamping logic can then point at an unrelated account. Consider remapping by identity key (org → accountId → refreshToken) before clamping so the active account stays stable.
🛠️ Suggested fix (map active index by identity before clamping)
```diff
-const activeIndex = replaceAll
-  ? 0
-  : typeof stored?.activeIndex === "number" && Number.isFinite(stored.activeIndex)
-    ? stored.activeIndex
-    : 0;
+const resolveIdentityKey = (
+  account: { organizationId?: string; accountId?: string; refreshToken?: string } | undefined,
+): string | undefined => {
+  const org = account?.organizationId?.trim();
+  if (org) return `org:${org}`;
+  const id = account?.accountId?.trim();
+  if (id) return `account:${id}`;
+  const refresh = account?.refreshToken?.trim();
+  return refresh ? `refresh:${refresh}` : undefined;
+};
+
+const storedActive =
+  !replaceAll && stored?.accounts?.length
+    ? stored.accounts[
+        typeof stored.activeIndex === "number" && Number.isFinite(stored.activeIndex)
+          ? stored.activeIndex
+          : 0
+      ]
+    : undefined;
+const activeKey = resolveIdentityKey(storedActive);
+const mappedActiveIndex = activeKey
+  ? accounts.findIndex((account) => resolveIdentityKey(account) === activeKey)
+  : -1;
+
+const activeIndex = replaceAll
+  ? 0
+  : mappedActiveIndex >= 0
+    ? mappedActiveIndex
+    : typeof stored?.activeIndex === "number" && Number.isFinite(stored.activeIndex)
+      ? stored.activeIndex
+      : 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.ts` around lines 654 - 742, Prune can remove entries before computing clampedActiveIndex causing stored.activeIndex (and stored.activeIndexByFamily) to point to a different account; before calling pruneRefreshTokenCollisions capture a stable identity map (e.g. map stored.activeIndex and each stored.activeIndexByFamily[family] to an identity key built from account.organizationId, account.accountId, account.refreshToken), run pruneRefreshTokenCollisions which mutates accounts, then remap those saved identity keys back to new indices in the updated accounts array (falling back to 0/clamped bounds) so clampedActiveIndex and activeIndexByFamily are computed from the remapped indices instead of the pre-prune numeric indices; reference pruneRefreshTokenCollisions, accounts, stored.activeIndex, stored.activeIndexByFamily, MODEL_FAMILIES, clampedActiveIndex, and activeIndexByFamily when implementing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/storage.test.ts`:
- Around line 1471-1504: In the test "normalizes duplicate org/token variants
sharing refresh token before writing" replace the real-looking email
"realuser@gmail.com" used in the storage.accounts entries with a non-real
placeholder (e.g., "user@example.com" or "user@company.test") so both account
objects in the test data passed to saveAccounts() and later asserted from
loadAccounts() use the placeholder; update all occurrences in the storage
variable to avoid real email addresses.
---
Outside diff comments:
In `@index.ts`:
- Around line 654-742: Prune can remove entries before computing
clampedActiveIndex causing stored.activeIndex (and stored.activeIndexByFamily)
to point to a different account; before calling pruneRefreshTokenCollisions
capture a stable identity map (e.g. map stored.activeIndex and each
stored.activeIndexByFamily[family] to an identity key built from
account.organizationId, account.accountId, account.refreshToken), run
pruneRefreshTokenCollisions which mutates accounts, then remap those saved
identity keys back to new indices in the updated accounts array (falling back to
0/clamped bounds) so clampedActiveIndex and activeIndexByFamily are computed
from the remapped indices instead of the pre-prune numeric indices; reference
pruneRefreshTokenCollisions, accounts, stored.activeIndex,
stored.activeIndexByFamily, MODEL_FAMILIES, clampedActiveIndex, and
activeIndexByFamily when implementing.
In `@lib/storage.ts`:
- Around line 93-100: mergeAccountRecords currently allows a newer fallback
entry to overwrite org-scoped metadata (like accountLabel and accountIdSource)
from the target org record; update mergeAccountRecords to preserve these
org-scoped fields from the existing/target record when they are present (i.e.,
only set accountLabel and accountIdSource from the incoming/source record if the
target’s value is absent), mirroring the merge behavior used in index.ts; apply
the same preservation logic to the related merge code block around the other
occurrence (the block referenced at 292-302) and reference the AccountLike type
and field names (accountLabel, accountIdSource, accountId) when making the
change.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (6)
`index.ts`, `lib/storage.ts`, `lib/ui/auth-menu.ts`, `test/auth-menu.test.ts`, `test/index.test.ts`, `test/storage.test.ts`
```ts
const getAccountAtStoredIndex = (rawIndex: unknown) => {
  if (typeof rawIndex !== "number" || !Number.isFinite(rawIndex)) return undefined;
  const candidate = Math.floor(rawIndex);
  if (candidate < 0 || candidate >= accounts.length) return undefined;
  return accounts[candidate];
};

const storedActiveKey = replaceAll
  ? undefined
  : resolveIdentityKey(getAccountAtStoredIndex(stored?.activeIndex));
const storedActiveKeyByFamily: Partial<Record<ModelFamily, string>> = {};
if (!replaceAll) {
  for (const family of MODEL_FAMILIES) {
    const familyKey = resolveIdentityKey(
      getAccountAtStoredIndex(stored?.activeIndexByFamily?.[family]),
    );
    if (familyKey) {
      storedActiveKeyByFamily[family] = familyKey;
    }
  }
}

pruneRefreshTokenCollisions();
```
identity key capture happens before pruning, which creates a race: if stored.activeIndex points to an account that gets removed by pruneRefreshTokenCollisions(), the captured storedActiveKey will be the identity of the removed account. when resolveIndexByIdentityKey searches the pruned array, it won't find that key and falls back to the numeric index—which now points to a different account.
move getAccountAtStoredIndex and the identity key capture logic to after pruneRefreshTokenCollisions() so it reads from the final deduplicated array.
Prompt To Fix With AI
This is a comment left during a code review.
Path: index.ts
Line: 736-758
Comment:
identity key capture happens before pruning, which creates a race: if `stored.activeIndex` points to an account that gets removed by `pruneRefreshTokenCollisions()`, the captured `storedActiveKey` will be the identity of the removed account. when `resolveIndexByIdentityKey` searches the pruned array, it won't find that key and falls back to the numeric index—which now points to a different account.
move `getAccountAtStoredIndex` and the identity key capture logic to after `pruneRefreshTokenCollisions()` so it reads from the final deduplicated array.
How can I resolve this? If you propose a fix, please make it concise.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/storage.ts (1)
308-412: ⚠️ Potential issue | 🟠 Major: Active index remap can drift after refresh-token dedupe removes fallback entries.
When a no-org fallback is removed in favor of an org-scoped account sharing the same refresh token, `normalizeAccountStorage` remaps active indices by identity key (org/accountId/refreshToken). A refresh-token key will no longer match the org-scoped survivor, so the fallback clamp can point at the wrong account if indices shift.
Suggested fix: when an active key is refresh-token-based and no exact identity match is found, fall back to matching by `refreshToken` across remaining accounts (including org-scoped). Apply the same fallback for `activeIndexByFamily`.
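A minimal sketch of that fallback, with a hypothetical `Account` shape and key format (the plugin's real types and key encoding may differ):

```typescript
// Sketch of the suggested remap: try an exact identity-key match first;
// if the saved key was refresh-token-based and no longer matches (the
// survivor may now carry an org identity), fall back to the token itself.
// Account, keyOf, and remapActiveIndex are illustrative names only.
type Account = { organizationId?: string; refreshToken: string };

function keyOf(a: Account): string {
  return a.organizationId ? `org:${a.organizationId}` : `refresh:${a.refreshToken}`;
}

function remapActiveIndex(accounts: Account[], savedKey: string): number {
  // 1) Exact identity match.
  const exact = accounts.findIndex((a) => keyOf(a) === savedKey);
  if (exact >= 0) return exact;
  // 2) Refresh-token fallback across all survivors, org-scoped included.
  if (savedKey.startsWith("refresh:")) {
    const token = savedKey.slice("refresh:".length);
    const byToken = accounts.findIndex((a) => a.refreshToken === token);
    if (byToken >= 0) return byToken;
  }
  // 3) Clamp to a safe default when the identity is gone entirely.
  return 0;
}
```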
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@index.ts`:
- Around line 725-798: The active-index remapping can miss a surviving account
when pruneRefreshTokenCollisions replaces a refresh-token identity with an
org/account identity; to fix, capture refresh-token values before calling
pruneRefreshTokenCollisions (e.g., save storedRefreshKey = storedActiveKey and
storedRefreshKeyByFamily = storedActiveKeyByFamily), then after pruning, if
resolveIndexByIdentityKey(storedActiveKey) returns undefined and
storedRefreshKey starts with "refresh:", attempt a fallback lookup by comparing
trimmed account.refreshToken values to the saved refresh token (apply same for
each family using storedRefreshKeyByFamily) and use that index as
remappedActiveIndex/family remapped index so the active account follows the
surviving identity.
- Around line 456-481: mergeAccountRecords currently only preserves identity
fields and token/timestamp values from the "newer" record and may drop
non-identity metadata present only on the source (e.g., enabled,
rateLimitResetTimes, coolingDownUntil, cooldownReason, lastSwitchReason). Update
mergeAccountRecords to explicitly preserve non-identity metadata by assigning
each metadata field with target[field] ?? source[field] (or merging structures
where appropriate) instead of letting them be lost; keep the existing
newer/older selection for refreshToken/accessToken/expiresAt and use Math.max
for addedAt/lastUsed as now, but add explicit merges for enabled,
rateLimitResetTimes, coolingDownUntil, cooldownReason, lastSwitchReason (and any
similar metadata on accounts) so source metadata is only applied when target
lacks it.
In `@test/index.test.ts`:
- Line 1602: Test uses autoMethod.authorize({ loginMode: "add", accountCount:
"0" }) which differs from other tests that use "1"; confirm intent and if
unintentional, change accountCount to "1" to exercise the same first-account-add
branch in the OpenAIOAuthPlugin persistAccountPool flow (or, if "0" is
intentional, add an explicit comment and/or an assertion that verifies the
alternate branch behavior). Locate the call to autoMethod.authorize in the test
and either replace "0" with "1" or document and assert the special-case branch
to keep the test representative.
- Around line 1558-1561: The test's assertion is wrong: organizationEntries is
filtered for account.organizationId === "organization-shared" but expects the
fallback "token-personal" (no org); update the assertion to expect the
org-scoped account chosen by the collapse/select logic (the value produced by
selectBestAccountCandidate, e.g., the org-variant accountId) instead of
"token-personal", and keep the existing check that mockStorage.accounts has
length 1 to reflect that the no-org token was pruned in favor of the org-scoped
identity; locate the failing assertions by looking for organizationEntries and
selectBestAccountCandidate in the test and replace the accountId expectation
accordingly.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
`index.ts`, `lib/storage.ts`, `test/index.test.ts`, `test/storage.test.ts`
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@index.ts`:
- Around line 456-486: The mergeAccountRecords function currently merges
rateLimitResetTimes by spreading source then target so the target always wins
even when source/newer should win; update the rateLimitResetTimes merge to
prefer the newer record (or take per-key max): compute newer and older as you
already do (newer/older variables) and replace the current spread block with
...(older.rateLimitResetTimes ?? {}), ...(newer.rateLimitResetTimes ?? {}) or,
for per-key max, iterate Object.keys(newer.rateLimitResetTimes ||
older.rateLimitResetTimes || {}) and set each key to Math.max(older[key] ?? 0,
newer[key] ?? 0) so the most recent reset times are preserved when merging
accounts.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
`index.ts`, `lib/storage.ts`, `test/index.test.ts`
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/rotation-integration.test.ts (1)
26-32: ⚠️ Potential issue | 🟠 Major: Replace potentially-real email addresses with synthetic `@example.com` values.
`jorrizarellano123456@gmail.com` and `keiyoon25@gmail.com` are specific, non-reserved addresses that do not follow the RFC 2606 convention (`@example.com`) used by the rest of this file's test data. If either address belongs to a real user, committing it to a public repository constitutes a GDPR/CCPA compliance risk. They appear structurally consistent with addresses that might have been copied from a live storage file when writing the deduplication regression test.

🛡️ Proposed fix — replace with synthetic addresses
```diff
 const DUPLICATE_EMAIL_ACCOUNTS = [
-  { email: "jorrizarellano123456@gmail.com", refresh_token: "token_old", lastUsed: 1000 },
-  { email: "jorrizarellano123456@gmail.com", refresh_token: "token_new", lastUsed: 2000 },
-  { email: "keiyoon25@gmail.com", refresh_token: "token_old_2", lastUsed: 1500 },
-  { email: "keiyoon25@gmail.com", refresh_token: "token_new_2", lastUsed: 2500 },
-  { email: "unique@gmail.com", refresh_token: "token_unique", lastUsed: 1800 },
+  { email: "duplicate.a@example.com", refresh_token: "token_old", lastUsed: 1000 },
+  { email: "duplicate.a@example.com", refresh_token: "token_new", lastUsed: 2000 },
+  { email: "duplicate.b@example.com", refresh_token: "token_old_2", lastUsed: 1500 },
+  { email: "duplicate.b@example.com", refresh_token: "token_new_2", lastUsed: 2500 },
+  { email: "unique@example.com", refresh_token: "token_unique", lastUsed: 1800 },
 ];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/rotation-integration.test.ts` around lines 26 - 32, The test data constant DUPLICATE_EMAIL_ACCOUNTS contains real-looking emails; replace the two specific addresses ("jorrizarellano123456@gmail.com" and "keiyoon25@gmail.com") with synthetic, reserved example addresses (e.g., "jorrizarellano123456@example.com" and "keiyoon25@example.com") inside the DUPLICATE_EMAIL_ACCOUNTS array in test/rotation-integration.test.ts, leaving refresh_token and lastUsed values unchanged so the deduplication test behavior remains the same.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/rotation-integration.test.ts`:
- Around line 66-68: The afterAll hook currently calls
setStoragePathDirect(null) but doesn't remove the temp JSON file written by
saveToDisk/saveToDiskDebounced; update the afterAll block to also delete/unlink
the TEST_STORAGE_PATH file (use fs.unlink or fs.promises.unlink) if it exists,
catching and ignoring ENOENT errors so test teardown never fails, and keep the
call to setStoragePathDirect(null) after the deletion; reference the
TEST_STORAGE_PATH constant and the afterAll hook that currently invokes
setStoragePathDirect.
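A minimal sketch of such a teardown helper using Node's `fs` module; the helper name is illustrative, and swallowing `ENOENT` is exactly the behavior the comment asks for:

```typescript
import * as fs from "node:fs";

// Delete a temp storage file during test teardown, ignoring ENOENT so the
// hook never fails when the file was never written in the first place.
function removeIfExists(filePath: string): void {
  try {
    fs.unlinkSync(filePath);
  } catch (error) {
    // Re-throw anything other than "file does not exist".
    if ((error as { code?: string }).code !== "ENOENT") throw error;
  }
}
```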
---
Outside diff comments:
In `@test/rotation-integration.test.ts`:
- Around line 26-32: The test data constant DUPLICATE_EMAIL_ACCOUNTS contains
real-looking emails; replace the two specific addresses
("jorrizarellano123456@gmail.com" and "keiyoon25@gmail.com") with synthetic,
reserved example addresses (e.g., "jorrizarellano123456@example.com" and
"keiyoon25@example.com") inside the DUPLICATE_EMAIL_ACCOUNTS array in
test/rotation-integration.test.ts, leaving refresh_token and lastUsed values
unchanged so the deduplication test behavior remains the same.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (1)
test/rotation-integration.test.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@index.ts`:
- Around line 436-503: mergeAccountRecords currently favors
target.enabled/target.coolingDownUntil which can re-enable or clear a cooling
source account; change the merge to use "most restrictive wins": compute enabled
as false if either target.enabled === false or source.enabled === false,
otherwise fall back to target.enabled ?? source.enabled; compute
coolingDownUntil as Math.max(target.coolingDownUntil ?? 0,
source.coolingDownUntil ?? 0) and if that max > 0 pick cooldownReason from
whichever record (target or source) had that timestamp (or prefer source if
equal); update the assignment in accounts[targetIndex] to use these computed
values instead of target.coolingDownUntil ?? source.coolingDownUntil and
target.cooldownReason ?? source.cooldownReason so disabled/cooling state is
preserved after dedupe.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
`index.ts`, `test/index.test.ts`, `test/rotation-integration.test.ts`
```ts
const asUniqueIndex = (indices: number[] | undefined): number | undefined => {
  if (!indices || indices.length !== 1) return undefined;
  const [onlyIndex] = indices;
  return typeof onlyIndex === "number" ? onlyIndex : undefined;
};

const pickNewestAccountIndex = (existingIndex: number, candidateIndex: number): number => {
  const existing = accounts[existingIndex];
  const candidate = accounts[candidateIndex];
  if (!existing) return candidateIndex;
  if (!candidate) return existingIndex;
  const existingLastUsed = existing.lastUsed ?? 0;
  const candidateLastUsed = candidate.lastUsed ?? 0;
  if (candidateLastUsed > existingLastUsed) return candidateIndex;
  if (candidateLastUsed < existingLastUsed) return existingIndex;
  const existingAddedAt = existing.addedAt ?? 0;
  const candidateAddedAt = candidate.addedAt ?? 0;
  return candidateAddedAt >= existingAddedAt ? candidateIndex : existingIndex;
};

const mergeAccountRecords = (targetIndex: number, sourceIndex: number): void => {
  const target = accounts[targetIndex];
  const source = accounts[sourceIndex];
  if (!target || !source) return;
  const targetLastUsed = target.lastUsed ?? 0;
  const sourceLastUsed = source.lastUsed ?? 0;
  const targetAddedAt = target.addedAt ?? 0;
  const sourceAddedAt = source.addedAt ?? 0;
  const sourceIsNewer =
    sourceLastUsed > targetLastUsed ||
    (sourceLastUsed === targetLastUsed && sourceAddedAt > targetAddedAt);
  const newer = sourceIsNewer ? source : target;
  const older = sourceIsNewer ? target : source;
  const mergedRateLimitResetTimes: Record<string, number> = {};
  const rateLimitResetKeys = new Set([
    ...Object.keys(older.rateLimitResetTimes ?? {}),
    ...Object.keys(newer.rateLimitResetTimes ?? {}),
  ]);
  for (const key of rateLimitResetKeys) {
    const olderRaw = older.rateLimitResetTimes?.[key];
    const newerRaw = newer.rateLimitResetTimes?.[key];
    const olderValue =
      typeof olderRaw === "number" && Number.isFinite(olderRaw) ? olderRaw : 0;
    const newerValue =
      typeof newerRaw === "number" && Number.isFinite(newerRaw) ? newerRaw : 0;
    const resolved = Math.max(olderValue, newerValue);
    if (resolved > 0) {
      mergedRateLimitResetTimes[key] = resolved;
    }
  }
  accounts[targetIndex] = {
    ...target,
    accountId: target.accountId ?? source.accountId,
    organizationId: target.organizationId ?? source.organizationId,
    accountIdSource: target.accountIdSource ?? source.accountIdSource,
    accountLabel: target.accountLabel ?? source.accountLabel,
    email: target.email ?? source.email,
    refreshToken: newer.refreshToken || older.refreshToken,
    accessToken: newer.accessToken || older.accessToken,
    expiresAt: newer.expiresAt ?? older.expiresAt,
    enabled: target.enabled ?? source.enabled,
    addedAt: Math.max(target.addedAt ?? 0, source.addedAt ?? 0),
    lastUsed: Math.max(target.lastUsed ?? 0, source.lastUsed ?? 0),
    lastSwitchReason: target.lastSwitchReason ?? source.lastSwitchReason,
    rateLimitResetTimes: mergedRateLimitResetTimes,
    coolingDownUntil: target.coolingDownUntil ?? source.coolingDownUntil,
    cooldownReason: target.cooldownReason ?? source.cooldownReason,
  };
};
```
Preserve disabled/cooldown state when merging duplicates.
`mergeAccountRecords` keeps `target.enabled`/`target.coolingDownUntil` whenever defined, so a disabled or cooling source record can be re-enabled/cleared if the newer target lacks those flags. That defeats user intent and can prematurely use a cooling account after dedupe.
💡 Suggested merge semantics (most restrictive wins)
```diff
 const mergeAccountRecords = (targetIndex: number, sourceIndex: number): void => {
   const target = accounts[targetIndex];
   const source = accounts[sourceIndex];
   if (!target || !source) return;
 @@
   for (const key of rateLimitResetKeys) {
 @@
   }
+  const mergedEnabled =
+    target.enabled === false || source.enabled === false
+      ? false
+      : target.enabled ?? source.enabled;
+  const mergedCoolingDownUntil = Math.max(
+    target.coolingDownUntil ?? 0,
+    source.coolingDownUntil ?? 0,
+  ) || undefined;
+  const mergedCooldownReason =
+    mergedCoolingDownUntil === (target.coolingDownUntil ?? 0)
+      ? target.cooldownReason ?? source.cooldownReason
+      : source.cooldownReason ?? target.cooldownReason;
   accounts[targetIndex] = {
     ...target,
 @@
-    enabled: target.enabled ?? source.enabled,
+    enabled: mergedEnabled,
     addedAt: Math.max(target.addedAt ?? 0, source.addedAt ?? 0),
     lastUsed: Math.max(target.lastUsed ?? 0, source.lastUsed ?? 0),
     lastSwitchReason: target.lastSwitchReason ?? source.lastSwitchReason,
     rateLimitResetTimes: mergedRateLimitResetTimes,
-    coolingDownUntil: target.coolingDownUntil ?? source.coolingDownUntil,
-    cooldownReason: target.cooldownReason ?? source.cooldownReason,
+    coolingDownUntil: mergedCoolingDownUntil,
+    cooldownReason: mergedCooldownReason,
   };
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@index.ts` around lines 436 - 503, mergeAccountRecords currently favors
target.enabled/target.coolingDownUntil which can re-enable or clear a cooling
source account; change the merge to use "most restrictive wins": compute enabled
as false if either target.enabled === false or source.enabled === false,
otherwise fall back to target.enabled ?? source.enabled; compute
coolingDownUntil as Math.max(target.coolingDownUntil ?? 0,
source.coolingDownUntil ?? 0) and if that max > 0 pick cooldownReason from
whichever record (target or source) had that timestamp (or prefer source if
equal); update the assignment in accounts[targetIndex] to use these computed
values instead of target.coolingDownUntil ?? source.coolingDownUntil and
target.cooldownReason ?? source.cooldownReason so disabled/cooling state is
preserved after dedupe.
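The "most restrictive wins" semantics described in the prompt can be sketched in isolation; the `Flags` shape and `mergeRestrictive` name are stand-ins for the real account record and merge function:

```typescript
// Sketch of "most restrictive wins": a false `enabled` on either side
// disables the merged record, and the later cooldown deadline survives,
// carrying the reason that belongs to that deadline.
type Flags = { enabled?: boolean; coolingDownUntil?: number; cooldownReason?: string };

function mergeRestrictive(target: Flags, source: Flags): Flags {
  const enabled =
    target.enabled === false || source.enabled === false
      ? false
      : target.enabled ?? source.enabled;
  const cooling = Math.max(target.coolingDownUntil ?? 0, source.coolingDownUntil ?? 0);
  const cooldownReason =
    cooling === 0
      ? undefined
      : cooling === (target.coolingDownUntil ?? 0)
        ? target.cooldownReason ?? source.cooldownReason
        : source.cooldownReason ?? target.cooldownReason;
  return { enabled, coolingDownUntil: cooling || undefined, cooldownReason };
}
```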
…harden auth tests (#38)

* fix(auth): collapse token fallback duplicates and clarify account labels
* fix(auth): stabilize org-preferred refresh dedupe and active index remap
* fix(auth): keep active/account metadata stable through refresh dedupe
* test(storage): isolate rotation integration writes from user auth store
* fix(auth,test): preserve latest reset windows and harden test fixtures
* fix(auth): preserve restrictive cooldown/enable state during dedupe
* chore(release): v5.3.3
Summary

- Replaced real-looking test emails with synthetic `@example.com` values

Root Cause
One OAuth login can yield multiple candidate identities (org-scoped + token fallback) for the same person/refresh token. Without consistent dedupe+merge semantics across runtime persistence and storage normalization, duplicate-looking entries survived and active indices could drift after pruning.
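A toy illustration of the collapse described here; the `Candidate` type and merge policy are simplified stand-ins for the real dedupe logic, showing only the core idea (group by refresh token, keep org identity, keep newest usage):

```typescript
// Two candidate identities from one login (org-scoped + token fallback)
// share a refresh token; collapse them into one canonical entry.
type Candidate = { organizationId?: string; refreshToken: string; lastUsed: number };

function collapseByRefreshToken(candidates: Candidate[]): Candidate[] {
  const byToken = new Map<string, Candidate>();
  for (const candidate of candidates) {
    const existing = byToken.get(candidate.refreshToken);
    if (!existing) {
      byToken.set(candidate.refreshToken, candidate);
      continue;
    }
    // Merge: prefer whichever side carries an org identity, keep newest usage.
    byToken.set(candidate.refreshToken, {
      organizationId: existing.organizationId ?? candidate.organizationId,
      refreshToken: candidate.refreshToken,
      lastUsed: Math.max(existing.lastUsed, candidate.lastUsed),
    });
  }
  return Array.from(byToken.values());
}
```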
What Changed
- `index.ts`: refresh-token collision pruning in `persistAccountPool`; merges preserve non-identity metadata (`enabled`, cooldown fields, switch reason, rate-limit maps); `rateLimitResetTimes` merge uses per-key max so latest reset windows are retained
- `lib/storage.ts`
- `lib/ui/auth-menu.ts`
- `test/index.test.ts`
- `test/rotation-integration.test.ts`: storage isolated from the real user store (`~/.opencode/openai-codex-accounts.json`); teardown ignores `ENOENT`; test emails use `@example.com`
- `npm run typecheck`
- `npm test -- test/index.test.ts test/storage.test.ts`
- `npm test -- test/rotation-integration.test.ts test/index.test.ts`

All passed on this branch.
User-Facing Outcome
- `opencode auth login` no longer keeps/creates duplicate fallback entries for the same account after dedupe paths execute
- no `account1@example.com` entries written into real user storage

note: greptile review for oc-chatgpt-multi-auth. cite files like `lib/foo.ts:123`. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
fixed oauth account dedupe so org-scoped and token-fallback variants sharing a refresh token collapse into one canonical entry. preserved org identity and account metadata during merges while taking freshest token/session state. kept active account selection stable after dedupe/pruning by remapping via multi-key identity (org, accountId, refresh). improved auth menu labels so same-email entries are distinguishable. hardened storage normalization/import remap paths to match runtime dedupe semantics. isolated integration tests from real user auth storage and removed temp artifacts after tests. replaced real-looking test emails with reserved `@example.com` values.

key changes:
- `index.ts`: added refresh-token collision pruning in `persistAccountPool`, robust active-index remap after prune using multi-key identity matching, preserved org identity fields when merging fallback into org-scoped entries, preserved non-identity metadata (enabled, cooldown fields, switch reason, rate-limit maps), updated `rateLimitResetTimes` merge to per-key max so latest reset windows are retained
- `lib/storage.ts`: aligned normalization/import remap behavior with multi-key identity matching, ensured dedupe/remap handles refresh-token fallback → org-survivor transitions correctly, added normalization call on every write path
- `lib/ui/auth-menu.ts`: clearer account display identity for multi-account selection (workspace/label/id suffix)
- `test/rotation-integration.test.ts`: forced suite storage path to isolated temp file (prevents pollution of real `~/.opencode/openai-codex-accounts.json`), added teardown cleanup to unlink temp file, replaced non-reserved email literals with synthetic `@example.com`
- `test/index.test.ts`, `test/storage.test.ts`: added regressions for active-index remap, latest rate-limit window merge, org/token collapse, and cooldown preservation

windows filesystem concurrency: no new file operations added. existing `saveAccountsUnlocked` in `lib/storage.ts` already uses the atomic write-via-rename pattern (temp file → rename) to defend against windows antivirus locks. normalization now runs on every write (line 704 in `lib/storage.ts`), which adds deterministic dedupe enforcement but doesn't change concurrency semantics.
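The write-via-rename pattern referenced here can be sketched as follows; `writeFileAtomic` is an illustrative helper, not the plugin's `saveAccountsUnlocked`, and the `0o600` mode mirrors the permission noted in this review:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Write to a sibling temp file, then rename over the destination so a reader
// never observes a half-written file (atomic on POSIX; best-effort on Windows,
// where antivirus locks can still delay the rename).
function writeFileAtomic(dest: string, contents: string): void {
  const tmp = path.join(
    path.dirname(dest),
    `.${path.basename(dest)}.tmp-${process.pid}`,
  );
  fs.writeFileSync(tmp, contents, { mode: 0o600 });
  fs.renameSync(tmp, dest);
}
```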
mergeAccountRecords(index.ts:456-534) preserves existing token redaction boundaries - tokens stay in-memory during dedupe and are written through the existingsaveAccountsUnlockedpath which enforces 0o600 permissions.regression tests:
test/index.test.tscovers org/token collapse (collapses org-scoped primary and no-org token variant...), active index remap after prune (remaps active indices to merged org account...), rate-limit window merge (keeps latest rate-limit reset windows...), cooldown/enabled preservation (keeps restrictive enabled/cooldown metadata...)test/storage.test.tsaddsnormalizes duplicate org/token variants sharing a refresh token before writingConfidence Score: 4/5
- `index.ts` lines 799-841 (identity key capture and remap after prune): this is the fix for the race condition and uses multi-key fallback logic that should be validated under edge cases (e.g., pruning removes all variants of a previously-active account)
Last reviewed commit: 178ae6e
Context used:
- dashboard - What: Every code change must explain how it defends against Windows filesystem concurrency bugs and ... (source)