
fix: recreate live account sync on config changes #380

Closed
ndycode wants to merge 1 commit into main from fix/live-account-sync-stale-state

Conversation

@ndycode
Owner

@ndycode ndycode commented Apr 6, 2026

Problem

Live account sync kept reusing the same controller once created, even when its effective debounce/poll configuration changed.

That left stale runtime state in place and kept the watcher running with outdated timing parameters.

Fix

Track a live-sync config key and recreate the controller when its effective configuration changes.

This mirrors the existing refresh-guardian pattern and keeps runtime watcher state aligned with the current config.
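To make the mechanism concrete, here is a minimal sketch of the pattern, with hypothetical names (ensureLiveSync, LiveSyncController, createController); the real helpers live in lib/runtime/live-sync.ts and lib/runtime/runtime-services.ts, and the key format follows the debounce:poll shape shown in the sequence diagram below:

```typescript
// Illustrative sketch only; ensureLiveSync, LiveSyncController, and
// createController are hypothetical names, not this PR's actual identifiers.
interface SyncConfig {
    debounceMs: number;
    pollMs: number;
}

interface LiveSyncController {
    stop(): void;
    syncToPath(path: string): Promise<void>;
}

let controller: LiveSyncController | null = null;
let currentConfigKey: string | null = null;

async function ensureLiveSync(
    config: SyncConfig,
    targetPath: string,
    createController: (config: SyncConfig) => LiveSyncController,
): Promise<LiveSyncController> {
    const nextKey = `${config.debounceMs}:${config.pollMs}`;
    // Key mismatch means the effective config changed: tear down the stale
    // controller instead of letting it keep watching with old timings.
    if (controller && currentConfigKey !== nextKey) {
        controller.stop();
        controller = null;
    }
    if (!controller) {
        controller = createController(config);
        currentConfigKey = nextKey;
        await controller.syncToPath(targetPath);
    }
    return controller;
}
```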

Changes

  • lib/runtime/live-sync.ts
    • added config-key tracking to the runtime live-sync helper
    • recreates the controller when the config key changes
  • lib/runtime/runtime-services.ts
    • extended the lower-level helper to preserve and return a live-sync config key
  • lib/runtime/live-sync-entry.ts
    • forwards debounce/poll-derived config key data into the helper
  • index.ts
    • stores liveAccountSyncConfigKey alongside the sync instance/path (caller-side threading sketched after this list)
  • tests updated in:
    • test/runtime-live-sync.test.ts
    • test/runtime-services.test.ts
    • test/live-sync-entry.test.ts
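On the caller side, a hedged sketch of how index.ts might store the returned key and feed it back in on the next invocation; liveAccountSync, liveAccountSyncPath, and liveAccountSyncConfigKey are names from this PR, while the function and parameter names are illustrative:

```typescript
// Hypothetical result shape; liveAccountSyncConfigKey and liveAccountSyncPath
// appear in this PR, the other names are illustrative.
interface LiveSyncEntryResult {
    liveAccountSync: { stop(): void; syncToPath(path: string): Promise<void> };
    liveAccountSyncPath: string;
    liveAccountSyncConfigKey: string;
}

// Module-level state, mirroring how index.ts stores the key alongside
// the sync instance.
let liveAccountSyncConfigKey: string | null = null;

async function refreshLiveSync(
    ensureEntry: (currentKey: string | null) => Promise<LiveSyncEntryResult>,
) {
    // Pass the last-seen key in; the entry layer recreates the sync on mismatch.
    const result = await ensureEntry(liveAccountSyncConfigKey);
    liveAccountSyncConfigKey = result.liveAccountSyncConfigKey;
    return result.liveAccountSync;
}
```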

Validation

npx vitest run test/live-sync-entry.test.ts test/runtime-services.test.ts test/runtime-live-sync.test.ts test/live-account-sync.test.ts test/live-account-sync-edge.test.ts
Test Files  5 passed (5)
Tests      26 passed (26)

npx vitest run
Test Files  222 passed (222)
Tests      3315 passed (3315)

note: greptile review for oc-chatgpt-multi-auth. cite files like lib/foo.ts:123. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary

this pr adds config-key tracking to the live account sync system so the watcher is torn down and recreated whenever debounce/poll settings change, mirroring the existing refresh-guardian pattern. changes are clean and the overall approach is sound.

  • lib/runtime/live-sync.ts: new generic helper adds currentConfigKey/configKey params, compares keys on entry, stops stale sync, and tracks cleanupRegistered to avoid duplicate handlers
  • lib/runtime/runtime-services.ts: new ensureLiveAccountSyncState delegates to the same pattern but does not track cleanupRegistered; registerCleanup is called on every sync creation, including recreations triggered by config changes
  • lib/runtime/live-sync-entry.ts: cleanly computes configKey from debounce+poll and forwards currentConfigKey down; return value now includes liveAccountSyncConfigKey
  • index.ts: adds the module-level liveAccountSyncConfigKey state variable and threads it through ensureLiveAccountSyncEntry correctly
  • tests cover the new recreation path and EBUSY retries, but live-sync-entry.test.ts doesn't assert on the new liveAccountSyncConfigKey return value and the mock omits it

Confidence Score: 3/5

mergeable with caution — the cleanup accumulation in runtime-services.ts is a real correctness divergence from the established pattern

the core logic in live-sync.ts and the wiring in index.ts/live-sync-entry.ts are correct and well-tested. the missing cleanupRegistered guard in runtime-services.ts is a meaningful gap that will silently accumulate handlers across config changes and is inconsistent with the sibling implementation. the missing test coverage in live-sync-entry.test.ts means the new config-key threading is unverified at that layer. neither issue is blocking at normal usage volumes, but both are worth fixing before merge.

files to fix before merge: lib/runtime/runtime-services.ts (missing cleanupRegistered guard) and test/live-sync-entry.test.ts (missing mock field and assertion)

Important Files Changed

  • lib/runtime/runtime-services.ts: adds ensureLiveAccountSyncState with config-key recreation logic, but missing cleanupRegistered guard — diverges from live-sync.ts pattern and accumulates cleanup handlers across recreations
  • lib/runtime/live-sync.ts: well-structured generic helper; correctly tracks currentConfigKey, stops stale sync on key change, guards cleanupRegistered to prevent handler accumulation
  • lib/runtime/live-sync-entry.ts: cleanly computes configKey from debounce/poll, forwards currentConfigKey and new configKey into state helper, returns liveAccountSyncConfigKey to caller
  • index.ts: adds liveAccountSyncConfigKey module-level state and threads it through ensureLiveAccountSyncEntry correctly
  • test/runtime-live-sync.test.ts: comprehensive; covers creation, config-key recreation, cleanup dedup, EBUSY/EPERM retries, concurrency (commit-before-await), toggle cycle
  • test/runtime-services.test.ts: covers key scenarios but missing assertion on registerCleanup call count during config-key recreation
  • test/live-sync-entry.test.ts: single test; mock omits liveAccountSyncConfigKey from return and no assertion on it — zero coverage of the new config-key threading behaviour

Sequence Diagram

```mermaid
sequenceDiagram
    participant I as index.ts
    participant E as live-sync-entry.ts
    participant S as runtime-services.ts
    participant Sync as LiveAccountSync

    note over I,Sync: initial call — configKey=25:250
    I->>E: ensureLiveAccountSyncEntry(currentConfigKey=null)
    E->>E: compute configKey="25:250"
    E->>S: ensureLiveAccountSyncState(currentConfigKey=null, configKey="25:250")
    S->>Sync: createSync() → sync1
    S->>S: registerCleanup(sync1.stop) ✓
    S->>Sync: sync1.syncToPath(targetPath)
    S-->>E: {liveAccountSync: sync1, configKey: "25:250"}
    E-->>I: {liveAccountSync: sync1, liveAccountSyncConfigKey: "25:250"}
    I->>I: liveAccountSyncConfigKey = "25:250"

    note over I,Sync: config change — debounce/poll updated
    I->>E: ensureLiveAccountSyncEntry(currentConfigKey="25:250")
    E->>E: compute configKey="50:500"
    E->>S: ensureLiveAccountSyncState(currentConfigKey="25:250", configKey="50:500")
    S->>Sync: sync1.stop() [key mismatch]
    S->>Sync: createSync() → sync2
    S->>S: registerCleanup(sync2.stop) ⚠ no guard — extra handler registered
    S->>Sync: sync2.syncToPath(targetPath)
    S-->>E: {liveAccountSync: sync2, configKey: "50:500"}
    E-->>I: {liveAccountSync: sync2, liveAccountSyncConfigKey: "50:500"}
    I->>I: liveAccountSyncConfigKey = "50:500"
```

Review Comments (3)

  1. lib/runtime/runtime-services.ts, line 66-72

    P1 cleanup accumulates on every sync recreation

    no cleanupRegistered guard here. every time the config key changes and a new sync is created, another closure is pushed into the cleanup registry. the sibling live-sync.ts (lines 95-101) was written with an explicit cleanupRegistered flag + getCurrentSync() indirection precisely to prevent this.

    each extra closure captures its own local liveAccountSync binding, so it won't double-stop the current sync — but it will call .stop() on the already-stopped previous sync, which is only safe if stop() is idempotent. more importantly, after N config changes the registry holds N cleanup handlers. on windows, antivirus-induced re-entrant fs locks during a flurry of config reloads could trigger multiple concurrent stop/recreate cycles, each registering another handler.
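    for illustration, a sketch of an idempotent stop() with a hypothetical watcher field; the pr does not say whether the real stop() already guards like this:

    ```typescript
    // hypothetical guard: repeated stop() calls become harmless no-ops
    class LiveAccountSyncSketch {
        private stopped = false;
        private watcher: { close(): void } | null = null;

        stop(): void {
            if (this.stopped) return;
            this.stopped = true;
            this.watcher?.close();
            this.watcher = null;
        }
    }
    ```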

    mirror the pattern from live-sync.ts:

    ```typescript
    if (!liveAccountSync) {
        liveAccountSync = params.createSync(params.authFallback);
        liveAccountSyncConfigKey = nextConfigKey;
        if (!params.currentCleanupRegistered) {
            params.registerCleanup(() => {
                liveAccountSync?.stop();
            });
        }
    }
    ```

    or accept/return a cleanupRegistered boolean in the params/return type so callers can track and pass it back on the next invocation — exactly how live-sync.ts handles it.
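    a sketch of that second option, with hypothetical param/return names; the cleanup handler resolves the current sync through getCurrentSync() instead of capturing one instance:

    ```typescript
    // hypothetical shape: the flag round-trips through params/return, and the
    // cleanup handler reads the *current* sync via indirection.
    interface EnsureSyncParams {
        currentCleanupRegistered: boolean;
        registerCleanup: (fn: () => void) => void;
        getCurrentSync: () => { stop(): void } | null;
    }

    function ensureSyncCleanup(params: EnsureSyncParams): { cleanupRegistered: boolean } {
        if (!params.currentCleanupRegistered) {
            params.registerCleanup(() => {
                params.getCurrentSync()?.stop();
            });
        }
        return { cleanupRegistered: true };
    }
    ```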

  2. test/live-sync-entry.test.ts, line 6-40

**mock omits `liveAccountSyncConfigKey`, no assertion on it**

the `ensureLiveAccountSyncState` mock returns:

```typescript
vi.fn(async () => ({
    liveAccountSync: { stop: vi.fn(), syncToPath: vi.fn() },
    liveAccountSyncPath: "/tmp/accounts.json",
    // liveAccountSyncConfigKey missing
}))
```

the only assertion at line 40 checks `liveAccountSyncPath`. `result.liveAccountSyncConfigKey` will be `undefined` in this test, and the test has no assertion on it at all — so there is zero signal that the config key is forwarded from `ensureLiveAccountSyncEntry` through to the caller. this is the core observable behaviour added by this pr for the entry-point layer.

add `liveAccountSyncConfigKey: "25:250"` to the mock return and assert:

```suggestion
		const ensureLiveAccountSyncState = vi.fn(async () => ({
			liveAccountSync: { stop: vi.fn(), syncToPath: vi.fn() },
			liveAccountSyncPath: "/tmp/accounts.json",
			liveAccountSyncConfigKey: "25:250",
		}));
```

then add after line 40:
```typescript
expect(result.liveAccountSyncConfigKey).toBe("25:250");
```


  3. test/runtime-services.test.ts, line 92-115

**missing coverage: `registerCleanup` call count on recreation**

the "recreates live sync when config key changes" test correctly checks that the old sync is stopped and a new one is created, but it doesn't assert how many times `registerCleanup` is called. given the missing guard in `runtime-services.ts`, this test would pass even if cleanup handlers accumulate — and it would continue to pass after a fix. adding `expect(registerCleanup).toHaveBeenCalledTimes(1)` would lock in the expected behaviour and catch regressions.


Reviews (1): Last reviewed commit: "fix: recreate live account sync on confi..."

Live account sync reused the same controller once created, even when its
effective debounce/poll configuration changed. That leaves stale runtime
state in place and keeps watching with outdated timing parameters.

Track a live-sync config key and recreate the controller when the
configuration changes, matching the refresh-guardian pattern.
@coderabbitai
Contributor

coderabbitai Bot commented Apr 6, 2026

Warning

Rate limit exceeded

@ndycode has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 20 minutes and 2 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 20 minutes and 2 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6048e946-1923-4769-8d54-e77d599c45e1

📥 Commits

Reviewing files that changed from the base of the PR and between 478f44c and 9e35ef5.

📒 Files selected for processing (7)
  • index.ts
  • lib/runtime/live-sync-entry.ts
  • lib/runtime/live-sync.ts
  • lib/runtime/runtime-services.ts
  • test/live-sync-entry.test.ts
  • test/runtime-live-sync.test.ts
  • test/runtime-services.test.ts


@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits. Credits must be used to enable repository-wide code reviews.

@ndycode
Owner Author

ndycode commented Apr 6, 2026

Superseded by #387, which rebuilds the full open PR stack onto one reviewed integration branch.

@ndycode
Owner Author

ndycode commented Apr 6, 2026

Closing in favor of #387.

@ndycode ndycode closed this Apr 6, 2026
@ndycode ndycode deleted the fix/live-account-sync-stale-state branch April 12, 2026 06:00
