test(settings): split persistence regression coverage #64
Conversation
📝 Walkthrough
New helper module introduced. Changes are test-only refactoring with straightforward cleanup logic and test expansion; no new features or multi-component interactions introduced.
Sequence Diagram(s): none generated.
Flagged Concerns
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/dashboard-settings.test.ts`:
- Around line 409-424: The test mutates the exported defaults object
(defaults.actionAutoReturnMs = undefined etc.), which can leak between tests;
instead, before importing normalizeDashboardDisplaySettings, call
vi.resetModules() and use vi.doMock to return a mocked module that spreads the
original export but overrides DEFAULT_DASHBOARD_DISPLAY_SETTINGS (or the
exported defaults object you changed) with the specific fields set to undefined;
then import normalizeDashboardDisplaySettings from the module, run assertions,
and finally vi.doUnmock to restore the real module. Ensure you reference the
exported symbol DEFAULT_DASHBOARD_DISPLAY_SETTINGS (or the actual exported name
used in the module) and the function normalizeDashboardDisplaySettings when
replacing the export via vi.doMock.
In `@test/unified-settings.test.ts`:
- Around line 270-324: The test currently verifies preservation of an unrelated
top-level section only with sequential writes; add a concurrent-write variant to
ensure the serialization/merge logic doesn't clobber other sections under race
conditions. Create a new test that writes an initial file containing
experimentalDraft, then concurrently calls saveUnifiedPluginConfig(...) and
saveUnifiedDashboardSettings(...) (using Promise.all) and finally reads the file
(getUnifiedSettingsPath/readFile) to assert experimentalDraft is unchanged and
both pluginConfig and dashboardDisplaySettings exist; this will exercise the
serialization queue used in the unified-settings read-merge-write functions
(saveUnifiedPluginConfig, saveUnifiedDashboardSettings).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d603ed2d-757d-4e9f-ab02-b321e7c9d442
📒 Files selected for processing (3)
- test/dashboard-settings.test.ts
- test/helpers/remove-with-retry.ts
- test/unified-settings.test.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: cubic · AI code reviewer
- GitHub Check: Greptile Review
🧰 Additional context used
📓 Path-based instructions (1)
test/**
⚙️ CodeRabbit configuration file
tests must stay deterministic and use vitest. demand regression cases that reproduce concurrency bugs, token refresh races, and windows filesystem behavior. reject changes that mock real secrets or skip assertions.
Files:
- test/unified-settings.test.ts
- test/helpers/remove-with-retry.ts
- test/dashboard-settings.test.ts
🔇 Additional comments (6)
test/helpers/remove-with-retry.ts (1)
3-18: Solid retry helper for Windows filesystem transients. The implementation correctly handles the common Windows lock errors (EBUSY, EPERM) that cause flaky test teardowns. The incremental backoff (25ms × attempt) is reasonable for test cleanup scenarios.
One minor observation: if you ever need to verify this helper's behavior in isolation, consider adding a dedicated test that mocks fs.rm to confirm retry count and backoff timing. Not blocking, but it would strengthen confidence in the retry logic itself.
test/unified-settings.test.ts (1)
24-24: Good switch to the retry helper for teardown. Using removeWithRetry here aligns with the Windows-safe cleanup pattern and matches test/dashboard-settings.test.ts:24. This should reduce flaky test failures on Windows CI where antivirus or indexing services hold file handles.
test/dashboard-settings.test.ts (4)
24-24: Consistent cleanup pattern with the unified-settings tests. Matches the teardown at test/unified-settings.test.ts:24. Good for consistency across the test suite.
112-168: Good regression coverage for custom sections. Testing with docsParityAnchor (a section outside the known schema) verifies the read-merge-write doesn't strip unknown keys. This is exactly the kind of forward-compatibility test you want.
233-258: Solid regression test for Windows EBUSY during legacy migration. Mocking a single EBUSY failure then falling through to real reads verifies the retry logic recovers. This is exactly the Windows edge case the coding guidelines call for.
276-276: Good: the retry count assertion keeps the test deterministic. Asserting toHaveBeenCalledTimes(4) ties this test to the implementation's retry count (3 retries = 4 total attempts). If the retry count changes in production code, this test will catch the drift.
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="test/dashboard-settings.test.ts">
<violation number="1" location="test/dashboard-settings.test.ts:393">
P2: This mock-based setup no longer verifies the undefined-default fallback path; it only validates normal defaults, so the regression test can pass while fallback literals are broken.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Pushed a follow-up: I removed the misleading mock-based dashboard fallback test after re-checking the implementation; the mock could not affect … Validated with …
```ts
const loaded = await loadDashboardDisplaySettings();
expect(loaded.showPerAccountRows).toBe(false);
expect(loaded.menuShowQuotaSummary).toBe(false);
expect(readSpy).toHaveBeenCalledTimes(2);
```
The assertion expect(readSpy).toHaveBeenCalledTimes(2) is correct for the happy path, but the spy is never cleaned up if the test fails before readSpy.mockRestore() at line 261, which means the spy continues affecting subsequent tests. The test logic should be wrapped in a try-finally block:

```ts
try {
  const loaded = await loadDashboardDisplaySettings();
  expect(loaded.showPerAccountRows).toBe(false);
  expect(loaded.menuShowQuotaSummary).toBe(false);
  expect(readSpy).toHaveBeenCalledTimes(2);
} finally {
  readSpy.mockRestore();
}
```

This pattern is already used in other tests (lines 278-284) but was missed here.
Spotted by Graphite
Follow-up pushed on top of the earlier persistence-test cleanup: this adds the missing … The PR body was also refreshed to a concise human summary.
Summary
Why
This branch keeps persistence-specific regression coverage separate from the much larger CLI settings harness so reviewers can focus on the file/write-queue behavior by itself.
Verification
- npm exec vitest run test/dashboard-settings.test.ts test/unified-settings.test.ts
- npm run typecheck
- npm run build

Notes
#55