Conversation
When we implemented config diffing, we started doing the diffing client-side. I think this is still generally the correct behavior. However, on a dry run we should also run and display the server-side dry-run, since that actually exercises the validation that would run before a change.
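A minimal sketch of the idea, under assumptions: `buildConfigRequest` and the `/config` path are hypothetical names, not the real CLI's API, but the `dry_run=true` query parameter matches what the tests in this PR assert.

```typescript
// Sketch only: buildConfigRequest and "/config" are assumed names.
// The point: a dry run sends the same request, flagged so the server
// validates the change without applying it.
function buildConfigRequest(base: string, dryRun: boolean): string {
  const url = new URL("/config", base);
  if (dryRun) {
    // Server performs validation but skips the write.
    url.searchParams.set("dry_run", "true");
  }
  return url.toString();
}

console.log(buildConfigRequest("https://api.example.com", true));
// → https://api.example.com/config?dry_run=true
```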
🦋 Changeset detected — latest commit: cec9f36. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
📝 Walkthrough: This pull request enables true dry-run behavior for the config commands. Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes.
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning).
```ts
await expect(
  runConfigPatch({ json: '{"session":{"lifetime":3600}}', dryRun: true }),
).rejects.toThrow("API error");
```
The test name says it surfaces server validation errors, but we only assert the error class. The server's message (`invalid organization_settings`) never gets verified, so a regression in error formatting would slip through. Tightening the assertion catches that.
```diff
 await expect(
   runConfigPatch({ json: '{"session":{"lifetime":3600}}', dryRun: true }),
-).rejects.toThrow("API error");
+).rejects.toThrow("invalid organization_settings");
```
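Outside Jest, the difference between the loose and tight assertion can be sketched like this — `formatServerError` is a hypothetical stand-in, not the real CLI code:

```typescript
// Hypothetical stand-in for how the CLI might wrap a server validation failure.
function formatServerError(detail: string): Error {
  return new Error(`API error: ${detail}`);
}

const err = formatServerError("invalid organization_settings");

// Loose check: passes for *any* Error, regardless of message.
console.log(err instanceof Error); // true

// Tight check: fails if the server's detail is dropped from the message —
// exactly the formatting regression the suggested assertion catches.
console.log(err.message.includes("invalid organization_settings")); // true
```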
```ts
await runConfigPut({ json: '{"session":{"lifetime":3600}}', dryRun: true });
expect(putUrl).toContain("dry_run=true");
expect(captured.err).toContain("[dry-run] Would PUT");
});
```
Small one: the PATCH dry-run test above asserts the projected result on stdout and the `Validation passed` line. The PUT test stops earlier, so a regression in either would slip through for PUT only. Worth mirroring.
```diff
 await runConfigPut({ json: '{"session":{"lifetime":3600}}', dryRun: true });
 expect(putUrl).toContain("dry_run=true");
 expect(captured.err).toContain("[dry-run] Would PUT");
+expect(captured.out).toContain(JSON.stringify(mockResponse, null, 2));
+expect(captured.err).toContain("[dry-run] Validation passed");
 });
```
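For illustration, the output shape those mirrored assertions exercise might look like this sketch — `dryRunReport` and its field layout are assumptions, not the real implementation:

```typescript
// Assumed shape: stderr carries the dry-run banner lines, stdout carries
// the projected config as pretty-printed JSON.
function dryRunReport(method: "PATCH" | "PUT", projected: unknown) {
  return {
    err: [`[dry-run] Would ${method}`, "[dry-run] Validation passed"].join("\n"),
    out: JSON.stringify(projected, null, 2),
  };
}

const report = dryRunReport("PUT", { session: { lifetime: 3600 } });
console.log(report.err.includes("[dry-run] Would PUT")); // true
console.log(report.err.includes("[dry-run] Validation passed")); // true
console.log(report.out.includes('"lifetime": 3600')); // true
```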
Thanks, adopted (and fixed the "Would" in the error assertion from Wyatt's comment)