test: stabilize flaky cli encryption and queue tests #1861
Conversation
⚠️ Caution: Review failed. Pull request was closed or merged during review.

📝 Walkthrough

Removed concurrent execution from two RBAC tests; changed key generation in test utils to spawn a subprocess and serialized SDK operations with explicit cwd handling; added per-test queue cleanup and a retrying /queue_consumer/sync helper used by queue load tests.
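The retrying /queue_consumer/sync helper named in the walkthrough could look roughly like the sketch below. This is an illustration only: the endpoint path comes from the walkthrough, but `syncQueueConsumer`, `baseUrl`, the retry count, and the backoff schedule are assumed names and values, not the PR's actual implementation.

```typescript
// Hypothetical sketch of a retrying /queue_consumer/sync caller.
// maxRetries and baseDelayMs are illustrative defaults.
async function syncQueueConsumer(
  baseUrl: string,
  maxRetries = 5,
  baseDelayMs = 200,
): Promise<Response> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const res = await fetch(`${baseUrl}/queue_consumer/sync`, { method: 'POST' })
      if (res.ok)
        return res
      lastError = new Error(`sync failed with status ${res.status}`)
    }
    catch (err) {
      lastError = err
    }
    // Back off a little longer on each failed attempt before retrying.
    await new Promise(resolve => setTimeout(resolve, baseDelayMs * (attempt + 1)))
  }
  throw lastError
}
```

Wrapping the sync call this way lets a load test tolerate the transient 5xx/connection errors that otherwise surface as flakes.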
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🧹 Nitpick comments (2)
tests/queue_load.test.ts (1)
14-24: Consider removing the redundant `beforeAll` cleanup.

With `beforeEach` now performing the same cleanup (lines 21-24), the `beforeAll` block (lines 14-19) is redundant. Every test will start with a clean queue regardless.

♻️ Proposed simplification
```diff
-beforeAll(async () => {
-  // Clean up any existing messages in the test queue
-  // Count before cleanup for debugging
-  await pool.query(`DELETE FROM pgmq.q_${queueName}`)
-  await pool.query(`DELETE FROM pgmq.a_${queueName}`)
-})
-
 beforeEach(async () => {
   await pool.query(`DELETE FROM pgmq.q_${queueName}`)
   await pool.query(`DELETE FROM pgmq.a_${queueName}`)
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/queue_load.test.ts` around lines 14-24: remove the redundant beforeAll cleanup block. Since beforeEach already runs `await pool.query(...)` deletions for both `pgmq.q_${queueName}` and `pgmq.a_${queueName}` before every test, delete the entire beforeAll(...) that performs the same pool.query deletions (references: beforeAll, beforeEach, pool.query, queueName) so tests rely solely on beforeEach to reset state.

tests/cli-sdk-utils.ts (1)
260-273: Consider making the delay configurable or documenting the source.

The 100ms delay for async SDK config writes is a pragmatic workaround. If this proves insufficient in CI environments with higher latency, consider making it configurable or adding a retry loop that checks whether the config file was modified.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli-sdk-utils.ts` around lines 260 - 273, Replace the fixed 100ms sleep used to wait for async SDK writes (the anonymous Promise around setTimeout near CAPACITOR_CONFIG_PATH and configBackup handling) with a configurable wait or a retry loop: expose a constant or parameter (e.g., SDK_CONFIG_WRITE_TIMEOUT_MS or a helper waitForConfigWrite function) and implement retries that check the filesystem (using readFileSync and comparing to expected modified content or timestamp) until the config file reflects the SDK change or a timeout elapses; then restore via writeFileSync within the same try/catch as before. This keeps the existing cleanup logic around configBackup/writeFileSync but makes the delay robust and configurable for CI.
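One way to realize the suggested retry loop is sketched below. The names `waitForConfigWrite` and `SDK_CONFIG_WRITE_TIMEOUT_MS` come from the review comment's suggestion and are hypothetical; the poll interval and the predicate-based check are assumptions, not code that exists in the PR.

```typescript
import { readFileSync } from 'node:fs'

// Suggested helper: poll the config file until its contents satisfy a
// caller-supplied predicate, or fail once the timeout elapses.
const SDK_CONFIG_WRITE_TIMEOUT_MS = 5000
const POLL_INTERVAL_MS = 50

async function waitForConfigWrite(
  configPath: string,
  isWritten: (contents: string) => boolean,
  timeoutMs = SDK_CONFIG_WRITE_TIMEOUT_MS,
): Promise<void> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    try {
      if (isWritten(readFileSync(configPath, 'utf8')))
        return
    }
    catch {
      // File may not exist yet; keep polling until the deadline.
    }
    await new Promise(resolve => setTimeout(resolve, POLL_INTERVAL_MS))
  }
  throw new Error(`config at ${configPath} not updated within ${timeoutMs}ms`)
}
```

Unlike a fixed 100ms sleep, this returns as soon as the write lands and only pays the full timeout in the failure case, which makes it robust on slow CI runners.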
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a00b2273-37fd-45f1-a9ab-9b0ee2dffab8
📒 Files selected for processing (3)
- tests/bundle-metadata-rbac.unit.test.ts
- tests/cli-sdk-utils.ts
- tests/queue_load.test.ts



Summary (AI generated)
Motivation (AI generated)
The production failures and flakes were coming from test infrastructure rather than product logic. The CLI encryption tests shared global SDK state through process cwd and root-level key generation, while the queue and RBAC tests still had small timing and shared-mock races under Vitest parallelism.
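The cwd isolation described above can be sketched as follows. Everything here is illustrative: the CLI entry point, the `key create` subcommand, the key file name, and the helper name `generateKeysIsolated` are assumptions, not the actual code in `tests/cli-sdk-utils.ts`.

```typescript
import { execFileSync } from 'node:child_process'
import { mkdtempSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

// Sketch: run key generation in a child process with its own throwaway
// cwd, so parallel tests never write key files into the shared repo root
// and never mutate the parent process's working directory.
function generateKeysIsolated(cliEntry: string): string {
  const workDir = mkdtempSync(join(tmpdir(), 'cli-keys-'))
  // Every artifact the child creates lands inside workDir, not in the
  // directory the test runner happens to be executing from.
  execFileSync('node', [cliEntry, 'key', 'create'], { cwd: workDir })
  return workDir
}
```

Spawning a subprocess per test also sidesteps the global-state race: two tests can each hold their own cwd instead of fighting over a single `process.chdir` in the runner.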
Business Impact (AI generated)
This reduces false negatives in CI/CD, makes release validation more reliable, and cuts time spent re-running or investigating flaky test failures. More stable test signal lowers the risk of delayed merges and wasted engineering time.
Test Plan (AI generated)
```shell
bunx eslint tests/cli-sdk-utils.ts tests/bundle-metadata-rbac.unit.test.ts tests/queue_load.test.ts
bun run supabase:with-env -- bunx vitest run tests/bundle-metadata-rbac.unit.test.ts tests/queue_load.test.ts tests/cli-new-encryption.test.ts
```

Generated with AI