fix(e2e): wait for Unleash before running tests #869
Conversation
The frontend retries Unleash connections on failure, causing continuous React re-renders that detach DOM nodes. When Unleash is still starting (DB migration, restarts), this makes cy.click() fail with "element has detached from DOM". Wait for the Unleash deployment to be available before proceeding. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
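The fix described above boils down to one blocking step in `e2e/scripts/wait-for-ready.sh`. A minimal sketch of that step, matching the diff in this PR (the surrounding echo conventions are assumed from the script's style):

```shell
# Block until the Unleash deployment reports Available, so the frontend
# stops retrying (and re-rendering) before Cypress starts clicking.
# Falls back gracefully when Unleash is not deployed at all.
echo "⏳ Waiting for unleash..."
kubectl wait --for=condition=available --timeout=300s \
  deployment/unleash \
  -n ambient-code 2>/dev/null || echo "⚠️ Unleash not deployed (feature flags disabled)"
```

Note the trade-off: the `|| echo` fallback keeps the script usable on clusters without Unleash, but it also masks real timeouts, which is what the review below flags.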
Walkthrough

Adds a new wait sequence for the Unleash deployment in the e2e test initialization script. After verifying frontend readiness, the script now waits up to 300 seconds for the Unleash deployment to become ready, with a graceful fallback if the service is not deployed.

Changes
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
Pre-merge checks: ✅ 3 passed
Claude Code Review

Summary: PR 869 adds a `kubectl wait` step for Unleash in `e2e/scripts/wait-for-ready.sh` to prevent E2E test failures caused by React re-render loops while Unleash is still initializing. Small, focused, correct fix.

Issues by Severity
Positive Highlights
Recommendations
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e/scripts/wait-for-ready.sh`:
- Around line 27-29: The script currently swallows kubectl wait failures for
deployment/unleash; change it to first check existence of the Unleash deployment
(kubectl get deployment/unleash -n ambient-code) and only skip waiting if the
deployment is truly absent, otherwise run kubectl wait --for=condition=available
--timeout=300s deployment/unleash -n ambient-code and fail (exit non-zero) if
that wait times out or errors. Locate the wait line in wait-for-ready.sh and
replace the current "|| echo ..." fallback with an existence check + conditional
that echoes "Unleash not deployed (feature flags disabled)" only when the
deployment is missing and returns a non-zero exit (exit 1) when the deployment
exists but never becomes available.
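The behavior that prompt asks for can be sketched as follows. This assumes the script does not already run under `set -e`, so the hard failure is made explicit; the error message and the `exit 1` are illustrative choices, not taken from the PR:

```shell
# Existence check first; hard-fail only when Unleash is deployed but
# never becomes Available. Skipping is reserved for the truly-absent case.
echo "⏳ Waiting for unleash..."
if kubectl get deployment/unleash -n ambient-code >/dev/null 2>&1; then
  kubectl wait --for=condition=available --timeout=300s \
    deployment/unleash \
    -n ambient-code || { echo "❌ Unleash never became available" >&2; exit 1; }
else
  echo "⚠️ Unleash not deployed (feature flags disabled)"
fi
```

The key difference from the merged diff is that a deployed-but-unhealthy Unleash now fails the readiness script instead of letting the e2e suite start against an unstable frontend.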
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 31e6b0ba-0bb2-4768-9da6-63cb53341832
📒 Files selected for processing (1)
e2e/scripts/wait-for-ready.sh
```shell
kubectl wait --for=condition=available --timeout=300s \
  deployment/unleash \
  -n ambient-code 2>/dev/null || echo "⚠️ Unleash not deployed (feature flags disabled)"
```
Don't swallow Unleash readiness failures.
In this repo, Unleash is part of the base manifests and wait-for-ready.sh runs after deploy, so this `|| echo ...` path will also hide real timeouts and crash loops, not just a missing deployment. That lets E2E continue before feature flags are stable, which defeats the purpose of this fix. Check for existence first, then fail if the deployed rollout never becomes available. As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
Suggested fix

```diff
 # Wait for Unleash (feature flags - frontend retries on connect failure cause re-render loops)
 echo "⏳ Waiting for unleash..."
-kubectl wait --for=condition=available --timeout=300s \
-  deployment/unleash \
-  -n ambient-code 2>/dev/null || echo "⚠️ Unleash not deployed (feature flags disabled)"
+if kubectl get deployment/unleash -n ambient-code >/dev/null 2>&1; then
+  kubectl wait --for=condition=available --timeout=300s \
+    deployment/unleash \
+    -n ambient-code
+else
+  echo "⚠️ Unleash not deployed (feature flags disabled)"
+fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
# Wait for Unleash (feature flags - frontend retries on connect failure cause re-render loops)
echo "⏳ Waiting for unleash..."
if kubectl get deployment/unleash -n ambient-code >/dev/null 2>&1; then
  kubectl wait --for=condition=available --timeout=300s \
    deployment/unleash \
    -n ambient-code
else
  echo "⚠️ Unleash not deployed (feature flags disabled)"
fi
```
Summary
- Reproduced the failure on `main` with `cy.click() failed because the page updated`: the "New Session" button gets detached from the DOM by continuous React re-renders while the frontend retries its Unleash connection
- Added a `kubectl wait` for the Unleash deployment in `wait-for-ready.sh` so all services are stable before tests begin

Test plan
🤖 Generated with Claude Code