
fix(e2e): wait for Unleash before running tests#869

Merged
Gkrumbach07 merged 1 commit into main from fix/e2e-wait-for-unleash
Mar 10, 2026

Conversation

Contributor

@maskarb maskarb commented Mar 10, 2026

Summary

  • E2E tests fail on main with "cy.click() failed because the page updated" — the "New Session" button gets detached from the DOM by continuous React re-renders
  • Root cause: the frontend retries failed Unleash connections, triggering re-render loops when Unleash is still starting (DB migration, pod restarts)
  • Fix: add kubectl wait for the Unleash deployment in wait-for-ready.sh so all services are stable before tests begin

Test plan

  • Reproduced failure locally on fresh Kind cluster without fix (100% failure rate)
  • Verified fix: fresh Kind cluster with Unleash wait → 57/57 tests passing
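For context, here is a hedged sketch of the wait this PR adds to wait-for-ready.sh, wrapped in a function purely for illustration (the real script is straight-line); the deployment name and namespace are taken from the PR itself:

```shell
#!/usr/bin/env bash
# Sketch of the wait added to e2e/scripts/wait-for-ready.sh in this PR.
# The "|| echo" fallback keeps minimal clusters (no Unleash) from failing
# the script; the review comments note it also hides genuine wait failures.
wait_for_unleash() {
  kubectl wait --for=condition=available --timeout=300s \
    deployment/unleash \
    -n ambient-code 2>/dev/null || echo "Unleash not deployed (feature flags disabled)"
}
```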

🤖 Generated with Claude Code

The frontend retries Unleash connections on failure, causing continuous
React re-renders that detach DOM nodes. When Unleash is still starting
(DB migration, restarts), this makes cy.click() fail with
"element has detached from DOM". Wait for the Unleash deployment to be
available before proceeding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

coderabbitai bot commented Mar 10, 2026

Walkthrough

Adds a new wait sequence for the Unleash deployment in the e2e test initialization script. After verifying frontend readiness, the script now waits for the Unleash deployment to be ready with a 300-second timeout, with graceful fallback if the service is not deployed.

Changes

Cohort / File(s): Unleash readiness check — e2e/scripts/wait-for-ready.sh
Summary: Adds a kubectl wait command for the Unleash deployment (ambient-code namespace) with a 300s timeout and a warning fallback when the deployment is missing.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check ✅ Passed — The title accurately describes the main change: adding a wait for the Unleash deployment before running E2E tests, which directly addresses the root cause of the test failures.
  • Description check ✅ Passed — The description clearly explains the problem (E2E test failures due to DOM detachment), the root cause (Unleash startup issues triggering React re-renders), and the solution (adding kubectl wait for Unleash), with test results demonstrating that the fix works.
  • Docstring Coverage ✅ Passed — No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.


@github-actions
Contributor

Claude Code Review

Summary: PR 869 adds a kubectl wait step for Unleash in e2e/scripts/wait-for-ready.sh to prevent E2E test failures caused by React re-render loops when Unleash is still initializing. Small, focused, correct fix.

Issues by Severity

  • Blocker: None
  • Critical: None
  • Major: None
  • Minor: The 2>/dev/null suppresses ALL kubectl wait stderr, including genuine K8s errors. A stuck Unleash pod (image pull backoff, DB stall) would silently pass, then print a misleading not-deployed message. A kubectl get pre-check would be more targeted, but this is a style note only - not a blocker.

Positive Highlights

  • Exact pattern consistency: mirrors MinIO and other deployment wait blocks verbatim (same timeout, namespace flag, echo style)
  • Accurate root-cause comment: explains the re-render loop mechanism, helping future debuggers
  • Graceful degradation: the fallback echo handles minimal clusters where Unleash is not deployed
  • Verified fix: PR documents 100% failure rate before and 57/57 tests passing after

Recommendations

  1. (Optional, low priority) Use a kubectl get pre-check instead of 2>/dev/null to avoid masking real Unleash errors with a misleading not-deployed message. Not required for merge.
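A minimal sketch of that recommendation, again wrapped in a function for illustration, with names assumed from the PR under review:

```shell
#!/usr/bin/env bash
# Sketch of the recommended pre-check: only skip the wait when the
# deployment is truly absent; otherwise let a wait timeout fail the script.
# Deployment/namespace names ("unleash", "ambient-code") come from the PR.
wait_for_unleash_strict() {
  if kubectl get deployment/unleash -n ambient-code >/dev/null 2>&1; then
    # Deployment exists: propagate kubectl wait's exit code on timeout/error.
    kubectl wait --for=condition=available --timeout=300s \
      deployment/unleash -n ambient-code
  else
    echo "Unleash not deployed (feature flags disabled)"
  fi
}
```

Compared to the merged version, this fails loudly when a deployed Unleash never becomes available, instead of printing a misleading "not deployed" message.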

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@e2e/scripts/wait-for-ready.sh`:
- Around line 27-29: The script currently swallows kubectl wait failures for
deployment/unleash; change it to first check existence of the Unleash deployment
(kubectl get deployment/unleash -n ambient-code) and only skip waiting if the
deployment is truly absent, otherwise run kubectl wait --for=condition=available
--timeout=300s deployment/unleash -n ambient-code and fail (exit non-zero) if
that wait times out or errors. Locate the wait line in wait-for-ready.sh and
replace the current "|| echo ..." fallback with an existence check + conditional
that echoes "Unleash not deployed (feature flags disabled)" only when the
deployment is missing and returns a non-zero exit (exit 1) when the deployment
exists but never becomes available.


📥 Commits

Reviewing files that changed from the base of the PR and between 07d50e5 and c8c6c0f.

📒 Files selected for processing (1)
  • e2e/scripts/wait-for-ready.sh

Comment on lines +27 to +29
kubectl wait --for=condition=available --timeout=300s \
  deployment/unleash \
  -n ambient-code 2>/dev/null || echo "⚠️  Unleash not deployed (feature flags disabled)"
⚠️ Potential issue | 🟠 Major

Don't swallow Unleash readiness failures.

In this repo Unleash is part of the base manifests and wait-for-ready.sh runs after deploy, so this || echo ... path will also hide real timeouts/crash loops, not just a missing deployment. That lets E2E continue before feature flags are stable, which defeats the purpose of this fix. Check for existence first, then fail if the deployed rollout never becomes available. As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

Suggested fix
 # Wait for Unleash (feature flags - frontend retries on connect failure cause re-render loops)
 echo "⏳ Waiting for unleash..."
-kubectl wait --for=condition=available --timeout=300s \
-  deployment/unleash \
-  -n ambient-code 2>/dev/null || echo "⚠️  Unleash not deployed (feature flags disabled)"
+if kubectl get deployment/unleash -n ambient-code >/dev/null 2>&1; then
+  kubectl wait --for=condition=available --timeout=300s \
+    deployment/unleash \
+    -n ambient-code
+else
+  echo "⚠️  Unleash not deployed (feature flags disabled)"
+fi

@Gkrumbach07 Gkrumbach07 merged commit f981ed3 into main Mar 10, 2026
7 checks passed
@Gkrumbach07 Gkrumbach07 deleted the fix/e2e-wait-for-unleash branch March 10, 2026 16:08