
feat(frontend): HPA for Frontend #1059

Merged
Gkrumbach07 merged 4 commits into ambient-code:main from syntaxsdev:feat/frontend-hpa
Mar 26, 2026

Conversation

@syntaxsdev
Contributor

Description:

Add HPA for frontend deployment

  • frontend-hpa.yaml: HPA that scales the frontend deployment between 2–10 pods based on CPU
    utilization (70% threshold) of the frontend container only, with conservative scale-down behavior

  • kustomization.yaml: one line added to include frontend-hpa.yaml as a resource in the production
    overlay
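For context, the kustomization change is a one-line resource addition. A minimal sketch (the base reference shown is an assumption; only the frontend-hpa.yaml entry comes from this PR):

```yaml
# components/manifests/overlays/production/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # assumed pre-existing base reference
  - frontend-hpa.yaml   # the line added by this PR
```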

@coderabbitai
Contributor

coderabbitai bot commented Mar 26, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 422908e4-8e1d-4bdf-bce5-73c5202850d8

📥 Commits

Reviewing files that changed from the base of the PR and between fa34bd4 and 23ebb55.

📒 Files selected for processing (1)
  • components/manifests/base/core/frontend-deployment.yaml

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting


Walkthrough

Adds a production HorizontalPodAutoscaler named frontend targeting Deployment/frontend with minReplicas: 4, maxReplicas: 10, container-scoped CPU utilization target 70% (container frontend), and customized scaleUp/scaleDown stabilization windows and pod policies; also removes explicit replica count from the base Deployment and includes the HPA in the production kustomize overlay.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| HPA Configuration<br>`components/manifests/overlays/production/frontend-hpa.yaml` | Adds HorizontalPodAutoscaler/frontend (autoscaling/v2) targeting Deployment/frontend with minReplicas: 4, maxReplicas: 10, a container-scoped CPU ContainerResource metric for container frontend (target average utilization 70%), scaleDown stabilizationWindowSeconds: 300 with a Pods policy limiting scale-down to 1 pod per 60s, and scaleUp stabilizationWindowSeconds: 30 with a Pods policy limiting scale-up to 2 pods per 60s. |
| Kustomization Update<br>`components/manifests/overlays/production/kustomization.yaml` | Adds frontend-hpa.yaml to the production overlay resources list so the HPA is included in the kustomize build output. |
| Deployment Change<br>`components/manifests/base/core/frontend-deployment.yaml` | Removes the explicit spec.replicas: 2 from the frontend Deployment to allow HPA-managed replica counts. |
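Assembled from the summary above, the HPA manifest would look roughly like this. The field values are taken from the walkthrough; the exact layout is a sketch, not necessarily the file as merged:

```yaml
# components/manifests/overlays/production/frontend-hpa.yaml (sketch)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 4
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: frontend      # scopes the metric to the app container only
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # conservative: wait 5 min before shrinking
      policies:
        - type: Pods
          value: 1
          periodSeconds: 60             # remove at most 1 pod per minute
    scaleUp:
      stabilizationWindowSeconds: 30
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60             # add at most 2 pods per minute
```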

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title 'feat(frontend): HPA for Frontend' clearly summarizes the main change, adding a Horizontal Pod Autoscaler for the frontend deployment, which aligns with the primary objective of the changeset. |
| Description check | ✅ Passed | The description is directly related to the changeset, providing specific details about the HPA configuration (replica ranges, CPU utilization threshold, scale-down behavior) and mentioning the kustomization.yaml update. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@components/manifests/overlays/production/frontend-hpa.yaml`:
- Around line 12-13: The base Deployment hardcodes replicas which conflicts with
the HPA; create a patch file named frontend-deployment-patch.yaml in the
production overlay that sets spec.replicas to null for the Deployment named
"frontend" and add that patch to the overlay kustomization.yaml's patches
section so the base's replicas field is removed at build time; reference the
existing frontend Deployment (name: frontend, kind: Deployment, apiVersion:
apps/v1) and the production HPA (frontend-hpa.yaml) to ensure HPA has sole
control of scaling.
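The suggested fix can be sketched as follows. With Kustomize strategic-merge semantics, `replicas: null` removes the field from the base at build time:

```yaml
# components/manifests/overlays/production/frontend-deployment-patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: null   # strips the base's hardcoded replicas so the HPA owns scaling
```

```yaml
# added to the overlay kustomization.yaml (sketch)
patches:
  - path: frontend-deployment-patch.yaml
```

The PR ultimately took the simpler route of deleting spec.replicas from the base Deployment instead, per the later commits.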
🪄 Autofix (Beta)

❌ Autofix failed (check again to retry)


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 9ae909a2-cef0-4c6d-b47c-aea5d1eef7ae

📥 Commits

Reviewing files that changed from the base of the PR and between 9ca8168 and ad1e26c.

📒 Files selected for processing (2)
  • components/manifests/overlays/production/frontend-hpa.yaml
  • components/manifests/overlays/production/kustomization.yaml

@ktdreyer ktdreyer mentioned this pull request Mar 26, 2026
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
jeremyeder previously approved these changes Mar 26, 2026
Contributor

@jeremyeder jeremyeder left a comment


LGTM. Set minReplicas to 4. HPA config looks solid — good use of ContainerResource to exclude the oauth-proxy sidecar, conservative scale-down, and CPU requests are already in place.

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
components/manifests/overlays/production/frontend-hpa.yaml (1)

8-13: ⚠️ Potential issue | 🔴 Critical

Prevent HPA/GitOps replica ownership conflict.

HorizontalPodAutoscaler/frontend (Line 8-Line 13) targets Deployment/frontend, but the base deployment still declares spec.replicas: 2. In GitOps reconciliation, that causes replica thrashing between the deployment desired state and HPA decisions. Remove/neutralize spec.replicas for Deployment/frontend in production (or configure ignore-differences for that field) so HPA is the sole scaler.
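If the ignore-differences alternative were chosen instead, and assuming Argo CD as the GitOps tool (the thread does not name one), the Application spec fragment could look like:

```yaml
# Argo CD Application fragment (sketch; assumes Argo CD is the reconciler)
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: frontend
      jsonPointers:
        - /spec/replicas   # let the HPA own this field without drift alerts
```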

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manifests/overlays/production/frontend-hpa.yaml` around lines 8 -
13, The HPA resource HorizontalPodAutoscaler/frontend is targeting
Deployment/frontend but the base Deployment still defines spec.replicas (2),
causing GitOps reconciliation conflicts; remove or neutralize the
Deployment/frontend's spec.replicas for the production overlay so the HPA is the
sole scaler—either delete the spec.replicas field in the production overlay (or
use a Kustomize patch/strategicMerge/json6902 to remove/set it to null) for
Deployment/frontend, or add an ignore-differences rule to your GitOps config for
the spec.replicas field of Deployment/frontend so HPA decisions are not
overwritten.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: b2bea983-a390-42f7-9ad2-a234662f3448

📥 Commits

Reviewing files that changed from the base of the PR and between ad1e26c and fa34bd4.

📒 Files selected for processing (1)
  • components/manifests/overlays/production/frontend-hpa.yaml

@coderabbitai
Contributor

coderabbitai bot commented Mar 26, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

❌ Failed to clone repository into sandbox. Please try again.

Ambient Code Bot and others added 2 commits March 26, 2026 17:37
Add a Kustomize patch in the production overlay that nulls out the
replicas field from the base frontend Deployment, so the HPA has
sole control over scaling and avoids reconciliation conflicts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Remove hardcoded replicas from the base Deployment instead of using
a Kustomize patch. The HPA controls scaling in production, and
Kubernetes defaults to 1 replica elsewhere.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
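The base Deployment change described in that commit amounts to dropping a single line; a sketch (surrounding fields are assumptions):

```diff
# components/manifests/base/core/frontend-deployment.yaml (sketch)
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: frontend
 spec:
-  replicas: 2
   selector:
     matchLabels:
       app: frontend
```

With no replicas set, Kubernetes defaults to 1 replica outside production, while the production HPA keeps the count between 4 and 10.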
@Gkrumbach07 Gkrumbach07 merged commit 3028dc9 into ambient-code:main Mar 26, 2026
13 of 15 checks passed


3 participants