
feat(kube): add KEDA event-driven autoscaler operator #7605

Merged
h0lybyte merged 1 commit into dev from trunk/keda-1772665782
Mar 5, 2026

Conversation


@h0lybyte commented Mar 4, 2026

Ref #7590

Summary

  • Deploys KEDA 2.19.0 (latest stable) as an ArgoCD-managed Helm chart
  • Follows the CNPG multi-source pattern: vendor Helm chart + repo-hosted values override
  • Added to root kustomization.yaml alongside other platform operators
  • Deploys to keda namespace
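
The multi-source pattern above can be sketched as an ArgoCD Application roughly like the following. The chart version (2.19.0), target namespace (`keda`), values path, and retry policy come from this PR; the repo URL, project, and ref names are placeholders, and the actual `apps/kube/keda/application.yaml` may differ:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: keda
  namespace: argocd
spec:
  project: default
  sources:
    # Source 1: the vendor Helm chart, pinned to a specific version
    - repoURL: https://kedacore.github.io/charts
      chart: keda
      targetRevision: 2.19.0
      helm:
        valueFiles:
          - $values/apps/kube/keda/manifests/values.yaml
    # Source 2: this repo, referenced only for the values override
    - repoURL: https://github.com/example/infra.git  # placeholder repo URL
      targetRevision: dev
      ref: values
  destination:
    server: https://kubernetes.default.svc
    namespace: keda
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
```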

What KEDA enables

KEDA extends Kubernetes autoscaling beyond CPU/memory to support event-driven triggers:

| Trigger           | Use case                        |
|-------------------|---------------------------------|
| Redis list length | n8n-worker queue scaling (#7590) |
| Kafka lag         | Analytics processors            |
| Prometheus query  | Custom metric-based scaling     |
| Cron schedule     | Time-based scaling              |
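
As an illustration of one row in the table, a Prometheus-query trigger takes roughly this shape; the deployment name, query, threshold, and Prometheus address here are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: analytics-processor
  namespace: default
spec:
  scaleTargetRef:
    name: analytics-processor   # Deployment to scale (illustrative)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total[2m]))
        threshold: "100"
```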

Files

  • apps/kube/keda/application.yaml — ArgoCD Application (Helm chart pinned to 2.19.0)
  • apps/kube/keda/manifests/values.yaml — Helm values (resources, security, monitoring)
  • apps/kube/kustomization.yaml — Added keda/application.yaml to resources

Configuration highlights

  • Operator + metrics server: 100m/128Mi requests, 500m/512Mi limits
  • PodMonitor enabled for Prometheus scraping
  • Security: non-root, read-only root filesystem, all capabilities dropped
  • Retry: 3 attempts with exponential backoff
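
A minimal sketch of a values override matching the highlights above. Exact key names vary between KEDA chart versions, so treat the structure as an assumption rather than the file's actual contents:

```yaml
resources:
  operator:
    requests: {cpu: 100m, memory: 128Mi}
    limits:   {cpu: 500m, memory: 512Mi}
  metricServer:
    requests: {cpu: 100m, memory: 128Mi}
    limits:   {cpu: 500m, memory: 512Mi}
prometheus:
  operator:
    enabled: true
    podMonitor:
      enabled: true      # scraped by Prometheus Operator
securityContext:
  operator:
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    capabilities:
      drop: ["ALL"]
```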

Next steps (after merge + deploy)

  1. Verify KEDA pods healthy: kubectl get pods -n keda
  2. Verify CRDs registered: kubectl get crd | grep keda
  3. Create ScaledObject for n8n-worker targeting Redis Bull queue depth
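
Step 3 could look roughly like the ScaledObject below, using KEDA's `redis` list-length scaler. The namespace, Redis address, Bull list name, and replica bounds are all placeholders to be confirmed against the actual n8n deployment:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: n8n-worker
  namespace: n8n                                   # assumed namespace
spec:
  scaleTargetRef:
    name: n8n-worker                               # Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 8
  triggers:
    - type: redis
      metadata:
        address: redis.n8n.svc.cluster.local:6379  # assumed Redis service
        listName: bull:jobs:wait                   # assumed Bull wait list key
        listLength: "10"                           # scale out past 10 queued jobs
```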

Test plan

  • kubectl apply --dry-run=client passes for application.yaml
  • CI passes
  • After deploy: KEDA operator + metrics server running

Deploys KEDA 2.19.0 (event-driven autoscaler) to the keda namespace.
Follows the CNPG multi-source Helm pattern with pinned version and
values override from repo. Added to root kustomization alongside
other platform operators.

KEDA enables autoscaling workloads based on external events (Redis
queue depth, Kafka lag, Prometheus metrics) instead of CPU/memory.
Primary use case: scaling n8n-worker based on Bull queue length.

Ref #7590

github-actions bot commented Mar 4, 2026

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Scanned Files

None

@h0lybyte h0lybyte merged commit c6d1c87 into dev Mar 5, 2026
5 checks passed
@h0lybyte h0lybyte deleted the trunk/keda-1772665782 branch March 5, 2026 00:06