From efc23e6747cf8af11eaa6f9b1b194e7f92a4e495 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Thu, 25 Sep 2025 15:42:16 -0400 Subject: [PATCH 1/7] Update _index.md --- .../kedify-http-autoscaling/_index.md | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/_index.md b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/_index.md index 901aa77c37..7fc65f7eb4 100644 --- a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/_index.md @@ -7,8 +7,7 @@ cascade: minutes_to_complete: 45 -who_is_this_for: > - Developers and SREs running HTTP-based workloads on Kubernetes who want to enable intelligent, event-driven autoscaling. +who_is_this_for: This is an introductory topic for developers running HTTP-based workloads on Kubernetes who want to enable event-driven autoscaling. learning_objectives: - Install Kedify (KEDA build, HTTP Scaler, and Kedify Agent) via Helm @@ -18,14 +17,13 @@ learning_objectives: prerequisites: - A running Kubernetes cluster (local or cloud) - kubectl and helm installed locally - - Access to the Kedify Service dashboard (https://dashboard.kedify.io/) to obtain Organization ID and API Key — log in or create an account if you don’t have one + - Access to the Kedify Service dashboard (https://dashboard.kedify.io/) to obtain Organization ID and API Key. 
You can log in or create an account if you don’t have one author: Zbynek Roubalik ### Tags skilllevels: Introductory subjects: Containers and Virtualization -cloud_service_providers: Any armips: - Neoverse operatingsystems: From e62135a57a02481863e269293819df1a780c82b8 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Thu, 25 Sep 2025 15:47:44 -0400 Subject: [PATCH 2/7] Update install-kedify-helm.md --- .../install-kedify-helm.md | 28 +++++++++++-------- 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md index 1a79f07306..0c7d70d6fa 100644 --- a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md +++ b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md @@ -4,17 +4,19 @@ weight: 2 layout: "learningpathall" --- -This page installs Kedify on your cluster using Helm. You’ll add the Kedify chart repo, install KEDA (Kedify build), the HTTP Scaler, and the Kedify Agent, then verify everything is running. +In this section you will learn how to install Kedify on your Kubernetes cluster using Helm. You will add the Kedify chart repo, install KEDA (Kedify build), the HTTP Scaler, and the Kedify Agent, then verify everything is running. -For more details and all installation methods, see Kedify installation docs: https://docs.kedify.io/installation/helm#installation-on-arm +For more details and all installation methods on Arm, you can refer to the [Kedify installation docs](https://docs.kedify.io/installation/helm#installation-on-arm) -## Prerequisites +## Before you begin -- A running Kubernetes cluster (kind, minikube, EKS, GKE, AKS, etc.) 
-- kubectl and helm installed and configured to talk to your cluster -- Kedify Service account (https://dashboard.kedify.io/) to obtain Organization ID and API Key — log in or create an account if you don’t have one +You will need: -## Prepare installation +- A running Kubernetes cluster (kind, minikube, EKS, GKE, AKS, etc.). This can be on any cloud service provider. +- kubectl and helm installed and configured to communicate with your cluster +- A Kedify Service account (https://dashboard.kedify.io/) to obtain Organization ID and API Key — log in or create an account if you don’t have one + +## Installation 1) Get your Organization ID: In the Kedify dashboard (https://dashboard.kedify.io/) go to Organization -> Details and copy the ID. @@ -25,7 +27,7 @@ For more details and all installation methods, see Kedify installation docs: htt kubectl get secret -n keda kedify-agent -o=jsonpath='{.data.apikey}' | base64 --decode ``` -- Otherwise, in the Kedify dashboard (https://dashboard.kedify.io/) go to Organization -> API Keys, click Create Agent Key, and copy the key. +Otherwise, in the Kedify dashboard (https://dashboard.kedify.io/) go to Organization -> API Keys, click Create Agent Key, and copy the key. Note: The API Key is shared across all your Agent installations. If you regenerate it, update existing Agent installs and keep it secret. @@ -40,9 +42,9 @@ helm repo update ## Helm installation -Most providers like AWS EKS and Azure AKS automatically place pods on ARM nodes when you specify `nodeSelector` for `kubernetes.io/arch=arm64`. However, Google Kubernetes Engine (GKE) applies an explicit taint on ARM nodes, requiring matching `tolerations`. +Most providers like AWS EKS and Azure AKS automatically place pods on Arm nodes when you specify `nodeSelector` for `kubernetes.io/arch=arm64`. However, Google Kubernetes Engine (GKE) applies an explicit taint on Arm nodes, requiring matching `tolerations`. 
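Configuring both `nodeSelector` and the matching toleration covers every provider. As a sketch, a values file expressing that combination could look like the snippet below (illustrative only — whether the Kedify charts accept these keys at the top level is an assumption; verify against each chart's values schema before using it). The taint shown is the one GKE applies to Arm nodes:

```bash
# Illustrative sketch: write a values file that pins pods to arm64 nodes
# and tolerates the arm64 taint GKE places on those nodes.
cat > /tmp/arm64-values.yaml <<'EOF'
nodeSelector:
  kubernetes.io/arch: arm64
tolerations:
  - key: kubernetes.io/arch
    operator: Equal
    value: arm64
    effect: NoSchedule
EOF

# Show what was written
cat /tmp/arm64-values.yaml
```

You would then pass this file to each `helm upgrade --install` command with `-f /tmp/arm64-values.yaml`, or express the same settings as `--set` flags.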
-To ensure a portable deployment strategy across all cloud providers, we recommend configuring both `nodeSelector` and `tolerations` in your Helm values or CLI flags.
+To ensure a portable deployment strategy across all cloud providers, it is recommended that you configure both `nodeSelector` and `tolerations` in your Helm values or CLI flags.

 Install each component into the keda namespace. Replace placeholders where noted.

@@ -101,13 +103,15 @@ helm upgrade --install kedify-agent kedifykeda/kedify-agent \

 ## Verify installation

+You are now ready to verify your installation:
+
 ```bash
 kubectl get pods -n keda
 ```

-Expected example (names may differ):
+The expected output should look similar to the following (pod names may differ):

-```text
+```output
 NAME READY STATUS RESTARTS AGE
 keda-add-ons-http-external-scaler-xxxxx 1/1 Running 0 1m
 keda-add-ons-http-interceptor-xxxxx 1/1 Running 0 1m

From ab93015d7351197a77c6bd28f840a286e0dc8c99 Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Thu, 25 Sep 2025 15:49:36 -0400
Subject: [PATCH 3/7] Update install-kedify-helm.md

---
 .../kedify-http-autoscaling/install-kedify-helm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md
index 0c7d70d6fa..273da1405b 100644
--- a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md
+++ b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md
@@ -121,4 +121,4 @@ keda-operator-metrics-apiserver-xxxxx 1/1 Running 0 1m
 kedify-agent-xxxxx 1/1 Running 0 1m
 ```

-Proceed to the next section to deploy a sample HTTP app and test autoscaling.
+Proceed to the next section to learn how to install an Ingress controller before deploying a sample HTTP app and testing autoscaling.
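If you script your cluster setup, the `kubectl get pods` readiness check above can be automated rather than read by eye. The helper below is a sketch of the same check (my own illustration, not from the Kedify docs): it parses `kubectl get pods --no-headers`-style lines and succeeds only when every pod is `Running` and fully ready. The sample input is illustrative:

```bash
# Succeed only when every pod line reports Running and fully Ready (m/m).
pods_all_ready() {
  not_ready=0
  while read -r name ready status rest; do
    [ -n "$name" ] || continue
    want="${ready%%/*}"   # desired container count
    have="${ready##*/}"   # total container count
    if [ "$status" != "Running" ] || [ "$want" != "$have" ]; then
      not_ready=$((not_ready + 1))
    fi
  done
  [ "$not_ready" -eq 0 ]
}

# Illustrative sample; with a cluster you would instead pipe in:
#   kubectl get pods -n keda --no-headers
sample='keda-add-ons-http-external-scaler-abc 1/1 Running 0 1m
keda-operator-def 1/1 Running 0 1m
kedify-agent-ghi 1/1 Running 0 1m'

if printf '%s\n' "$sample" | pods_all_ready; then
  echo "all pods ready"
else
  echo "some pods are not ready yet"
fi
```

Run as-is against the sample input, this prints `all pods ready`; against a live cluster, pipe the real `kubectl` output through `pods_all_ready` instead.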
From 3c9336ef6eac13fa07e144972caa70f47f1c6104 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Thu, 25 Sep 2025 15:58:05 -0400 Subject: [PATCH 4/7] Update install-ingress.md --- .../kedify-http-autoscaling/install-ingress.md | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-ingress.md b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-ingress.md index 2782610599..038e07e45d 100644 --- a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-ingress.md +++ b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-ingress.md @@ -4,7 +4,7 @@ weight: 3 layout: "learningpathall" --- -Before deploying HTTP applications with Kedify autoscaling, you need an Ingress Controller to handle incoming traffic. Most major cloud providers (AWS EKS, Google GKE, Azure AKS) do not include an Ingress Controller by default in their managed Kubernetes offerings. +Before deploying HTTP applications with Kedify autoscaling, you need an Ingress Controller to handle incoming traffic. Most managed Kubernetes services offered by major cloud providers (AWS EKS, Google GKE, Azure AKS) do not include an Ingress Controller by default. {{% notice Note %}} If your cluster already has an Ingress Controller installed and configured, you can skip this step and proceed directly to the [HTTP Scaling guide](../http-scaling/). @@ -64,13 +64,13 @@ This will save the external IP or hostname in the `INGRESS_IP` environment varia ## Configure Access -For this tutorial, you have two options: +To configure access to the ingress controller, you have two options: ### Option 1: DNS Setup (Recommended for production) Point `application.keda` to your ingress controller's external IP/hostname using your DNS provider. 
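If you do not manage a DNS zone, a local `/etc/hosts` entry can stand in for the DNS record while testing. The sketch below (my own illustration) writes the mapping to a scratch file so you can see the shape of the entry before touching the real `/etc/hosts`; the fallback address is a documentation placeholder. Note that `/etc/hosts` accepts IP addresses only, so when your load balancer exposes a hostname (as on AWS), use the Host-header option instead.

```bash
# Sketch: the hosts-file line that maps application.keda to the ingress
# address. Written to a temp file here; for real use, append the same
# line to /etc/hosts (requires sudo).
INGRESS_IP="${INGRESS_IP:-203.0.113.10}"   # placeholder documentation IP if unset

hosts_entry="$INGRESS_IP application.keda"
tmp_hosts="$(mktemp)"
printf '%s\n' "$hosts_entry" >> "$tmp_hosts"

# Confirm the entry is present
grep 'application\.keda' "$tmp_hosts"
```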
### Option 2: Host Header (Quick setup) -Use the external IP/hostname directly with a `Host:` header in your requests. When testing, you'll use: +Use the external IP/hostname directly with a `Host:` header in your requests. When testing, you will use: ```bash curl -H "Host: application.keda" http://$INGRESS_IP @@ -80,7 +80,7 @@ The `$INGRESS_IP` environment variable contains the actual external IP or hostna ## Verification -Test that the ingress controller is working by checking its readiness: +Verify that the ingress controller is working by checking its readiness: ```bash kubectl get pods --namespace ingress-nginx @@ -88,6 +88,5 @@ kubectl get pods --namespace ingress-nginx You should see the `ingress-nginx-controller` pod in `Running` status. -## Next Steps -Now that you have an Ingress Controller installed and configured, proceed to the [HTTP Scaling guide](../http-scaling/) to deploy an application and configure Kedify autoscaling. +Now that you have an Ingress Controller installed and configured, proceed to the next section to deploy an application and configure Kedify autoscaling. From e8024c1c0c268afe1b50c26d85b37e017c686c97 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Thu, 25 Sep 2025 16:20:40 -0400 Subject: [PATCH 5/7] Update http-scaling.md --- .../kedify-http-autoscaling/http-scaling.md | 94 ++++++++++--------- 1 file changed, 48 insertions(+), 46 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/http-scaling.md b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/http-scaling.md index 1bb91db087..88715908f3 100644 --- a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/http-scaling.md +++ b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/http-scaling.md @@ -4,29 +4,28 @@ weight: 4 layout: "learningpathall" --- -Use this section to get a quick, hands-on feel for Kedify HTTP autoscaling. 
We’ll deploy a small web service, expose it through a standard Kubernetes Ingress, and rely on Kedify’s autowiring to route traffic via its proxy so requests are measured and drive scaling. +In this section, you’ll gain hands-on experience with Kedify HTTP autoscaling. You will deploy a small web service, expose it through a standard Kubernetes Ingress, and rely on Kedify’s autowiring to route traffic via its proxy so requests are measured and drive scaling. -Scale a real HTTP app exposed through Kubernetes Ingress using Kedify’s [kedify-http](https://docs.kedify.io/scalers/http-scaler/) scaler. You will deploy a simple app, enable autoscaling with a [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/), generate load, and observe the system scale out and back in (including scale-to-zero when idle). +You will scale a real HTTP app exposed through Kubernetes Ingress using Kedify’s [kedify-http](https://docs.kedify.io/scalers/http-scaler/) scaler. You will deploy a simple application, enable autoscaling with a [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/), generate load, and observe the system scale out and back in (including scale-to-zero when idle). ## How it works With ingress autowiring enabled, Kedify automatically routes traffic through its proxy before it reaches your Service/Deployment: -``` +```output Ingress → kedify-proxy → Service → Deployment ``` The [Kedify Proxy](https://docs.kedify.io/scalers/http-scaler/#kedify-proxy) gathers request metrics used by the scaler to make decisions. 
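To build intuition for how the proxy's request metrics turn into a replica count, here is a simplified, self-contained sketch of a rate-based scaling calculation (my own illustration of the general KEDA/HPA approach, not Kedify's actual implementation). The target, minimum, and maximum values mirror the tutorial's configuration:

```bash
# Simplified model: desired replicas = ceil(observed_rate / target_per_replica),
# clamped between minReplicaCount and maxReplicaCount.
desired_replicas() {
  rate="$1"; target="$2"; min="$3"; max="$4"
  desired=$(( (rate + target - 1) / target ))   # integer ceiling
  if [ "$desired" -lt "$min" ]; then desired="$min"; fi
  if [ "$desired" -gt "$max" ]; then desired="$max"; fi
  echo "$desired"
}

desired_replicas 0 10 0 10     # no traffic → 0 (scale to zero)
desired_replicas 25 10 0 10    # 25 req/s against a per-replica target of 10 → 3
desired_replicas 500 10 0 10   # burst → clamped at maxReplicaCount, 10
```

The real scaler evaluates the rate over a configurable granularity and window, but the clamping between the configured minimum and maximum works the same way.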
-## What you’ll deploy
-
-- Deployment & Service: an HTTP server with a small response delay to simulate work
-- Ingress: public entry using host `application.keda`
-- ScaledObject: Kedify HTTP scaler with `trafficAutowire: ingress`
+## Deployment Overview
+- Deployment & Service: An HTTP server with a small response delay to simulate work
+- Ingress: Public entry point configured using host `application.keda`
+- ScaledObject: A Kedify HTTP scaler using `trafficAutowire: ingress`

-## Step 0 — Set up Ingress IP environment variable
+## Step 1 — Configure the Ingress IP environment variable

-Before testing the application, ensure you have the `INGRESS_IP` environment variable set with your ingress controller's external IP or hostname.
+Before testing the application, make sure the `INGRESS_IP` environment variable is set to your ingress controller’s external IP address or hostname.

 If you followed the [Install Ingress Controller](../install-ingress/) guide, you should already have this set. If not, or if you're using an existing ingress controller, run this command:

 ```bash
 export INGRESS_IP=$(kubectl get service ingress-nginx-controller --namespace=ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}')
 echo "Ingress IP/Hostname: $INGRESS_IP"
 ```

-You should now have the correct IP address or hostname stored in the `$INGRESS_IP` environment variable. If the command doesn't print any value, please repeat it after some time.
+This will store the correct IP or hostname in the `$INGRESS_IP` environment variable. If no value is returned, wait a short while and try again.

 {{% notice Note %}}
-If your ingress controller service has a different name or namespace, adjust the command accordingly. For example, some installations use `nginx-ingress-controller` or place it in a different namespace.
+If your ingress controller service uses a different name or namespace, update the command accordingly. For example, some installations use `nginx-ingress-controller` or place it in a different namespace. {{% /notice %}} -## Step 1 — Create the application and Ingress +## Step 2 — Deploy the application and configure Ingress -Let's start with deploying an application that responds to an incoming HTTP server and is exposed via Ingress. You can check the source code of the application on [GitHub](https://github.com/kedify/examples/tree/main/samples/http-server). +Now you will deploy a simple HTTP server and expose it using an Ingress resource. The source code for this application is available on [GitHub](https://github.com/kedify/examples/tree/main/samples/http-server). #### Deploy the application -Run the following command to deploy our application: +Run the following command to deploy your application: ```bash cat <<'EOF' | kubectl apply -f - @@ -123,27 +122,27 @@ Notes: #### Verify the application is running correctly -Let's check that we have 1 replica of the application deployed and ready: +You will now check if you have 1 replica of the application deployed and ready: ```bash kubectl get deployment application ``` -In the output we should see 1 replica ready: -``` +In the output you should see 1 replica ready: +```output NAME READY UP-TO-DATE AVAILABLE AGE application 1/1 1 1 3m44s ``` #### Test the application -Hit the app to confirm the app is ready and routing works: +Once the application and Ingress are deployed, verify that everything is working correctly by sending a request to the exposed endpoint. 
Run the following command:

 ```bash
 curl -I -H "Host: application.keda" http://$INGRESS_IP
 ```

-You should see similar output:
-```
+If the routing is set up properly, you should see a response similar to:
+```output
 HTTP/1.1 200 OK
 Date: Thu, 11 Sep 2025 14:11:24 GMT
 Content-Type: text/html
@@ -151,9 +150,9 @@ Content-Length: 301
 Connection: keep-alive
 ```

-## Step 2 — Enable autoscaling with Kedify
+## Step 3 — Enable autoscaling with Kedify

-The application is currectly running, Now we will enable autoscaling on this app, we will scale from 0 to 10 replicas. No request shall be lost at any moment. To do that, please run the following command to deploy our `ScaledObject`:
+The application is now running. Next, you will enable autoscaling so that it can scale dynamically between 0 and 10 replicas. Kedify ensures that no requests are dropped during scaling. Apply the `ScaledObject` by running the following command:

 ```bash
 cat <<'EOF' | kubectl apply -f -
@@ -193,25 +192,25 @@ spec:
 EOF
 ```

-What the key fields do:
-- `type: kedify-http` — Use Kedify’s HTTP scaler.
-- `hosts`, `pathPrefixes` — Which requests to observe for scaling.
-- `service`, `port` — The Service and port receiving traffic.
-- `scalingMetric: requestRate` and `targetValue: 10` — Target 1000 req/s (per granularity/window) before scaling out.
-- `minReplicaCount: 0` — Allows scale-to-zero when idle.
-- `trafficAutowire: ingress` — Lets Kedify auto-wire your Ingress to the kedify-proxy.
+Key fields explained:
+- `type: kedify-http` — Specifies that Kedify’s HTTP scaler should be used.
+- `hosts`, `pathPrefixes` — Define which requests are monitored for scaling decisions.
+- `service`, `port` — Identify the Kubernetes Service and port that will receive the traffic.
+- `scalingMetric: requestRate` and `targetValue: 10` — Scale out when the observed request rate exceeds the target value (here, 10 requests per second, evaluated over the scaler's configured granularity and window).
+- `minReplicaCount: 0` — Enables scale-to-zero when there is no traffic. +- `trafficAutowire: ingress` — Automatically wires your Ingress to the Kedify proxy for seamless traffic management. -After applying, the ScaledObject will appear in the Kedify dashboard (https://dashboard.kedify.io/). +After applying, the `ScaledObject` will appear in the Kedify dashboard (https://dashboard.kedify.io/). ![Kedify Dashboard With ScaledObject](images/scaledobject.png) -## Step 3 — Send traffic and observe scaling +## Step 4 — Send traffic and observe scaling -Becuase we are not sending any traffic to our application, after some time, it should be scaled to zero. +Since no traffic is currently being sent to the application, it will eventually scale down to zero replicas. #### Verify scale to zero -Run this command and wait until there is 0 replicas: +To confirm that the application has scaled down, run the following command and watch until the number of replicas reaches 0: ```bash watch kubectl get deployment application -n default @@ -224,31 +223,35 @@ Every 2,0s: kubectl get deployment application -n default NAME READY UP-TO-DATE AVAILABLE AGE application 0/0 0 0 110s ``` +This continuously monitors the deployment status in the default namespace. Once traffic stops and the idle window has passed, you should see the application deployment report 0/0 replicas, indicating that it has successfully scaled to zero. #### Verify the app can scale from zero -Now, hit the app again, it should be scaled to 1 replica and return back correct response: +Next, test that the application can scale back up from zero when traffic arrives. Send a request to the app: + ```bash curl -I -H "Host: application.keda" http://$INGRESS_IP ``` - -You should see a 200 OK response. Next, generate sustained load. You can use `hey` (or a similar tool): +The application should scale from 0 → 1 replica automatically. +You should receive an HTTP 200 OK response, confirming that the service is reachable again. 
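If you prefer scripting this cold-start check over watching the output by hand, a small polling helper can wait for the first ready replica. The helper below is a sketch of my own: it is generic (it runs whatever replica-count command you pass it), and the commented invocation shows how it might be pointed at the tutorial's deployment — treat the jsonpath expression and the timeout as assumptions to adapt:

```bash
# Poll a "ready replicas" command until it reports >= 1 or we time out.
wait_for_scale_up() {
  attempts="$1"; shift
  while [ "$attempts" -gt 0 ]; do
    ready="$("$@" 2>/dev/null)"
    if [ "${ready:-0}" -ge 1 ] 2>/dev/null; then
      echo "scaled up (ready=$ready)"
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 1
  done
  echo "timed out waiting for scale-up"
  return 1
}

# Against a real cluster you might run (assumption: deployment "application"
# in the default namespace, up to 60 seconds of polling):
# wait_for_scale_up 60 kubectl get deployment application -o jsonpath='{.status.readyReplicas}'
```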
 #### Test higher load

+Now, generate a heavier, sustained load against the application. You can use `hey` (or a similar benchmarking tool):
+
 ```bash
 hey -n 40000 -c 200 -host "application.keda" http://$INGRESS_IP
 ```

-While the load runs, watch replicas change:
+While the load test is running, open another terminal and monitor the deployment replicas in real time:

 ```bash
 watch kubectl get deployment application -n default
 ```

-For example something like this:
+You will see the number of replicas change dynamically. For example:

-```
+```output
 Every 2,0s: kubectl get deployment application -n default

 NAME READY UP-TO-DATE AVAILABLE AGE
@@ -259,23 +262,22 @@ Expected behavior:
 - On bursty load, Kedify scales the Deployment up toward `maxReplicaCount`.
 - When traffic subsides, replicas scale down. After the cooldown, they can return to zero.

-You can also observe traffic and scaling in the Kedify dashboard:
+You can also monitor traffic and scaling in the Kedify dashboard:

 ![Kedify Dashboard ScaledObject Detail](images/load.png)

 ## Clean up

+When you have finished testing, remove the resources created in this Learning Path to free up your cluster:
+
 ```bash
 kubectl delete scaledobject application
 kubectl delete ingress application-ingress
 kubectl delete service application-service
 kubectl delete deployment application
 ```
+This will delete the `ScaledObject`, Ingress, Service, and Deployment associated with the demo application.

 ## Next steps

-Explore the official Kedify [How-to guides](https://docs.kedify.io/how-to/) for more configurations such as Gateway API, Istio VirtualService, or OpenShift Routes.
-
-### See also
-
-- Kedify documentation: https://docs.kedify.io
+To go further, you can explore the Kedify [How-to guides](https://docs.kedify.io/how-to/) for more configurations such as Gateway API, Istio VirtualService, or OpenShift Routes.
From 074213f6df4fed2ae9900b01fd61f125bc64f63f Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Thu, 25 Sep 2025 16:22:24 -0400
Subject: [PATCH 6/7] Update install-kedify-helm.md

---
 .../kedify-http-autoscaling/install-kedify-helm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md
index 273da1405b..0d65234874 100644
--- a/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md
+++ b/content/learning-paths/servers-and-cloud-computing/kedify-http-autoscaling/install-kedify-helm.md
@@ -12,7 +12,7 @@ For more details and all installation methods on Arm, you can refer to the [Kedi

 You will need:

-- A running Kubernetes cluster (kind, minikube, EKS, GKE, AKS, etc.). This can be on any cloud service provider.
+- A running Kubernetes cluster (for example kind, minikube, EKS, GKE, or AKS), hosted on any cloud provider or in a local environment.
 - kubectl and helm installed and configured to communicate with your cluster
 - A Kedify Service account (https://dashboard.kedify.io/) to obtain Organization ID and API Key — log in or create an account if you don’t have one

From cde4f104e09ca7725271ff360861d90502f94d27 Mon Sep 17 00:00:00 2001
From: pareenaverma
Date: Thu, 25 Sep 2025 16:23:55 -0400
Subject: [PATCH 7/7] Update contributors.csv

---
 assets/contributors.csv | 1 +
 1 file changed, 1 insertion(+)

diff --git a/assets/contributors.csv b/assets/contributors.csv
index 6d9123056d..a149228b12 100644
--- a/assets/contributors.csv
+++ b/assets/contributors.csv
@@ -103,3 +103,4 @@ Rui Chang,,,,,
 Alejandro Martinez Vicente,Arm,,,,
 Mohamad Najem,Arm,,,,
 Zenon Zhilong Xiu,Arm,,zenon-zhilong-xiu-491bb398,,
+Zbynek Roubalik,Kedify,,,,