
Autoscale-NGINX-Ingress-Controller

Reducing Kubernetes Latency with Autoscaling

1. Configure a Simple App on a Kubernetes Cluster

  • Create a Minikube Cluster: minikube start

  • Install the Podinfo App: kubectl apply -f deploy_pulp.yaml (a sketch of a possible manifest is shown below)

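The deploy_pulp.yaml manifest itself is not reproduced in this README. As a rough, hypothetical sketch of what it could contain, assuming the stock podinfo image (default HTTP port 9898) and the NodePort 30001 shown later by minikube service --all:

# Hypothetical sketch only -- the actual manifest is deploy_pulp.yaml in this repo.
# Assumes the public stefanprodan/podinfo image and a NodePort Service on 30001,
# the port listed later by `minikube service --all`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
      - name: podinfo
        image: stefanprodan/podinfo
        ports:
        - containerPort: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  type: NodePort
  selector:
    app: podinfo
  ports:
  - port: 80
    targetPort: 9898
    nodePort: 30001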

  • Run: kubectl get pods

  • Run: minikube service --all

2. Use NGINX Ingress Controller to Route Traffic to the App

  • Add the NGINX repo to Helm: helm repo add nginx-stable https://helm.nginx.com/stable

  • Install NGINX Ingress Controller:
helm install main nginx-stable/nginx-ingress --set controller.watchIngressWithoutClass=true --set controller.service.type=NodePort --set controller.service.httpPort.nodePort=30005

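For reference, the same --set flags can be written as a Helm values file. The sketch below is hypothetical: the prometheus.create key is an assumption not present in the command above (in the nginx-stable/nginx-ingress chart it enables the /metrics exporter on port 9113 that step 3 queries), and the exact key layout depends on the chart version.

# Hypothetical values.yaml equivalent of the --set flags above.
# prometheus.create is an extra assumption (not in the original command); it turns on
# the exporter endpoint on port 9113 that is queried in step 3.
controller:
  watchIngressWithoutClass: true
  service:
    type: NodePort
    httpPort:
      nodePort: 30005
prometheus:
  create: true

It would be applied with helm install main nginx-stable/nginx-ingress -f values.yaml.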

  • To confirm the deployment, run: kubectl get pods

  • Deploy the Ingress resource: kubectl apply -f ingress_pulp.yaml (a sketch of a possible manifest is shown below)

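The ingress_pulp.yaml manifest is not reproduced here either. A minimal hypothetical sketch that routes HTTP requests reaching the controller to the podinfo Service could look like this; no ingressClassName is needed because the controller was installed with watchIngressWithoutClass=true:

# Hypothetical sketch only -- the actual manifest is ingress_pulp.yaml in this repo.
# No ingressClassName is set; the controller still picks it up because it was
# installed with controller.watchIngressWithoutClass=true above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: podinfo
            port:
              number: 80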

3. Generate and Monitor Traffic

  • List the Available Metrics: kubectl get pods -o wide

  • Obtain the IP address of the NGINX Ingress Controller pod so that you can query its metrics endpoint.

NAME                                               READY   STATUS    RESTARTS        AGE   IP               NODE       NOMINATED NODE   READINESS GATES
main-nginx-ingress-b79c6fff9-s2fmt                 1/1     Running   0               22h   172.17.0.4       minikube   <none>           <none>
  • Create a temporary BusyBox pod and query the metrics endpoint from inside it:
kubectl run -ti --rm=true busybox --image=busybox
If you don't see a command prompt, try pressing enter.
/ # wget -qO- <IP_address>:9113/metrics

IP_address is the pod IP noted above, here 172.17.0.4 (the NGINX Ingress Controller pod).

  • Deploy Prometheus:

Run: helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Run: helm install prometheus prometheus-community/prometheus --set server.service.type=NodePort --set server.service.nodePort=30010

kubectl get pods
NAME                                               READY   STATUS             RESTARTS         AGE
main-nginx-ingress-b79c6fff9-s2fmt                 1/1     Running            0                22h
podinfo-5d76864686-r8dmh                           1/1     Running            0                22h
prometheus-alertmanager-67dfc6ff85-k7r56           2/2     Running            0                22h
prometheus-kube-state-metrics-748fc7f64-4z5st      1/1     Running            20 (131m ago)    22h
prometheus-node-exporter-wj894                     1/1     Running            0                22h
prometheus-pushgateway-6df8cfd5df-zc6tt            1/1     Running            8                22h
prometheus-server-6bd8b49ff8-n2lnt                 1/2     CrashLoopBackOff   15 (4m51s ago)   22h
  • Open Prometheus Dashboard in Default Browser: minikube service --all
|-----------|---------------------------------|--------------|-----------------------------|
| NAMESPACE |              NAME               | TARGET PORT  |             URL             |
|-----------|---------------------------------|--------------|-----------------------------|
| default   | main-nginx-ingress              | http/80      | http://192.168.59.153:30005 |
|           |                                 | https/443    | http://192.168.59.153:30644 |
| default   | podinfo                         |           80 | http://192.168.59.153:30001 |
| default   | prometheus-alertmanager         | No node port |
| default   | prometheus-kube-state-metrics   | No node port |
| default   | prometheus-node-exporter        | No node port |
| default   | prometheus-pushgateway          | No node port |
| default   | prometheus-server               | http/80      | http://192.168.59.153:30010 |
|-----------|---------------------------------|--------------|-----------------------------|

  • Type nginx_ingress_nginx_connections_active in the search bar to see the current value of the active connections metric.

  • Install Locust: kubectl apply -f locust.yaml (a sketch of a possible manifest is shown below)
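
The locust.yaml manifest is not reproduced in this README. A hypothetical sketch, assuming the public locustio/locust image (web UI on port 8089), a trivial locustfile that just requests /, and the NodePort 30015 shown by minikube service --all below:

# Hypothetical sketch only -- the actual manifest is locust.yaml in this repo.
# Assumes the public locustio/locust image (web UI on 8089) and NodePort 30015,
# the port listed by `minikube service --all` below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-script
data:
  locustfile.py: |
    from locust import HttpUser, task

    class PodinfoUser(HttpUser):
        @task
        def index(self):
            # Hits the app through whatever Host is entered in the Locust UI.
            self.client.get("/")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust
spec:
  selector:
    matchLabels:
      app: locust
  template:
    metadata:
      labels:
        app: locust
    spec:
      containers:
      - name: locust
        image: locustio/locust
        args: ["-f", "/locust/locustfile.py"]
        ports:
        - containerPort: 8089
        volumeMounts:
        - name: locust-script
          mountPath: /locust
      volumes:
      - name: locust-script
        configMap:
          name: locust-script
---
apiVersion: v1
kind: Service
metadata:
  name: locust
spec:
  type: NodePort
  selector:
    app: locust
  ports:
  - port: 8089
    targetPort: 8089
    nodePort: 30015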

  • Open Locust in a browser: minikube service --all

|-----------|---------------------------------|--------------|-----------------------------|
| NAMESPACE |              NAME               | TARGET PORT  |             URL             |
|-----------|---------------------------------|--------------|-----------------------------|
| default   | kubernetes                      | No node port |
| default   | locust                          |         8089 | http://192.168.59.153:30015 |
| default   | main-nginx-ingress              | http/80      | http://192.168.59.153:30005 |
...

  • Enter the following values in the fields:
Number of users – 1000
Spawn rate – 10
Host – http://main-nginx-ingress
  • Click the Start swarming button to send traffic to the Podinfo app.

  • Return to the Prometheus dashboard to see how NGINX Ingress Controller responds. You may have to perform a new query for nginx_ingress_nginx_connections_active to see any change.

4. Autoscale NGINX Ingress Controller

  • Install KEDA:

Run: helm repo add kedacore https://kedacore.github.io/charts
Run: helm install keda kedacore/keda

kubectl get pods 
NAME                                               READY   STATUS             RESTARTS         AGE
keda-operator-7879dcd589-mz64f                     1/1     Running            21 (3m30s ago)   22h
keda-operator-metrics-apiserver-54746f8fdc-gr5rx   1/1     Running            16 (8m14s ago)   22h
locust-77c699c94d-tpkd8                            1/1     Running            0                22h
main-nginx-ingress-b79c6fff9-s2fmt                 1/1     Running            0                22h
podinfo-5d76864686-r8dmh                           1/1     Running            0                22h
prometheus-alertmanager-67dfc6ff85-k7r56           2/2     Running            0                22h
prometheus-kube-state-metrics-748fc7f64-4z5st      1/1     Running            27 (53s ago)     22h
prometheus-node-exporter-wj894                     1/1     Running            0                22h
prometheus-pushgateway-6df8cfd5df-zc6tt            1/1     Running            13 (3m9s ago)    22h
prometheus-server-6bd8b49ff8-n2lnt                 1/2     Running            17 (7m4s ago)    22h
  • Create an Autoscaling Policy:

Run: kubectl apply -f scaled-object.yaml
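
The policy itself is not reproduced in this README. A hypothetical sketch of a KEDA ScaledObject for this setup, scaling the main-nginx-ingress Deployment on the active-connections metric queried in Prometheus above (the threshold, replica bounds, and intervals are illustrative assumptions, not values taken from the repo):

# Hypothetical sketch only -- the actual policy is scaled-object.yaml in this repo.
# The threshold, replica bounds, and polling/cooldown values are illustrative assumptions.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: nginx-scale
spec:
  scaleTargetRef:
    kind: Deployment
    name: main-nginx-ingress          # the NGINX Ingress Controller Deployment from step 2
  minReplicaCount: 1
  maxReplicaCount: 20
  cooldownPeriod: 30
  pollingInterval: 1
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-server   # the prometheus-server Service from step 3
      query: sum(nginx_ingress_nginx_connections_active)
      threshold: "100"

KEDA translates a ScaledObject into a HorizontalPodAutoscaler behind the scenes, which is why kubectl get hpa is used further below to watch the scaling.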

  • Return to the Locust server in your browser. Enter the following values in the fields and click the Start swarming button:
Number of users – 2000
Spawn rate – 10
Host – http://main-nginx-ingress
  • Return to the Prometheus and Locust dashboards and watch the number of NGINX Ingress Controller pods scale up and down as the active-connections count rises and falls.

  • Simulate a Traffic Surge and Observe the Effect of Autoscaling on Performance:

Run: kubectl get hpa

Done!!!
