Kubernetes autoscaling with custom metrics

In this demo we will deploy an app called mockmetrics, which generates a count at /metrics. These metrics will be scraped by Prometheus. With the help of k8s-prometheus-adapter, we will create the custom.metrics.k8s.io APIService, which will then be used by the HPA (Horizontal Pod Autoscaler) to scale the deployment of the mockmetrics app (increase the number of replicas).
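The mockmetrics server exposes this count in the standard Prometheus exposition format. The exact output is not reproduced in this README, but a scrape of /metrics would look roughly like this (the HELP and TYPE lines are illustrative):

    # HELP total_hit_count Total hit count served by the mockmetrics app (illustrative)
    # TYPE total_hit_count gauge
    total_hit_count 10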

Prerequisite

  • You have a running Kubernetes cluster somewhere with kubectl configured to access it
  • Clone the repo
    git clone https://github.com/infracloudio/kubernetes-autoscaling.git
    cd kubernetes-autoscaling
    
  • If you are using GKE to create the cluster, make sure you have created a ClusterRoleBinding with the 'cluster-admin' role (instructions); see the command sketch below
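    On GKE this typically boils down to a single command (the binding name is arbitrary and your account is read from the active gcloud configuration):

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole=cluster-admin \
        --user=$(gcloud config get-value account)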

Installing and configuring helm

We will be using helm to install some of the components we need for this demo

  • Install helm by following these instructions
  • Create ServiceAccount and ClusterRoleBinding for tiller
    kubectl apply -f deploy/helm/helm-tiller-rbac.yaml
    
    More information about this. A sketch of a typical tiller RBAC manifest is shown after this list.
  • Install tiller in cluster
    helm init --service-account tiller
    
  • Verify the installation and make sure the version is 2.10+
    $ helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
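For reference, deploy/helm/helm-tiller-rbac.yaml is not reproduced in this README; a typical tiller RBAC manifest looks something like the sketch below (the actual file in the repository may differ):

    # ServiceAccount for tiller in kube-system
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    # Bind the cluster-admin ClusterRole to the tiller ServiceAccount
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: kube-system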

Installing prometheus-operator and Prometheus

Now we will install prometheus-operator, which will also deploy an instance of Prometheus with the help of the operator

  • Install prometheus-operator

    This will install prometheus-operator in the monitoring namespace and create CustomResourceDefinitions for AlertManager, Prometheus, ServiceMonitor, etc.

    $ helm install \
      --name mon \
      --namespace monitoring \
      stable/prometheus-operator
    
    $ kubectl get crd --namespace monitoring
    NAME                                    CREATED AT
    alertmanagers.monitoring.coreos.com     2018-11-22T10:26:55Z
    prometheuses.monitoring.coreos.com      2018-11-22T10:26:55Z
    prometheusrules.monitoring.coreos.com   2018-11-22T10:26:56Z
    servicemonitors.monitoring.coreos.com   2018-11-22T10:26:56Z
  • Check if all the components are deployed properly

    $ kubectl get pods --namespace monitoring
    NAME                                                  READY     STATUS    RESTARTS   AGE
    alertmanager-mon-prometheus-operator-alertmanager-0   2/2       Running   0          6m
    mon-grafana-f7c558d65-wwbrl                           3/3       Running   0          6m
    mon-kube-state-metrics-75b445797f-7jnzg               1/1       Running   0          6m
    mon-prometheus-node-exporter-n2zmq                    1/1       Running   0          6m
    mon-prometheus-operator-operator-587ccd9566-2ddq9     1/1       Running   0          6m
    prometheus-mon-prometheus-operator-prometheus-0       3/3       Running   1          6m

Deploying the mockmetrics application

It's a simple web server written in Go which exposes the total hit count at the /metrics endpoint. We will create a Deployment and a Service for it.
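The Deployment and Service manifests live under deploy/metrics-app/ and are not reproduced here; based on the names, labels and ports used elsewhere in this demo, the Service looks roughly like the sketch below (the container targetPort is an assumption):

    apiVersion: v1
    kind: Service
    metadata:
      name: mockmetrics-service
      labels:
        app: mockmetrics-app
    spec:
      selector:
        app: mockmetrics-app
      ports:
      # the port name matches the ServiceMonitor endpoint shown below
      - name: metrics-svc-port
        port: 80
        targetPort: 8080   # assumed; the app's container port is not shown in this README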

  • This will create a Deployment, Service and HorizontalPodAutoscaler in the default namespace, and a ServiceMonitor in the monitoring namespace

    $ kubectl create -f deploy/metrics-app/
    deployment.apps "mockmetrics-deploy" created
    horizontalpodautoscaler.autoscaling "mockmetrics-app-hpa" created
    servicemonitor.monitoring.coreos.com "mockmetrics-sm" created
    service "mockmetrics-service" created
    
    $ kubectl get svc,hpa
    NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    mockmetrics-service   ClusterIP   10.39.241.189   <none>        80/TCP    2m
    
    NAME                  REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   <unknown>/100   1         10        1          2m

    The <unknown> field will have a value once we deploy the custom metrics API server.

    The ServiceMonitor will be picked up by Prometheus; it tells Prometheus to scrape the metrics from the mockmetrics app at /metrics every 10s (the interval set in the ServiceMonitor below). Note that the ServiceMonitor is created in the same namespace as Prometheus.

    deploy/metrics-app/mockmetrics-service-monitor.yaml
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: mockmetrics-sm
      namespace: monitoring
      labels:
        release: mon
    spec:
      jobLabel: mockmetrics
      selector:
        matchLabels:
          app: mockmetrics-app
      namespaceSelector:
        matchNames:
        - default
      endpoints:
      - port: metrics-svc-port
        interval: 10s
        path: /metrics

    Let's take a look at the HorizontalPodAutoscaler

    deploy/metrics-app/mockmetrics-hpa.yaml
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    metadata:
      name: mockmetrics-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: mockmetrics-deploy
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Object
        object:
          target:
            kind: Service
            name: mockmetrics-service
          metricName: total_hit_count
          targetValue: 100

    This will increase the number of replicas of mockmetrics-deploy when the metric total_hit_count associated with the Service mockmetrics-service crosses the targetValue of 100. The scaling formula the HPA uses is sketched after this list.

  • Check if the mockmetrics-service appears as a target in the Prometheus dashboard

    $ kubectl port-forward svc/mon-prometheus-operator-prometheus 9090:9090 --namespace monitoring
    Forwarding from 127.0.0.1:9090 -> 9090
    Forwarding from [::1]:9090 -> 9090

    Head over to http://localhost:9090/targets
    It should look similar to this:

    [Screenshot: prometheus-dashboard-targets]
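As mentioned above, the HPA controller compares the current value of total_hit_count against the target of 100 and, roughly, computes the desired replica count as:

    desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)

The controller also applies a tolerance and limits how fast it scales up, so the replica count in the demo output later grows more gradually than this formula alone would suggest.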

Deploying the custom metrics API server

  • Create the resources required for deploying the custom metrics API server using the prometheus-adapter
    $ kubectl create -f deploy/custom-metrics-server/
    namespace "custom-metrics" created
    configmap "adapter-config" created
    serviceaccount "custom-metrics-apiserver" created
    clusterrolebinding.rbac.authorization.k8s.io "custom-metrics:system:auth-delegator" created
    rolebinding.rbac.authorization.k8s.io "custom-metrics-auth-reader" created
    clusterrolebinding.rbac.authorization.k8s.io "custom-metrics-resource-reader" created
    clusterrole.rbac.authorization.k8s.io "custom-metrics-server-resources" created
    clusterrole.rbac.authorization.k8s.io "custom-metrics-resource-reader" created
    clusterrolebinding.rbac.authorization.k8s.io "hpa-controller-custom-metrics" created
    deployment.apps "custom-metrics-apiserver" created
    service "api" created
    apiservice.apiregistration.k8s.io "v1beta1.custom.metrics.k8s.io" created
    This will create all the resources in the custom-metrics namespace
    • custom-metrics-server/custom-metrics-server-config.yaml
      This file contains the ConfigMap used to create the configuration file for the adapter, which configures how metrics are fetched from Prometheus and how they are associated with Kubernetes resources. More details about writing the configuration can be found here, along with a walkthrough of the configuration. A sketch of a typical rule is shown after this list.
    • custom-metrics-server/custom-metrics-server-rbac.yaml
      Contains the ServiceAccount, ClusterRoles, RoleBindings and ClusterRoleBindings that grant the required permissions to the adapter
    • custom-metrics-server/custom-metrics-server.yaml
      This contains the Deployment of the adapter and a Service to expose it. It also contains the APIService definition, which is part of the API aggregation layer and registers the v1beta1.custom.metrics.k8s.io API.
  • Check if everything is running as expected
    $ kubectl get svc --namespace custom-metrics
    NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    api       ClusterIP   10.39.243.202   <none>        443/TCP   5h
    
    $ kubectl get apiservice | grep v1beta1.custom.metrics.k8s.io
    v1beta1.custom.metrics.k8s.io          5h
  • Check if the metrics are getting collected by querying the custom.metrics.k8s.io/v1beta1 API
    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/*/total_hit_count"
    {"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/%2A/total_hit_count"},"items":[{"describedObject":{"kind":"Service","namespace":"default","name":"mockmetrics-service","apiVersion":"/__internal"},"metricName":"total_hit_count","timestamp":"2018-08-01T11:30:39Z","value":"0"}]}

Scaling the application

  • Check the mockmetrics-app-hpa

    $ kubectl get hpa
    NAME                  REFERENCE                       TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   0/100     1         10        1          2h
  • The mockmetrics application has the following endpoints

    • /scale/up: keeps increasing the value of total_hit_count each time /metrics is accessed
    • /scale/down: starts decreasing the value
    • /scale/stop: stops increasing or decreasing the value
  • Open a new terminal tab

    $ kubectl port-forward svc/mockmetrics-service 8080:80 &
    Forwarding from 127.0.0.1:8080 -> 8080
    Forwarding from [::1]:8080 -> 8080
    
    $ curl localhost:8080/scale/
    stop

    Let's set the application to increase the counter

    $ curl localhost:8080/scale/up
    Going up!

    As Prometheus is configured to scrape the metrics every 10s, the value of total_hit_count will keep changing. (You can also watch the raw metric directly in Prometheus; see the query after these steps.)

  • Now, in a different terminal tab, let's watch the HPA

    $ kubectl get hpa -w
    NAME                  REFERENCE                       TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   0/100     1         10        1          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   2/100     1         10        1          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   20/100    1         10        1          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   56/100    1         10        1          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   110/100   1         10        1          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   90/100    1         10        2          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   126/100   1         10        2          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   162/100   1         10        2          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   270/100   1         10        2          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   306/100   1         10        2          11h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   171/100   1         10        4          11h
    ...

    Once the value is greater than the target, the HPA will automatically increase the number of replicas of mockmetrics-deploy.

  • To bring the value down, execute the following command in the first terminal tab

    $ curl localhost:8080/scale/down
    Going down :P
    
    $ kubectl get hpa -w
    NAME                  REFERENCE                       TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    ...
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   9/100     1         10        8          12h
    mockmetrics-app-hpa   Deployment/mockmetrics-deploy   0/100     1         10        8          12h
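While the counter is going up or down, you can also watch the raw metric in the Prometheus UI (port-forwarded earlier on http://localhost:9090) with a query along these lines:

    total_hit_count{namespace="default", service="mockmetrics-service"}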

Other references and credits

Licensing

This repository is licensed under Apache License Version 2.0. See LICENSE for the full license text.
