
How to set up a monitoring environment (Kubernetes)

Chenyu Zhang edited this page Jun 2, 2022 · 1 revision

Background

Currently, the perf repository provides performance testing capabilities: it prepares data sets of different sizes on a freshly installed Harbor, runs API tests, and finally generates a performance report. Beyond the performance report itself, runtime metrics are also very important for analyzing performance problems; for example, we need resource usage and distributed tracing capabilities to help locate performance bottlenecks. This page shows an example of how to set up a monitoring environment.

Installation

System Requirements:

  1. a Kubernetes cluster
  2. the Helm CLI
  3. an NFS server
  4. Jaeger
  5. Prometheus
  6. Grafana
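
The CLI prerequisites can be checked up front with a small script (a sketch; `check_tool` is a hypothetical helper, not part of any of the tools above):

```shell
# Report whether each required CLI tool is on PATH
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

# kubectl and helm are required by the steps below
for tool in kubectl helm; do
  check_tool "$tool"
done
```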

Monitoring components

prometheus & grafana

prometheus-operator: https://github.com/prometheus-operator/prometheus-operator

kube-prometheus: https://github.com/prometheus-operator/kube-prometheus

Here we use kube-prometheus as an example. First, clone the kube-prometheus repository, which contains the manifests used below:

git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus

Next, install the Prometheus CRDs:

kubectl apply --server-side -f manifests/setup

Then install the manifests:

kubectl apply -f manifests/

jaeger

jaeger-operator: https://github.com/jaegertracing/jaeger-operator

Example Jaeger installation.

Install the Jaeger operator and its CRDs:

mkdir jaeger-operator && cd jaeger-operator

wget https://github.com/jaegertracing/jaeger-operator/releases/download/v1.29.1/jaeger-operator.yaml

kubectl create namespace observability

kubectl apply -f jaeger-operator.yaml -n observability

Install a Jaeger instance. Save the following manifest as jaeger-instance.yaml:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger
  namespace: observability
spec:
  query:
    serviceType: NodePort
  collector:
    serviceType: NodePort
  strategy: allInOne
  allInOne:
    image: jaegertracing/all-in-one:latest
    options:
      log-level: debug
  storage:
    type: memory
    options:
      memory:
        max-traces: 100000
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx

kubectl apply -f jaeger-instance.yaml -n observability

Once these components' containers are ready, open a browser at http://nodeIP:NodePort to access Grafana, and likewise at http://nodeIP:NodePort for Jaeger. For Grafana, Prometheus must be added as a data source; since Prometheus runs in the same cluster network as Grafana, the data source endpoint can be configured simply by service name (e.g. http://prometheus:NodePort). Then import dashboards into Grafana manually; there are some useful dashboards from the community.
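
As an alternative to adding the data source through the UI, Grafana can be provisioned declaratively. A sketch of such a provisioning file (the service name and port below assume the kube-prometheus defaults, prometheus-k8s on 9090 in the monitoring namespace; adjust to your cluster):

```yaml
# Grafana data source provisioning file, e.g. mounted under
# /etc/grafana/provisioning/datasources/ in the Grafana container.
# Service name/port assume kube-prometheus defaults; adjust as needed.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-k8s.monitoring.svc:9090
    isDefault: true
```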

postgresql exporter

Download the PostgreSQL exporter Helm chart: https://github.com/helm/charts/tree/master/stable/prometheus-postgres-exporter

define values.yaml like this:

serviceMonitor:
  enabled: true
  namespace: monitoring
  labels:
    app.kubernetes.io/part-of: kube-prometheus
config:
  datasource:
    # Specify one of both datasource or datasourceSecret
    host:
    user: postgres
    # Only one of password and passwordSecret can be specified
    password: changeit
    # Specify passwordSecret if DB password is stored in secret.
    passwordSecret: {}
    # Secret name
    #  name:
    # Password key inside secret
    #  key:
    port: "5432"
    database: ''
    sslmode: disable

helm install -f values.yaml pg-exporter . --namespace=harbor
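
If you prefer not to keep the database password in values.yaml, the chart also supports reading it from a Kubernetes secret via passwordSecret. A sketch of that variant (the secret name and key below are assumptions based on harbor-helm's internal database for a release named test-harbor; verify the actual names with kubectl get secret -n harbor):

```yaml
# datasource variant reading the password from a secret instead of
# plaintext; secret name/key below are assumptions -- verify in your cluster
config:
  datasource:
    host: test-harbor-database
    user: postgres
    passwordSecret:
      name: test-harbor-database
      key: POSTGRES_PASSWORD
    port: "5432"
    database: ''
    sslmode: disable
```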

Deploy harbor

Harbor: https://github.com/goharbor/harbor-helm

We can clone the harbor-helm repository to install Harbor; the values.yaml configuration should be modified before installing to enable the monitoring-related features.

harbor-helm values-prod.yaml:

expose:
  type: nodePort
  tls:
    enabled: false
  nodePort:
    ports:
      http:
        nodePort: 30004
      https:
        nodePort: 30005
      notary:
        nodePort: 30006
externalURL: http://192.168.136.71:30004
persistence:
  persistentVolumeClaim:
    registry:
      existingClaim: 
      storageClass: "managed-nfs-storage"
      subPath: "registry"
    chartmuseum:
      existingClaim: 
      storageClass: "managed-nfs-storage"
      subPath: "chartmuseum"
    jobservice:
      existingClaim: 
      storageClass: "managed-nfs-storage"
      subPath: "jobservice"
    database:
      existingClaim: 
      storageClass: "managed-nfs-storage"
      subPath: "database"
    redis:
      existingClaim: 
      storageClass: "managed-nfs-storage"
      subPath: "redis"
metrics:
  enabled: true
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
  serviceMonitor:
    enabled: true   

trace:
  enabled: true
  provider: jaeger
  sample_rate: 1
  jaeger:
    # jaeger supports two modes:
    #   collector mode (set endpoint, and username/password if needed)
    #   agent mode (set agent_host and agent_port)
    endpoint: http://192.168.136.71:31088/api/traces
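
For reference, here is a sketch of the agent-mode variant of the trace section (the host and port below are assumptions: 6831 is Jaeger's default compact-thrift agent port, and the agent service name depends on how your Jaeger instance is deployed):

```yaml
# agent-mode variant (sketch): Harbor sends spans to a Jaeger agent
# instead of the collector endpoint; host/port below are assumptions
trace:
  enabled: true
  provider: jaeger
  sample_rate: 1
  jaeger:
    agent_host: my-jaeger-agent.observability.svc
    agent_port: 6831
```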

Deploying Harbor is now very easy; we just need to execute the following shell command:

helm install -f values-prod.yaml test-harbor . --namespace=harbor