diff --git a/docs/content/en/docs/examples/metrics.md b/docs/content/en/docs/examples/metrics.md
index 16ac86ffc..d51cbfa75 100644
--- a/docs/content/en/docs/examples/metrics.md
+++ b/docs/content/en/docs/examples/metrics.md
@@ -6,164 +6,126 @@ description: >
   Demonstrate how to collect and expose ingress controller and haproxy metrics.
 ---
 
-{{% pageinfo %}}
-This is a `v0.10` example and need HAProxy Ingress `v0.10-snapshot.5` or above
-{{% /pageinfo %}}
-
-This example demonstrates how to configure [Prometheus](https://prometheus.io) to collect ingress controller and haproxy metrics, and also to configure a [Grafana](https://grafana.com) dashboard to expose these metrics.
+This example demonstrates how to configure [Prometheus](https://prometheus.io) and [Grafana](https://grafana.com) to collect and expose HAProxy and HAProxy Ingress metrics using [Prometheus Operator](https://prometheus-operator.dev).
 
 ## Prerequisites
 
-This document has the following prerequisite:
+This document requires only a Kubernetes cluster. HAProxy Ingress doesn't need to be installed beforehand; when installing it, use the [Helm chart]({{% relref "/docs/getting-started#installation" %}}).
 
-* A Kubernetes cluster with a running HAProxy Ingress controller v0.10 or above. See the [getting started]({{% relref "../getting-started" %}}) guide.
+## Configure Prometheus Operator
 
-## Configure the controller
+This section can be skipped if the Kubernetes cluster already has a running Prometheus Operator.
 
-HAProxy Ingress by default does not configure the haproxy's prometheus exporter. The patch below configures the haproxy's internal prometheus exporter in the port `9105`:
+The HAProxy Ingress installation configures Prometheus using a ServiceMonitor custom resource. This resource is used by [Prometheus Operator](https://prometheus-operator.dev) to configure Prometheus instances.
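+For readers unfamiliar with the custom resource, a minimal ServiceMonitor looks like the sketch below. The name, namespace, selector, and port here are illustrative only and do not match the exact resource the Helm chart renders:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: haproxy-ingress        # illustrative name
+  namespace: ingress-controller
+  labels:
+    release: prometheus        # must match the Prometheus instance's serviceMonitorSelector
+spec:
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: haproxy-ingress
+  endpoints:
+  - port: metrics              # named port of the target Service
+    interval: 30s
+```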
+The following steps deploy Prometheus Operator via the [`kube-prometheus-stack`](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) Helm chart.
 
-```
-kubectl --namespace ingress-controller patch configmap haproxy-ingress -p '{"data":{"prometheus-port":"9105"}}'
-```
+Create a file named `prometheus-operator-values.yaml` - change both hostnames to a name that resolves to the Kubernetes cluster:
 
-The following patch adds ports `9105` and `10254` to the HAProxy Ingress container. The port declaration is used by the Prometheus' service discovery:
+```yaml
+grafana:
+  enabled: true
+  ingress:
+    enabled: true
+    annotations:
+      kubernetes.io/ingress.class: haproxy
+    hosts:
+    - grafana.192.168.0.11.nip.io
+    tls:
+    - hosts:
+      - grafana.192.168.0.11.nip.io
+```
 
-Note: this patch will restart the controller!
+Add the `kube-prometheus-stack` helm repo:
 
 ```
-kubectl --namespace ingress-controller patch deployment haproxy-ingress -p '{"spec":{"template":{"spec":{"containers":[{"name":"haproxy-ingress","ports":[{"name":"exporter","containerPort":9105},{"name":"ingress-stats","containerPort":10254}]}]}}}}'
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
 ```
 
-## Deploy Prometheus
-
-This will create a Prometheus deployment with no resource limits, a configuration file which will scrape haproxy and also HAProxy Ingress metrics every `10s`, and also a role and rolebinding which allows Prometheus to discover haproxy and controller endpoints using k8s:
+Install the chart:
 
 ```
-kubectl create -f https://haproxy-ingress.github.io/docs/examples/metrics/prometheus.yaml
+helm install prometheus prometheus-community/kube-prometheus-stack \
+  --create-namespace --namespace monitoring \
+  -f prometheus-operator-values.yaml
 ```
 
 {{% alert title="Note" %}}
-This deployment has no persistent volume, so all the collected metrics will be lost if the pod is recreated.
+Bitnami also has a good Prometheus Operator [helm chart](https://github.com/bitnami/charts/tree/master/bitnami/kube-prometheus). Note however that its values file has a different syntax.
 {{% /alert %}}
 
-{{% alert title="Warning" color="warning" %}}
-If HAProxy Ingress wasn't deployed with Helm, change the following line in the `configmap/prometheus-cfg` resource, jobs `haproxy-ingress` and `haproxy-exporter`:
+## Configure HAProxy Ingress
 
-```diff
-  relabel_configs:
--  - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance]
-+  - source_labels: [__meta_kubernetes_pod_label_run]
-    regex: haproxy-ingress
-```
-
-This will ensure that Prometheus finds the controller pods.
-{{% /alert %}}
+The steps below configure the HAProxy Ingress Helm chart to add a new ServiceMonitor custom resource. This resource is responsible for scraping HAProxy and HAProxy Ingress metrics.
 
-Check if Prometheus is up and running:
-
-```
-kubectl --namespace ingress-controller get pod -lrun=prometheus -w
-```
-
-Check also if Prometheus found the haproxy and the controller endpoints:
-
-```
-kubectl --namespace ingress-controller port-forward svc/prometheus 9090:9090
-```
+Merge the content below into the actual `haproxy-ingress-values.yaml` file:
+
+```yaml
+controller:
+  stats:
+    enabled: true
+  metrics:
+    enabled: true
+  serviceMonitor:
+    enabled: true
+    labels:
+      release: prometheus
+    metrics:
+      relabelings:
+      - replacement: cl1
+        targetLabel: cluster
+      - sourceLabels: [__meta_kubernetes_pod_node_name]
+        targetLabel: hostname
+    ctrlMetrics:
+      relabelings:
+      - replacement: cl1
+        targetLabel: cluster
+      - sourceLabels: [__meta_kubernetes_pod_node_name]
+        targetLabel: hostname
+```
+
+There are two important configurations in the snippet above:
+
+* Added a label `release: prometheus` in the ServiceMonitor. HAProxy Ingress metrics will share the same Prometheus instance installed by Prometheus Operator.
+This can be changed to another dedicated instance, and must be checked if using another customized Prometheus Operator deployment.
+* Added relabels to HAProxy and HAProxy Ingress metrics. The HAProxy Ingress dashboard uses the `hostname` label to distinguish between controller instances, and the `cluster` label to distinguish controllers running on distinct clusters. The source of the name can be adjusted but the label name should be the same.
+
+Now upgrade the chart - change `upgrade` to `install` if HAProxy Ingress isn't installed yet:
+
+```
+helm upgrade haproxy-ingress haproxy-ingress/haproxy-ingress \
+  --create-namespace --namespace ingress-controller \
+  -f haproxy-ingress-values.yaml
+```
 
-Open [localhost:9090/targets](http://127.0.0.1:9090/targets) in your browser, all haproxy and controller instances should be listed, up, and green.
-
-## Deploy Grafana
-
-The following instruction will create a Grafana deployment with no resource limit, and also its service:
+## Compatibility
 
-```
-kubectl create -f https://haproxy-ingress.github.io/docs/examples/metrics/grafana.yaml
-```
+This dashboard works with HAProxy's internal Prometheus exporter. Follow these steps to adjust the scrape config and the dashboard if using [Prometheus' HAProxy Exporter](https://github.com/prometheus/haproxy_exporter):
 
-Check if Grafana is up and running:
+Change the metric name of "Backend status / Top 5 max/avg connection time" to `haproxy_backend_http_connect_time_average_seconds`.
 
-```
-kubectl --namespace ingress-controller get pod -lrun=grafana -w
-```
-
-Create the ingress which will expose Grafana. Change `HOST` below to a domain of the cluster, or just change the inner IP number to the IP of the HAProxy Ingress node:
-
-```
-HOST=grafana.192.168.1.1.nip.io
-kubectl create -f - <
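+For reference, Prometheus Operator translates the ServiceMonitor `relabelings` configured above into scrape-time `relabel_configs` in the generated Prometheus configuration. A sketch of the resulting fragment (illustrative, not the literal generated config) looks like:
+
+```yaml
+relabel_configs:
+# static replacement: every scraped sample gets cluster="cl1"
+- replacement: cl1
+  target_label: cluster
+# copy the node name discovered from the pod metadata into "hostname"
+- source_labels: [__meta_kubernetes_pod_node_name]
+  target_label: hostname
+```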