
Monitoring metrics of the cluster using Mimir, Prometheus, and Grafana

Grafana Mimir Installation on Kubernetes Cluster

This guide explains how to install Grafana Mimir on a Kubernetes cluster using Helm.

Installation Steps:

1. Install Mimir-distributed via Helm

2. Install kube-prometheus-stack via Helm

3. Verify that the Prometheus instances are scraping samples from their targets and pushing them to Grafana Mimir using Prometheus' remote write API.

4. Configure a ServiceMonitor for Prometheus, labeled release: kube-prometheus-stack, so the Prometheus operator can discover which exporter services to scrape.

5. Verify through the Prometheus interface that the Mimir metrics are being scraped.

6. Follow the instructions in the Grafana Mimir documentation to build the Mimir dashboards.

1. Install Mimir-distributed via Helm

Add the Grafana Helm repository, which hosts the mimir-distributed chart:

helm repo add grafana https://grafana.github.io/helm-charts
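
To confirm the chart is visible after adding the repository, a quick search works (this assumes the repo was added under the name grafana as above):

helm search repo grafana/mimir-distributed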

Install Mimir using Helm: create a values file (let's name it mimir.yaml) with the Mimir configuration. Here's an example of how it might look:

metaMonitoring:
  serviceMonitor:
    enabled: true
    labels:
      release: kube-prometheus-stack
mimir:
  structuredConfig:
    alertmanager_storage:
      s3:
        access_key_id: access_key
        bucket_name: mimir-alert
        endpoint: s3.eu-central-1.amazonaws.com
        secret_access_key: secret_access
    blocks_storage:
      backend: s3
      bucket_store:
        sync_dir: /data/tsdb-sync
      s3:
        access_key_id: access_key
        bucket_name: mimir-tsdb
        endpoint: s3.eu-central-1.amazonaws.com
        secret_access_key: secret_access
      tsdb:
        dir: /data/tsdb
    compactor:
      data_dir: /data
    frontend:
      align_queries_with_step: true
      log_queries_longer_than: 10s
    ingester:
      instance_limits:
        max_ingestion_rate: 0
      ring:
        final_sleep: 0s
        num_tokens: 512
    ingester_client:
      grpc_client_config:
        max_recv_msg_size: 104857600
        max_send_msg_size: 104857600
    limits:
      ingestion_rate: 40000
      max_global_series_per_metric: 0
      max_global_series_per_user: 0
    memberlist:
      abort_if_cluster_join_fails: false
      compression_enabled: false
    ruler:
      alertmanager_url: dnssrvnoa+http://_http-metrics._tcp.{{ template "mimir.fullname"
        . }}-alertmanager-headless.{{ .Release.Namespace }}.svc.{{ .Values.global.clusterDomain
        }}/alertmanager
      enable_api: true
      rule_path: /data
    ruler_storage:
      s3:
        access_key_id: access_key
        bucket_name: mimir-ruler-test
        endpoint: s3.eu-central-1.amazonaws.com
        secret_access_key: secret_access
    runtime_config:
      file: /var/{{ include "mimir.name" . }}/runtime.yaml
    server:
      grpc_server_max_concurrent_streams: 1000
      grpc_server_max_recv_msg_size: 104857600
      grpc_server_max_send_msg_size: 104857600
minio:
  enabled: false
querier:
  replicas: 3
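
The three buckets referenced above (mimir-alert, mimir-ruler-test, mimir-tsdb) must exist before Mimir can write to them. A minimal sketch for creating them, assuming the AWS CLI is configured with credentials for the same account and region:

aws s3 mb s3://mimir-alert --region eu-central-1
aws s3 mb s3://mimir-ruler-test --region eu-central-1
aws s3 mb s3://mimir-tsdb --region eu-central-1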

Update the Helm repositories (repositories are not namespaced, so no namespace flag is needed):

helm repo update

Install Mimir using Helm, specifying custom values from mimir.yaml (--create-namespace creates the mimir-test namespace if it does not exist yet):

helm -n mimir-test upgrade mimir-test grafana/mimir-distributed --values=mimir.yaml --install --create-namespace
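
Once the release is installed, a quick way to check that the Mimir components are up. The names below reflect the mimir-test release name used here (adjust if yours differs); mimir-test-nginx is the gateway service that Prometheus will remote-write to in the next step:

kubectl get pods -n mimir-test
kubectl get svc mimir-test-nginx -n mimir-test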


This deploys Mimir with an S3 storage backend; kube-prometheus-stack, installed in the next step, is configured to send metrics to Mimir. Adjust the configuration for your specific environment and requirements.


Object storage: I configured three separate buckets named mimir-alert, mimir-ruler-test, and mimir-tsdb.


After installing the Helm chart and waiting a while, I could see data starting to show up in the buckets in my object store's web interface.
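
The same thing can be confirmed from the command line, assuming the AWS CLI has access to the buckets:

aws s3 ls s3://mimir-tsdb/ --recursive | head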


2. Install kube-prometheus-stack via Helm

Install kube-prometheus-stack: create a values file (let's name it kube-prometheus-stack-values.yaml) for kube-prometheus-stack.

The role of Prometheus: the Prometheus instances scrape samples from various targets and push them to Grafana Mimir using Prometheus' remote write API.

ServiceMonitor for Prometheus: exporter ServiceMonitors carry the label release: kube-prometheus-stack, which is what allows the Prometheus operator to discover which services to scrape. I verified through the Prometheus interface that the Mimir metrics were being scraped, and then followed the instructions in the Grafana Mimir documentation to build the Mimir dashboards.
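
For reference, this is roughly what such a ServiceMonitor looks like; the name, namespace, selector labels, and port below are hypothetical placeholders for an exporter you want scraped (the metaMonitoring.serviceMonitor block in mimir.yaml above already makes the chart generate equivalent objects for Mimir's own components):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-exporter            # hypothetical exporter name
  namespace: monitoring
  labels:
    release: kube-prometheus-stack  # label the Prometheus operator selects on
spec:
  selector:
    matchLabels:
      app: example-exporter         # must match the labels on the exporter's Service
  endpoints:
    - port: metrics                 # named port on that Service
      interval: 30s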

vi kube-prometheus-stack-values.yaml

prometheus:
  prometheusSpec:
    remoteWrite:
    - url: http://mimir-test-nginx.mimir-test.svc:80/api/v1/push
    externalLabels:
      environment: mimir

Install kube-prometheus-stack using Helm:

helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -f kube-prometheus-stack-values.yaml --namespace monitoring --create-namespace
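
To check that remote write is flowing, one option is to port-forward the Prometheus service and query its own remote-write metrics; the service name below follows the kube-prometheus-stack release naming and may differ in your setup:

kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090 -n monitoring
curl -s 'http://localhost:9090/api/v1/query?query=prometheus_remote_storage_samples_total'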


Access Grafana Dashboard

Port Forward Grafana Service

kubectl port-forward svc/kube-prometheus-stack-grafana 3000:80 -n monitoring

Open a web browser and go to http://localhost:3000.


Log in with the following credentials:

Username: admin
Password: prom-operator by default; it can also be retrieved with:

kubectl get secret kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" -n monitoring | base64 --decode ; echo

Add a Mimir data source in Grafana (Prometheus data source type) pointing at http://mimir-test-nginx.mimir-test.svc:80/prometheus
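
Alternatively, the data source can be provisioned through the kube-prometheus-stack values instead of adding it by hand; a minimal sketch, assuming the same gateway URL (additionalDataSources is passed through to the bundled Grafana):

grafana:
  additionalDataSources:
    - name: Mimir
      type: prometheus
      access: proxy
      url: http://mimir-test-nginx.mimir-test.svc:80/prometheus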


Add the default Mimir (Mimir mixin) dashboards to Grafana.

