
metrics-observability-pipeline

Set up your local Observability - metrics pipeline
Report Bug · Request Feature


Table of Contents
  1. About The Project
  2. Open Source tools used
  3. Getting started
  4. Contributing
  5. FAQs
  6. License
  7. Contact
  8. Acknowledgments

About The Project

Cloud computing has grown tremendously over the past 10 years. With services distributed across swarms of virtual machines, it has become necessary to set up a solution to monitor the vitals of these machines and the services running on them.
Observability into these services is therefore critical to achieving five-nines availability (meaning the service is available 99.999% of the time, i.e. unavailable for no more than about 5 minutes and 15 seconds a year).

Observability has three main pillars: metrics, logs, and traces. While logs are the more common tool for gauging behaviour, metrics are just as important, if not more so. Metrics are time-series data composed of the vitals reported by your service or machine. For example, the number_of_cpu_cores_used by a service can be represented as [(t1, 0.1), (t2, 0.23), (t3, 0.05), (t4, 0.3)], where [t1, t2, t3, t4] are the timestamps at which the CPU core usages [0.1, 0.23, 0.05, 0.3] were reported.
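As a concrete illustration, this is roughly what such a metric looks like in the Prometheus exposition format at a single point in time (the metric and label names below are hypothetical samples, not output from this repo):

```
# HELP number_of_cpu_cores_used CPU cores used by the service
# TYPE number_of_cpu_cores_used gauge
number_of_cpu_cores_used{service="my-service"} 0.23
```

A collector scrapes this endpoint repeatedly, attaching a timestamp to each sample, which produces the time series shown above.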


Components of metric pipeline

This repo will help you set up a simple and complete observability pipeline in your docker environment. The metrics-observability-pipeline has the following capabilities:

  1. Set up a collector to collect prometheus format metrics
  2. Set up a collector to collect statsd format metrics
  3. Set up a metrics database to store and serve the collected metrics
  4. Set up a visualization interface to plot these metrics using MetricsQL (the VictoriaMetrics query language)
  5. Set up an alert rules evaluator.
  6. Set up an alert manager to send notifications and manage the evaluated alert rules.

(back to top)

Open Source tools used

Here is a list of all the open source tools we will use:

  • VictoriaMetrics (vmagent, vminsert, vmselect, vmstorage, vmalert) - metrics collection, storage, querying, and alert rule evaluation
  • Telegraf - statsd-format metrics collection
  • Grafana - visualization
  • Prometheus Alertmanager - alert notification management

(back to top)

Getting Started

Let's try this hands-on. To get started on setting up your own pipeline, you will need to set up docker and docker-compose.

Prerequisites

  • Docker set up - Please follow the official get-docker steps to install docker for your operating system.
  • Docker compose set up - Please follow the official get-docker-compose steps to install docker-compose for your operating system.
  • Git client - Please follow this git-guide to install the git client for your operating system.
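You can verify the prerequisites are in place with a quick check (a defensive sketch; the else branch just reports what is missing):

```shell
# Check each prerequisite; print its version if present, a note if not.
for tool in docker docker-compose git; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" --version
  else
    echo "$tool: not found - see the links above"
  fi
done
```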

Installation

  1. Clone the repo
    git clone https://github.com/guptaachin/metrics-observability-pipeline.git
    
  2. Change directory into metrics-observability-pipeline
    cd metrics-observability-pipeline
    
  3. Bring up the metrics-observability-pipeline by running
    ./start-mop
    
    If that errors out on Windows, just run the docker compose command directly
    docker-compose -f ./docker-compose-mop.yaml up -d --force-recreate --build --remove-orphans
    
  4. It might take a while for the previous command to complete, as it downloads the standard docker images for the open source tools.
  5. Run
    docker ps | grep mop
    
    You will see all the mop containers up and running
 % docker ps | grep mop
d9cb8acbc384   victoriametrics/vmagent:v1.83.1             "/vmagent-prod --pro…"   41 seconds ago   Up 41 seconds   0.0.0.0:8429->8429/tcp                                                      mop-vmagent
94c9c8d78b43   victoriametrics/vminsert:v1.83.1-cluster    "/vminsert-prod --st…"   42 seconds ago   Up 41 seconds   0.0.0.0:8480->8480/tcp                                                      mop-vminsert
8a94eb7ea359   victoriametrics/vmalert:v1.83.1             "/vmalert-prod --dat…"   46 seconds ago   Up 46 seconds   0.0.0.0:8880->8880/tcp                                                      mop-vmalert
a7409d3288a3   victoriametrics/vmselect:v1.83.1-cluster    "/vmselect-prod --st…"   47 seconds ago   Up 46 seconds   0.0.0.0:8481->8481/tcp                                                      mop-vmselect
36352fd7ba55   victoriametrics/vmstorage:v1.83.1-cluster   "/vmstorage-prod --s…"   48 seconds ago   Up 47 seconds   0.0.0.0:64250->8400/tcp, 0.0.0.0:64251->8401/tcp, 0.0.0.0:64252->8482/tcp   mop-vmstorage
7bdc90ab1fc1   metrics-observability-pipeline_grafana      "/run.sh"                48 seconds ago   Up 47 seconds   0.0.0.0:3000->3000/tcp                                                      mop-grafana
67de46bf371d   prom/alertmanager:v0.24.0                   "/bin/alertmanager -…"   48 seconds ago   Up 47 seconds   0.0.0.0:9093->9093/tcp                                                      mop-alertmanager
31667c83339d   metrics-observability-pipeline_telegraf     "/entrypoint.sh tele…"   48 seconds ago   Up 47 seconds   8092/udp, 8094/tcp, 0.0.0.0:8125->8125/udp                                  mop-telegraf
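Once the containers are up, you can push a test statsd metric into the pipeline through the Telegraf listener on UDP port 8125 (a sketch using bash's /dev/udp device; the metric name mop.test.counter is made up for illustration):

```shell
# Fire a statsd counter at the Telegraf statsd listener. UDP is
# fire-and-forget, so this succeeds even if nothing is listening -
# check Grafana afterwards to confirm the metric actually arrived.
echo "mop.test.counter:1|c" > /dev/udp/127.0.0.1/8125
```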

Access Grafana

After the pipeline is up and running

  1. Open a new web-browser window.
  2. Launch your locally running Grafana instance http://localhost:3000/
  3. Punch in Grafana credentials.
username: mopadmin
password: moppassword

You can change these login credentials in the grafana.ini file under the grafana directory.
  4. After logging in, try looking at one of the precreated VictoriaMetrics health dashboards by heading over to http://localhost:3000/d/wNf0q_kZk/victoriametrics?orgId=1&refresh=30s
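You can also query the pipeline directly, bypassing Grafana, through the vmselect HTTP API (a defensive sketch; 0 is the default VictoriaMetrics tenant id, and the || branch covers the case where the pipeline is not running):

```shell
# Ask vmselect for the current value of the "up" metric.
curl -s 'http://localhost:8481/select/0/prometheus/api/v1/query?query=up' \
  || echo "pipeline not reachable on localhost:8481"
```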

(back to top)

Install a node exporter on your host machine to export your machine's system stats

  1. Find the right node exporter for your platform from the node-exporter-downloads-page.
  2. Run wget with the correct link.
  3. Unzip: run tar xvfz node_exporter-*.*-amd64.tar.gz in the directory where you downloaded the node exporter.
  4. Change into the unzipped directory: cd node_exporter-*.*-amd64
  5. Run the node exporter: ./node_exporter
  6. This exposes the machine metrics on port 9100. Check them in your browser at http://localhost:9100/metrics
  7. The scrape configs are already included in vmagent.

> For Windows please follow `https://github.com/prometheus-community/windows_exporter`
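To confirm the node exporter is working before checking it in Grafana, you can fetch its metrics locally (a defensive sketch; the || branch just reports when the exporter is not running):

```shell
# Show the first few metrics the node exporter exposes on the host.
curl -s http://localhost:9100/metrics > /tmp/node_metrics.txt \
  && head -n 5 /tmp/node_metrics.txt \
  || echo "node exporter not reachable on localhost:9100"
```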

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

FAQs

1. My node exporter dashboard does not show the right values. How should I proceed?

The node exporter exports slightly different metrics depending on your environment. Please import the right dashboard from https://github.com/rfmoz/grafana-dashboards/tree/master/prometheus. The current dashboard is https://github.com/rfmoz/grafana-dashboards/blob/master/prometheus/node-exporter-freebsd.json

2. How can I deploy this in Kubernetes ?

The most obvious way is to use Kompose to convert the docker compose file to a Helm chart, and then use helm commands to set it up in Kubernetes. Please note I haven't tested this. Setting it up is out of scope for this repo.
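If you want to experiment with that route, the conversion step would look roughly like this (untested, as noted above; the output directory k8s/ is an arbitrary choice):

```shell
# Convert the compose file to Kubernetes manifests (requires kompose).
command -v kompose >/dev/null 2>&1 \
  && kompose convert -f docker-compose-mop.yaml -o k8s/ \
  || echo "kompose not installed - see https://kompose.io"
```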

3. Getting "bind: address already in use" errors. How should I proceed?

This usually happens when the port used by an application like Grafana (3000) is already taken by some other application.
This repo assumes that the standard ports of the open source tools used are vacant on the machine where you are setting up. To solve this, either change the ports mapped in the docker-compose-mop.yaml file or stop the processes hogging those ports.
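For example, to move Grafana off port 3000, you would change the host side of its port mapping in docker-compose-mop.yaml to something like the following (3001 is an arbitrary free port; the service name here is illustrative):

```yaml
services:
  grafana:
    ports:
      - "3001:3000"   # host port 3001 -> container port 3000
```

Only the left-hand (host) side needs to change; the container keeps listening on its standard port.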

4. What is the username and password for Grafana ?

Please check the Access Grafana section. If you want to change the credentials, you can do so in the /grafana/grafana.ini file.

5. The queries I write in Grafana differ from Prometheus queries. Why?

This is because while Grafana uses Prometheus as the datasource type, the backend is actually VictoriaMetrics, which uses MetricsQL, a query language created by adding more features on top of PromQL.
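For example, MetricsQL lets you omit the lookbehind window that PromQL requires in rollup functions, so both of the following work in Grafana here, while only the first is valid PromQL:

```
rate(node_cpu_seconds_total[5m])   # valid in both PromQL and MetricsQL
rate(node_cpu_seconds_total)       # MetricsQL only: the window defaults to the query step
```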

6. Can I use this setup for logs or traces?

You are welcome to fork this repo and contribute. For now, this repo is only for metrics.

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.
Open source libraries like datadog and prometheus-client, and the tools listed under Open Source tools used, are trademarks of their respective companies. We do not intend to claim credit or blame for their work.

(back to top)

Contact

Achin Gupta - Github

Acknowledgments

(back to top)