Read Nagios Cross-Platform Agent (NCPA) metrics and store them in an Elasticsearch cluster, then use Grafana to plot some graphs on a modest dashboard.
Sometimes you work in an environment with a limited set of tools available. But if you have an Elasticsearch cluster at hand and NCPA installed, perhaps because your customer asked for it, you can plot those metrics on a Grafana dashboard.
Storing metrics in ES may only require a few megabytes a day if you have only a few hosts, and you can tweak the script to exclude metrics you don't need if you want to save a few bytes more.
- Copy ncpa2es.py and config.yml-dist somewhere on a system capable of reaching the ncpa_listener on your hosts and your ES cluster.
- Grab a copy of check_ncpa.py and put it in the same place.
- Rename config.yml-dist to config.yml and tailor it to match your setup.
- Run ncpa2es.py at regular intervals, for instance using cron.
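The steps above boil down to a small fetch-and-index loop. Here is a minimal sketch of the idea, not the actual ncpa2es.py code; the metric tree shape follows NCPA's REST API, while the host name and index name stand in for values you would read from config.yml:

```python
import json
from datetime import datetime, timezone

def flatten(tree, prefix=""):
    """Flatten NCPA's nested metric tree into dotted metric names."""
    for key, value in tree.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten(value, name)
        else:
            yield name, value

def bulk_body(host, tree, index):
    """Build an Elasticsearch bulk-API payload: one action line plus one
    document line per metric, as newline-delimited JSON."""
    ts = datetime.now(timezone.utc).isoformat()
    lines = []
    for metric, value in flatten(tree):
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps({"@timestamp": ts, "host": host,
                                 "metric": metric, "value": value}))
    return "\n".join(lines) + "\n"
```

The payload would then be POSTed to the cluster's `_bulk` endpoint with a `Content-Type: application/x-ndjson` header, and a crontab entry along the lines of `*/5 * * * * /usr/bin/python3 /opt/ncpa2es/ncpa2es.py` (paths are examples) takes care of the regular runs.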
If you don't have a working copy of Grafana, you can visit their getting started page.
Once Grafana is up, make sure the Grafana server can reach your ES cluster; it doesn't matter whether it connects directly, through an SSH tunnel, or by querying through your browser.
This is an example datasource; set it up to match your ES cluster. The index name template should match the index defined in the config.yml file.
It's not necessary to use the same user defined in config.yml; in fact, it's considered a best practice to use different users for writing (ncpa2es.py) and reading (Grafana).
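If your cluster has security enabled, that writer/reader split can be scripted against the Elasticsearch security API. The following is a standard-library sketch, not part of this repository; the cluster URL, role names, and user names are all placeholders:

```python
import base64
import json
import urllib.request

ES = "https://es.example.com:9200"  # placeholder cluster URL

def role_body(privileges):
    """A role limited to the ncpa metric indices with the given privileges."""
    return {"indices": [{"names": [".ncpa-metrics-*"], "privileges": privileges}]}

def basic_auth(user, password):
    """Build a Basic auth header for the admin user doing the setup."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def security_put(path, body, headers):
    """PUT a JSON body to the _security API and return the HTTP status."""
    req = urllib.request.Request(f"{ES}{path}", data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json",
                                          **headers},
                                 method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example calls (run with a user allowed to manage security):
#   admin = basic_auth("elastic", "changeme")
#   security_put("/_security/role/ncpa_writer",
#                role_body(["create_index", "index"]), admin)
#   security_put("/_security/role/ncpa_reader",
#                role_body(["read", "view_index_metadata"]), admin)
#   security_put("/_security/user/ncpa2es",
#                {"password": "...", "roles": ["ncpa_writer"]}, admin)
#   security_put("/_security/user/grafana",
#                {"password": "...", "roles": ["ncpa_reader"]}, admin)
```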
Import the dashboard into Grafana using the wizard, choose the previously configured datasource, and you are ready to go.
Yes, have fun!
Now you can play with the panels and queries, set up alerts, and so on. And when you get bored, I will be here to read your complaints; you know what to do.
{
  "order": 0,
  "template": ".ncpa-metrics-*",
  "settings": {
    "index": {
      "number_of_shards": "2",
      "number_of_replicas": "1"
    }
  },
  "mappings": {},
  "aliases": {}
}
I'm working with automatic id generation and automatic mapping, so these are the only settings I've used.
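For reference, the template above can be installed with a plain PUT to the legacy `_template` endpoint before the first run. This helper is a sketch rather than part of the repository, and the template name `ncpa-metrics` is my own choice:

```python
import json
import urllib.request

# The same template shown above.
TEMPLATE = {
    "order": 0,
    "template": ".ncpa-metrics-*",
    "settings": {"index": {"number_of_shards": "2", "number_of_replicas": "1"}},
    "mappings": {},
    "aliases": {},
}

def install_template(es_url):
    """PUT the template so every new .ncpa-metrics-* index picks it up."""
    req = urllib.request.Request(f"{es_url}/_template/ncpa-metrics",
                                 data=json.dumps(TEMPLATE).encode(),
                                 headers={"Content-Type": "application/json"},
                                 method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# install_template("https://es.example.com:9200")
```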
You can use Elasticsearch X-Pack ILM to manage the lifecycle of the indices. (Recommended)
You can also use Elasticsearch Curator if you don't have an X-Pack license.
Another option is to use the Elasticsearch Index API to delete old indices.
And of course, you can keep them forever.
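The "delete old indices yourself" option can be sketched in a few lines, assuming daily indices named `.ncpa-metrics-YYYY.MM.DD` (the actual pattern comes from config.yml); this is an illustration, not code shipped with the project:

```python
from datetime import date, datetime, timedelta
import urllib.request

def expired(names, keep_days, today=None):
    """Return the index names whose date suffix is more than keep_days old."""
    cutoff = (today or date.today()) - timedelta(days=keep_days)
    old = []
    for name in names:
        try:
            day = datetime.strptime(name.rsplit("-", 1)[1], "%Y.%m.%d").date()
        except (IndexError, ValueError):
            continue  # not a dated metrics index, leave it alone
        if day < cutoff:
            old.append(name)
    return old

def purge(es_url, keep_days=30):
    """List the .ncpa-metrics-* indices and delete the expired ones."""
    with urllib.request.urlopen(f"{es_url}/_cat/indices/.ncpa-metrics-*?h=index") as resp:
        names = resp.read().decode().split()
    for name in expired(names, keep_days):
        req = urllib.request.Request(f"{es_url}/{name}", method="DELETE")
        urllib.request.urlopen(req).close()
```

Keeping the "which indices are expired" decision in a pure function makes the retention policy easy to test without touching a live cluster.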
- NagiosEnterprises / ncpa - My reference for extracting ncpa data.
- trevorndodds / elasticsearch-metrics - My inspiration for the whole work.
- Juanjo García - Initial work - juanjo-vlc
This project is licensed under the MIT License - see the LICENSE.md file for details
- Remove dependency from ncpa
- Manage ncpa host failure
- Manage es host failure
- Upload to github
- Convert to a long running process/systemd service
- Reload by signal
- Improve documentation (endless loop)