Right now it's not easy to monitor whether clickhouse_exporter can reach its target.

For example, to solve that problem Prometheus' jmx_exporter exposes a gauge:

```
# HELP jmx_scrape_error Non-zero if this scrape failed.
# TYPE jmx_scrape_error gauge
jmx_scrape_error 0.0
```

that goes to 1 when there is a problem scraping its target. It would be great to have something like this for clickhouse_exporter.
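For reference, a gauge like this can be alerted on directly. A minimal Prometheus alerting-rule sketch (the `clickhouse_scrape_error` metric name, alert name, and labels are hypothetical, mirroring jmx_exporter's `jmx_scrape_error` gauge):

```yaml
groups:
  - name: clickhouse_exporter
    rules:
      - alert: ClickHouseScrapeError
        # Fires when the exporter reports it could not scrape its target.
        # "clickhouse_scrape_error" is a hypothetical metric name; the
        # requested metric would behave like jmx_scrape_error.
        expr: clickhouse_scrape_error == 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "clickhouse_exporter cannot reach its ClickHouse target"
```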
Will look at this
It already exposes `exporter_scrape_failures_total`. In Grafana you can use `sum(exporter_scrape_failures_total) OR vector(0)` (the `OR vector(0)` makes the panel show 0 when the metric is absent, instead of no data).
PS. if you are using grafana you can also look at https://grafana.com/dashboards/882
We are using the Grafana dashboard 👍. But because we have many ClickHouse instances it's getting a bit busy. But that's another problem :-)

I saw the `exporter_scrape_failures_total` metric, but I'm not sure how to alert on it. Alerting on `rate(exporter_scrape_failures_total[1m]) > 0` sounds weird.
Looks legit. If you have further questions feel free to reopen an issue.
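For anyone landing here later: one common way to alert on a failure counter like this is to fire when its rate is non-zero over a window. A sketch of such a Prometheus alerting rule (alert name, window sizes, and labels are illustrative choices, not from this repo):

```yaml
groups:
  - name: clickhouse_exporter
    rules:
      - alert: ClickHouseExporterScrapeFailures
        # rate() over the counter is > 0 only while new scrape failures
        # are being recorded, so the alert clears once scrapes succeed again.
        expr: rate(exporter_scrape_failures_total[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "clickhouse_exporter is failing to scrape ClickHouse"
```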
Add a new metric: healthy_replicas (ClickHouse#1)
3068f41