Alert will not trigger #4231
Comments
The GitHub issues are focused on bug reports. I would recommend looking through the official docs and examples, as well as searching the Google groups or asking in the IRC channel (irc: #prometheus); I am sure someone there will help you out. Feel free to reopen if you are 100% convinced that this is not a support request but a bug in Prometheus.
krasi-georgiev closed this on Jun 7, 2018
No, it's a bug report. This just plain doesn't work, and I see nothing about it anywhere else.
@jurgenweber can you show some logs that might be related to the issue? You are probably familiar with how alerting works, but here is the doc page just in case. @simonpasquier is a bit more familiar with alerting, so he might give us some more ideas.
krasi-georgiev reopened this on Jun 8, 2018
Hi, I tried debug logs, which are very noisy. I left it running for a while and searched for the name of the alert without result, but when searching for the metric I sometimes see:
(I will keep watching the logs for another hour or so and see if anything interesting pops up.) Screenshot: http://take.ms/qbHEs; the config is provided in the original post. It's a simple alert. Please note that other alerts look fine; it's not like this is the first alert I have ever made. This one just will not trigger even though its condition is true, so I feel like I am hitting some weird edge case. I have researched it for days (because it just seemed so simple, what is going on?!), and my counterpart worked on it as well and hit the same dead end I have. Thanks
I meant the job config; I just wanted to see the scrape frequency, as according to the docs this might have an influence. The screenshot shows a different alert config than the original post, but I assume you tried many different variations?
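For context, the scrape frequency in question lives in the Prometheus scrape config. A minimal sketch of what such a job might look like (the job name, interval, and target address here are illustrative assumptions, not taken from the thread's actual config):

```yaml
scrape_configs:
  - job_name: cloudwatch-exporter     # illustrative job name
    # CloudWatch metrics update slowly, so a long interval is common;
    # the actual value in the reporter's Helm chart is not shown in the thread.
    scrape_interval: 5m
    static_configs:
      - targets: ['cloudwatch-exporter:9106']   # default exporter port, assumed
```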
Ah, right.
The rest are defaults as per the Helm chart. Yeah, many different variants... The screenshot is 'what we want', and then I tried to make it simpler: removing the `!=` matcher (`aws_ebs_burst_balance_maximum{volume_id!="vol-0c3627f137e583133"} < 80` for 5m vs `aws_ebs_burst_balance_maximum < 90` for 1m), increasing the threshold to ensure there was always something that should be triggering it, etc. You will note both the warning and critical alerts we are after are in the screenshot. Neither works.
Maybe this screenshot is a bit more meaningful/helpful: http://take.ms/2Iv1I. On the left is the alert; on the right, a graph showing two series that are under that limit and should be triggering it right now.
You didn't mention that you were running a bleeding-edge cloudwatch exporter binary, which includes as-yet-unreleased code. This is not a problem with Prometheus; you need to add an offset.
@brian-brazil that is interesting. Is the offset needed because the scraped data is 10 min behind? The graph shows a gap at the end, so I guess that confirms it.
Yes.
Well, I had no way to know that it is some 'bleeding edge' thing; it doesn't have a sign on it, I just found it in my travels. I will add an offset, thanks for your help.
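A hedged sketch of what the fixed rule might look like. The metric name and thresholds are from the thread; the `10m` offset matches the reported delay in the CloudWatch data, while the alert name, group name, and labels are illustrative assumptions:

```yaml
groups:
  - name: ebs.rules          # illustrative group name
    rules:
      - alert: EBSBurstBalanceLow   # illustrative alert name
        # 'offset 10m' shifts the query window back, compensating for
        # CloudWatch samples that arrive ~10 minutes late; without it the
        # most recent timestamps fall outside the evaluation window and
        # the condition never appears to be true.
        expr: aws_ebs_burst_balance_maximum offset 10m < 90
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "EBS burst balance below 90 for {{ $labels.volume_id }}"
```

The same `offset` modifier can be applied to the stricter per-volume variant from the thread (`... {volume_id!="vol-0c3627f137e583133"} offset 10m < 80`).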
jurgenweber closed this on Jun 10, 2018
lock bot commented Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
jurgenweber commented Jun 7, 2018 (edited)
Bug Report
What did you do?
Added an alert.
What did you expect to see?
When the alert condition is met, it alerts.
What did you see instead? Under which circumstances?
No alert.
Environment
Using the helm chart:
https://github.com/kubernetes/charts/tree/master/stable/prometheus
System information:
repository: prom/prometheus
Prometheus version:
tag: v2.2.1
Alertmanager version:
repository: prom/alertmanager
tag: v0.14.0
Prometheus configuration file:
I don't get it; it should be simple, but it just does not alert. I have two EBS volumes (data exported using https://github.com/kubernetes/charts/tree/master/stable/prometheus-cloudwatch-exporter) that are constantly under the 90 threshold, and it never alerts.
http://take.ms/gphy5
Also other alerts work just fine.
Thanks