
Alertmanager relabel config not being honoured #3239

Closed
luke-orden opened this issue Oct 4, 2017 · 12 comments

Comments

@luke-orden
Contributor

luke-orden commented Oct 4, 2017

What did you do?
I have two alertmanagers set in my config, and I am trying to send only certain alerts to each:

alerting:
  alertmanagers:
    - static_configs:
      - targets:
        - localhost:9093
      relabel_configs:
        - action: drop
          source_labels: [secure]
          regex: yes 
    - static_configs:
      - targets:
        - localhost:9095

What did you expect to see?
I would expect alerts with the label {secure="yes"} to be dropped for target localhost:9093.

What did you see instead? Under which circumstances?
All alerts, including ones with {secure="yes"}, were sent to both alertmanagers.

If I set the relabel config under alert_relabel_configs, all alerts with {secure="yes"} are dropped, but the docs suggest that I should be able to set different relabel_configs for each alertmanager.

Here is the config when dropping {secure="yes"} for both alertmanagers:

alerting:
  alert_relabel_configs:
    - action: drop
      source_labels: [secure]
      regex: yes
  alertmanagers:
    - static_configs:
      - targets:
        - localhost:9093
    - static_configs:
      - targets:
        - localhost:9095

Environment

  • System information:
$ uname -srm
Linux 4.10.0-35-generic x86_64
  • Prometheus version:
$ ./prometheus -version
prometheus, version 1.7.1 (branch: master, revision: 3afb3fffa3a29c3de865e1172fb740442e9d0133)
  build user:       root@0aa1b7fc430d
  build date:       20170612-11:44:05
  go version:       go1.8.3
  • Prometheus configuration file:
global:
  scrape_interval:     5s
  evaluation_interval: 5s

rule_files:
- alerts

scrape_configs:
  - job_name:       'test 1'
    static_configs:
      - targets: ['localhost:8080']
    relabel_configs:
      - replacement: 'device1'
        target_label: instance
  - job_name:       'test 2'
    static_configs:
      - targets: ['localhost:8080']
    relabel_configs:
      - replacement: 'device2'
        target_label: instance

alerting:
  alertmanagers:
    - static_configs:
      - targets:
        - localhost:9093
      relabel_configs:
        - action: drop
          source_labels: [secure]
          regex: yes
    - static_configs:
      - targets:
        - localhost:9095

Alerts file:

ALERT test_alert_1
  IF up{instance="device1"} == 0
  LABELS { secure = "yes" }
  ANNOTATIONS {
    description = "{{ $labels.instance }} is currently unreachable",
  }

ALERT test_alert_2
  IF up{instance!="device1"} == 0
  LABELS { secure = "no" }
  ANNOTATIONS {
    description = "{{ $labels.instance }} is currently unreachable",
  }
luke-orden added a commit to luke-orden/prometheus that referenced this issue Oct 12, 2017
Currently the `relabel_configs` under each `alertmanager_config` are not
honoured - see prometheus#3239.

I have made it so we run a relabel process for each of the alertmanager sets. So that the relabeling doesn't change the alerts for other alertmanager sets, I have had to create a new func called `amsRelabelAlerts`, which updates a copy of the alert rather than the alert itself. This does duplicate code a little, so there may be a better way to do this; suggestions welcome.
@brian-brazil
Contributor

The relabel configs at this level are part of service discovery to select the alertmanager, not to change the alerts themselves. So this is the expected behaviour.
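
As a rough sketch (reusing the static targets from the config above), relabel_configs at this level operates on the discovered target labels such as __address__, so it can keep or drop alertmanager targets, but it never sees the alert labels:

alerting:
  alertmanagers:
    - static_configs:
      - targets:
        - localhost:9093
        - localhost:9095
      relabel_configs:
        # Keep only the 9093 target; the 9095 target is dropped from this
        # alertmanager set. The alerts themselves are not touched.
        - action: keep
          source_labels: [__address__]
          regex: localhost:9093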

@luke-orden
Contributor Author

luke-orden commented Dec 8, 2017

The relabel configs at this level are part of service discovery to select the alertmanager

This suggests that I should be able to select which alertmanager gets which alert(s), which is what I am trying to do, but it isn't working as expected. Am I misunderstanding your point?

@brian-brazil
Contributor

No, they're to select the alertmanagers that all alerts are sent to. Consider that if you were on Kubernetes, the SD would return many different pods; these relabel_configs let you limit that down to just the ones running the Alertmanager.
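
A minimal sketch of that Kubernetes case (the pod label app=alertmanager is an assumption, purely for illustration):

alerting:
  alertmanagers:
    - kubernetes_sd_configs:
      - role: pod
      relabel_configs:
        # Keep only discovered pods labelled app=alertmanager; every other
        # pod returned by service discovery is dropped as a target.
        - action: keep
          source_labels: [__meta_kubernetes_pod_label_app]
          regex: alertmanager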

@luke-orden
Contributor Author

Ah I see what you mean. Thanks

@luke-orden
Contributor Author

Is there any other way to have only certain alertmanagers receive only certain alerts?

@brian-brazil
Contributor

No, that's not possible currently. What's your use case?

@luke-orden
Contributor Author

I plan to have one alertmanager for standard monitoring, and one for auto-remediation. The auto-remediation alertmanager will be locked down and part of a secure pipeline. I plan to have the auto-remediation alerts contain an auth token which I do not want sent to the standard monitoring alertmanager.

@brian-brazil
Contributor

I'm not seeing how that works; labels are not secret. I also don't understand why you need a second alertmanager for that.

@luke-orden
Contributor Author

luke-orden commented Dec 9, 2017

I need to add authentication to the auto-remediation alertmanager as I do not want anyone to be able to send alerts to it, which could cause an automatic action.

I would prefer not to send alerts to both if they are not needed on both alertmanagers.

@brian-brazil
Contributor

Things are built with the assumption that the alertmanagers are homogeneous, I think you might be over-complicating things for yourself a bit.

@luke-orden
Contributor Author

Shame

@lock

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Mar 23, 2019