How to put a description in a Slack message? #307

Closed
hryamzik opened this Issue Apr 13, 2016 · 25 comments

Comments

@hryamzik

I would expect the following syntax to work, but it doesn't:

  slack_configs:
  - channel: '#alerts'
    text: '{{ .description }} {{ .value }}'
@SleepyBrett

SleepyBrett commented Apr 15, 2016

It seems like it would be best if the Slack notifier supported attachments.

https://api.slack.com/docs/attachments


@fabxc

fabxc commented Apr 16, 2016

Member

We have to write proper documentation on Alertmanager templating. There's prometheus/docs#359 for that already.
The references {{ .description }} and {{ .value }} don't point anywhere.

What would you expect them to be? It seems like you want an alert's annotations in there.
Since one notification can be about multiple alerts, that is not directly valid.

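For context, a minimal sketch of what such a template can look like (the receiver name and channel below are placeholders, not from this thread): .CommonAnnotations only carries annotations whose values are identical across every alert in the notification, while per-alert annotations are reached by ranging over .Alerts. Both approaches come up later in this thread.

receivers:
- name: 'example-slack'      # hypothetical receiver name
  slack_configs:
  - channel: '#alerts'
    # Annotations shared by all alerts in this notification:
    title: "{{ .CommonAnnotations.summary }}"
    # Per-alert annotations, one line per alert:
    text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"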

@hryamzik

hryamzik commented Apr 16, 2016

That's a good point! So there's no way to include annotations, values, and the triggered targets? I'm fine with a list, or a single target selected by max/min value, for example.


@hryamzik

hryamzik commented Apr 21, 2016

By the way, here's the JSON message sent to a webhook receiver:

{
    "receiver": "admins-critical",
    "status": "resolved",
    "alerts": [
        {
            "status": "resolved",
            "labels": {
                "alertname": "node_down",
                "env": "prod",
                "instance": "testhost.local:9100",
                "job": "node",
                "monitor": "prometheus",
                "severity": "critical"
            },
            "annotations": {
                "description": "testhost.local:9100 of job node has been down for more than 5 minutes.",
                "summary": "Instance testhost.local:9100 down"
            },
            "startsAt": "2016-04-21T20:14:37.698Z",
            "endsAt": "2016-04-21T20:15:37.698Z",
            "generatorURL": "https://monitoring.promehteus.local/graph#%5B%7B%22expr%22%3A%22up%20%3D%3D%200%22%2C%22tab%22%3A0%7D%5D"
        }
    ],
    "groupLabels": {
        "alertname": "node_down",
        "instance": "testhost.local:9100"
    },
    "commonLabels": {
        "alertname": "node_down",
        "env": "prod",
        "instance": "testhost.local:9100",
        "job": "node",
        "monitor": "prometheus",
        "severity": "critical"
    },
    "commonAnnotations": {
        "description": "testhost.local:9100 of job node has been down for more than 5 minutes.",
        "summary": "Instance testhost.local:9100 down"
    },
    "externalURL": "https://am.promehteus.local",
    "version": "3",
    "groupKey": somenumber
}

I can see annotations here!

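The keys in this payload map directly onto the template data used in the rest of the thread: alerts becomes .Alerts, commonAnnotations becomes .CommonAnnotations, groupLabels becomes .GroupLabels, and so on (field names are capitalized in templates even though the JSON keys are lowercase). For example, a sketch of a Slack text that uses two of them:

text: "{{ .CommonAnnotations.summary }} ({{ .Status }})"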

@hryamzik

hryamzik commented Apr 27, 2016

- name: 'admins'
  slack_configs:
  - channel: '#alerts'
    text: "<!channel> \ndescription: {{ .CommonAnnotations.description }}\nsummary: {{ .CommonAnnotations.summary }}"
    send_resolved: True

@fabxc

fabxc commented Apr 27, 2016

Member

Does this resolve the question for you?


@hryamzik

hryamzik commented Apr 27, 2016

Absolutely; I failed to close it due to GitHub availability issues this morning.


@hryamzik hryamzik closed this Apr 27, 2016

@krestjaninoff

krestjaninoff commented Jun 10, 2016

@fabxc I think the point that @hryamzik mentioned is very useful and important. I would suggest adding it to prometheus/docs#359.


@zevarito

zevarito commented Aug 16, 2016

@hryamzik Where can I get that JSON structure you posted?


@hryamzik

hryamzik commented Aug 16, 2016

@zevarito with tcpdump or tcpflow. ;-)


@zevarito

zevarito commented Aug 16, 2016

Hehe, alright. I thought it was something like that, but I was hoping it was accessible in some other way ;)

I'm having serious trouble notifying Slack: the same alerts are sometimes posted with the description filled in and sometimes with it empty. If you have a clue about what could be causing it, please let me know. Thanks!


@hryamzik

hryamzik commented Aug 16, 2016

@zevarito if you have grouped alerts with different labels, only the annotations whose values are equal across all alerts in the group are included in CommonAnnotations. So if the alerts' descriptions differ, the description is omitted.

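A sketch of the workaround that comes up later in this thread: ranging over .Alerts prints each alert's own description even when they differ, and a template can fall back to that whenever the common value is empty.

text: "{{ if .CommonAnnotations.description }}{{ .CommonAnnotations.description }}{{ else }}{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}{{ end }}"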

@zevarito

zevarito commented Aug 16, 2016

@hryamzik That would make sense, I'll check. Thanks.


@zevarito

zevarito commented Aug 16, 2016

It doesn't seem to be the problem; I removed the grouping and the same thing still happens. I will try to provide a test example for it.


@ngu04

ngu04 commented Sep 8, 2016

@hryamzik how do you get the JSON from tcpdump? Are you filtering on port 9093 for Alertmanager or 9090 for Prometheus? I am not able to get the JSON from tcpdump.


@hryamzik

hryamzik commented Sep 8, 2016

@ngu04 I set up an HTTPS service and pointed Alertmanager at it via a webhook URL. There's also a JSON exchange between Prometheus and Alertmanager, but that's not the one you're interested in.

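For anyone who wants to capture the payload the same way, a sketch of the Alertmanager side, assuming a hypothetical debug endpoint that simply logs whatever body is POSTed to it:

receivers:
- name: 'debug'                           # hypothetical receiver
  webhook_configs:
  - url: 'https://debug-endpoint.local/'  # hypothetical service that logs the JSON body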

@zevarito

zevarito commented Sep 9, 2016

Nilesh, I think you should define an annotation, say "summary", in your ALERT definition; it will then become available as .CommonAnnotations.summary in your alerting templates/config.

2016-09-08 12:54 GMT-03:00 Nilesh Gupta notifications@github.com:

@hryamzik https://github.com/hryamzik text: "{{ .CommonAnnotations.description }}" is getting resolved as text: "". Did you have any such issue? Also, I did not manage to get the JSON from tcpdump.

Basically, text: "{{ .CommonAnnotations.description }}" is not working for me. I am on Alertmanager version v0.3.0 and Prometheus version 0.18.0.


@ngu04

ngu04 commented Sep 9, 2016

I managed to get the JSON, but commonAnnotations is empty. Maybe something is not right in my alert rules. The following is my alert rule:

ALERT InstanceDown
  IF up == 0
  FOR 5m
  LABELS { severity = "critical" }
  ANNOTATIONS {
    summary = "Instance {{ $labels.instance }} down",
    description = "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.",
  }


@ngu04

ngu04 commented Sep 9, 2016

@fabxc @hryamzik @zevarito In the JSON I am getting annotations under alerts, and commonAnnotations is empty. So I tried {{ .Alerts.annotations.description }}, which is also not working.

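{{ .Alerts.annotations.description }} cannot work because .Alerts is a list, not a single alert, and template field names are capitalized (Annotations, not annotations) even though the JSON keys are lowercase. A sketch of the usual way, in line with the suggestions further down the thread:

text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"

or, to pick just the first alert in the group:

text: "{{ (index .Alerts 0).Annotations.description }}"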

@ngu04

ngu04 commented Sep 12, 2016

The following is my config YAML:

route:
  receiver: 'slack_my_dev_alerts'
  group_by: ['alertname', 'cluster']
  group_wait: 30s
  group_interval: 1m
  repeat_interval: 3h
  routes:
  - match_re:
      severity: ^([a-zA-Z0-9 ]*)$
    receiver: slack_my_alerts
    continue: true
  - match_re:
      severity: ^([a-zA-Z0-9 ]*)$
    receiver: custom_webhook

inhibit_rules:
- source_match:
    severity: 'critical'
  target_match:
    severity: 'warning'
  # Apply inhibition if the alertname is the same.
  equal: ['alertname']

receivers:
- name: 'slack_my_alerts'
  slack_configs:
  - api_url: 'hide'
    channel: '#my-alerts'
    text: "{{ .CommonAnnotations.description }}"

- name: 'slack_my_dev_alerts'
  slack_configs:
  - api_url: 'hide'
    channel: '#my-alerts-dev'
    text: "{{ .CommonAnnotations.description }}"
    send_resolved: true

- name: 'custom_webhook'
  webhook_configs:
  - url: 'http://localhost:8080'

@kevin-bockman

kevin-bockman commented Feb 23, 2017

I was having the same problems. This worked for me: it loops through the alerts and prints them on new lines. My alert setup is like here.

receivers:
  - name: 'default-receiver'
    slack_configs:
    - channel: 'alerts'
      title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
      text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"

@r4j4h

r4j4h commented Mar 3, 2017

To follow up with further information, I found {{ printf "%+v" . }} and {{ printf "%#v" . }} useful for showing the entire data object passed to the template.

Here's an example of the %#v output to illustrate the structure (which confirms hryamzik's structure above):

& template.Data {
    Receiver: "my_receiver",
    Status: "firing",
    Alerts: template.Alerts {
        template.Alert {
            Status: "firing",
            Labels: template.KV {
                "alertname": "PrometheusAlertSendLatencySlow"
            },
            Annotations: template.KV {
                "description": "Prometheus is experiencing over 10.000436954 seconds of latency dispatching alerts to AlertManager.",
                "summary": "Prometheus is experiencing over a full second of latency dispatching alerts to AlertManager."
            },
            StartsAt: time.Time {
                sec: 63624162217,
                nsec: 803000000,
                loc: ( * time.Location)(0xbac320)
            },
            EndsAt: time.Time {
                sec: 0,
                nsec: 0,
                loc: ( * time.Location)(nil)
            },
            GeneratorURL: "https://prometheus.default.whatever.com/graph?g0.expr=max%28prometheus_notifications_latency_seconds%29+%3E+1&g0.tab=0"
        }
    },
    GroupLabels: template.KV {
        "alertname": "PrometheusAlertSendLatencySlow"
    },
    CommonLabels: template.KV {
        "alertname": "PrometheusAlertSendLatencySlow"
    },
    CommonAnnotations: template.KV {
        "summary": "Prometheus is experiencing over a full second of latency dispatching alerts to AlertManager.",
        "description": "Prometheus is experiencing over 10.000436954 seconds of latency dispatching alerts to AlertManager."
    },
    ExternalURL: "https://alertmanager.whatever.com"
}

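For example (a sketch; the channel name is a placeholder), the whole data object can be dumped into the Slack message while debugging:

slack_configs:
- channel: '#alerts-debug'
  text: '{{ printf "%+v" . }}'
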
@civik

civik commented Mar 10, 2017

I think I have a pretty good grip on using the Group* and Common* values after reading this thread. I'm still having difficulty accessing data outside of the group union, however. Take this example case: a filesystem alert where I group by [alertname, instance] and I want to access the individual 'device' label values of each alert in the group.

EDIT: As usual, 30 seconds after I post, I figure it out. This is how I did it; hope it helps somebody:

{{ range .Alerts }} {{ .Labels.device }} {{ end }}

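Building on that, a sketch that combines the group-level labels with the per-alert ones (label names taken from the example above; adjust to your own alerts):

title: "{{ .GroupLabels.alertname }} on {{ .GroupLabels.instance }}"
text: "{{ range .Alerts }}{{ .Labels.device }}: {{ .Annotations.description }}\n{{ end }}"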

@pir1981

pir1981 commented May 18, 2017

@kevin-bockman is there a way to do this split of alerts for Logstash as well? I tried it with title and text and that does not work. I assume that's because a <webhook_config> is structured differently?

receivers:
  - name: 'default-receiver'
    slack_configs:
    - channel: 'alerts'
      title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
      text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"

  - name: 'logstash'
    webhook_configs:
      - send_resolved: true
        url: "http://logstash:8080/"
       ?????
       ????

Thanks

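As far as I can tell, <webhook_config> only takes fields such as url and send_resolved; there are no title/text template options, and the full JSON document shown earlier in this thread is always POSTed as-is, so any per-alert splitting has to happen on the Logstash side. A sketch of the Alertmanager half:

  - name: 'logstash'
    webhook_configs:
      - send_resolved: true
        url: "http://logstash:8080/"
        # No templating here; the receiver gets the complete grouped payload
        # (alerts[], commonAnnotations, groupLabels, ...) in one JSON document.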

@dogopupper

dogopupper commented Nov 27, 2017

!tagged
