
How to put description to slack message? #307

Closed · hryamzik opened this issue Apr 13, 2016 · 29 comments

@hryamzik commented Apr 13, 2016

I would expect the following syntax to work, but it doesn't:

  slack_configs:
  - channel: '#alerts'
    text: '{{ .description }} {{ .value }}'
@SleepyBrett commented Apr 15, 2016

It seems like it would be best if the slack notifier supported attachments.

https://api.slack.com/docs/attachments
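
(For reference, and hedged since this thread doesn't confirm it directly: the Slack notifier does render its message as a Slack attachment, and slack_config exposes several attachment properties as templatable fields. A minimal sketch with placeholder values; see the fuller example later in this thread:)

slack_configs:
- channel: '#alerts'
  color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
  pretext: '{{ .CommonAnnotations.summary }}'
  fallback: '{{ template "slack.default.fallback" . }}'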

@fabxc (Member) commented Apr 16, 2016

We have to write proper documentation on Alertmanager templating. There's prometheus/docs#359 for that already.
The references {{ .description }} and {{ .value }} don't point anywhere.

What would you expect them to be? It seems like you want an alert's annotations in there.
Since one notification is about multiple alerts, that is not valid as written.
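
(A minimal sketch of references that do resolve, assuming the alert rules define a description annotation: values shared by every alert in the notification live under .CommonAnnotations, per-alert values under .Alerts.)

slack_configs:
- channel: '#alerts'
  # Renders only when all alerts in the notification share the same description:
  text: '{{ .CommonAnnotations.description }}'
  # Or render each alert's own annotation:
  # text: '{{ range .Alerts }}{{ .Annotations.description }} {{ end }}'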

@hryamzik (Author) commented Apr 16, 2016

That's a good point! So there's no way to include annotations, values, and triggered targets? I'd be fine with a list, or with a single target selected by max/min value, for example.

@hryamzik (Author) commented Apr 21, 2016

By the way, here's a JSON message sent to a webhook receiver:

{
    "receiver": "admins-critical",
    "status": "resolved",
    "alerts": [
        {
            "status": "resolved",
            "labels": {
                "alertname": "node_down",
                "env": "prod",
                "instance": "testhost.local:9100",
                "job": "node",
                "monitor": "prometheus",
                "severity": "critical"
            },
            "annotations": {
                "description": "testhost.local:9100 of job node has been down for more than 5 minutes.",
                "summary": "Instance testhost.local:9100 down"
            },
            "startsAt": "2016-04-21T20:14:37.698Z",
            "endsAt": "2016-04-21T20:15:37.698Z",
            "generatorURL": "https://monitoring.promehteus.local/graph#%5B%7B%22expr%22%3A%22up%20%3D%3D%200%22%2C%22tab%22%3A0%7D%5D"
        }
    ],
    "groupLabels": {
        "alertname": "node_down",
        "instance": "testhost.local:9100"
    },
    "commonLabels": {
        "alertname": "node_down",
        "env": "prod",
        "instance": "testhost.local:9100",
        "job": "node",
        "monitor": "prometheus",
        "severity": "critical"
    },
    "commonAnnotations": {
        "description": "testhost.local:9100 of job node has been down for more than 5 minutes.",
        "summary": "Instance testhost.local:9100 down"
    },
    "externalURL": "https://am.promehteus.local",
    "version": "3",
    "groupKey": somenumber
}

I can see annotations here!
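
(Those payload fields are exposed to notification templates under capitalized names; a short sketch, assuming Go template syntax in a notifier's text field:)

{{ .Status }}
{{ .CommonAnnotations.description }}
{{ range .Alerts }}{{ .Labels.instance }}: {{ .Annotations.summary }}{{ end }}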

@hryamzik (Author) commented Apr 27, 2016

- name: 'admins'
  slack_configs:
  - channel: '#alerts'
    text: "<!channel> \ndescription: {{ .CommonAnnotations.description }}\nsummary: {{ .CommonAnnotations.summary }}"
    send_resolved: True
@fabxc (Member) commented Apr 27, 2016

Does this resolve the question for you?

@hryamzik (Author) commented Apr 27, 2016

Absolutely! I failed to close it due to GitHub availability issues this morning.

@hryamzik hryamzik closed this Apr 27, 2016
@krestjaninoff commented Jun 10, 2016

@fabxc I think the point that @hryamzik mentioned is very useful and important. I would suggest adding it to prometheus/docs#359.

@zevarito commented Aug 16, 2016

@hryamzik Where can I get that JSON structure you posted?

@hryamzik (Author) commented Aug 16, 2016

@zevarito with tcpdump or tcpflow. ;-)

@zevarito commented Aug 16, 2016

Hehe, alright, I thought it was something like that but was hoping it was accessible in some other way. ;)

I'm having serious trouble notifying Slack: the same alerts are sometimes posted with the description filled in and sometimes with it empty. If you have a clue about what could be causing it, please let me know. Thanks!


@hryamzik (Author) commented Aug 16, 2016

@zevarito if you have grouped alerts with different tags, only the ones with equal values across the whole group will be included. So if the alerts' descriptions differ, the description will be omitted.
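
(A minimal sketch of both behaviors, with a placeholder text field:)

# Empty whenever the grouped alerts' descriptions differ:
text: '{{ .CommonAnnotations.description }}'
# This renders every alert's own description instead:
# text: '{{ range .Alerts }}{{ .Annotations.description }}{{ "\n" }}{{ end }}'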

@zevarito commented Aug 16, 2016

@hryamzik that would make sense; I'll check, thanks.

@zevarito commented Aug 16, 2016

It doesn't seem to be the problem; I removed the grouping and the same thing still happens. I will try to provide a test example for it.

@ngu04 commented Sep 8, 2016

@hryamzik how do you get the JSON from tcpdump? Are you filtering on port 9093 for Alertmanager or 9090 for Prometheus? I am not able to get the JSON from tcpdump.

@hryamzik (Author) commented Sep 8, 2016

@ngu04 I've set up an HTTP service and pointed Alertmanager at it via a webhook URL. There's also a JSON exchange between Prometheus and Alertmanager, but that's not the one you're interested in.
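
(A minimal sketch of such a capture service, assuming Python 3; the port and handler names are placeholders. Point a webhook_configs url at it and the full notification JSON gets printed:)

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CaptureHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and pretty-print the Alertmanager notification payload.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(json.dumps(json.loads(body), indent=2))
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 5001), CaptureHandler).serve_forever()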

@zevarito commented Sep 9, 2016

Nilesh, I think you should define a field, say "summary", in your ALERT definition, and then it will become available as .CommonAnnotations.summary for your alerting templates/config.

(In reply to Nilesh Gupta's email: "text: "{{ .CommonAnnotations.description }}" is getting resolved as text: "". Did you have any such issue? Also, I am not able to get the JSON from tcpdump. Basically, text: "{{ .CommonAnnotations.description }}" is not working for me. I am on Alertmanager v0.3.0 and Prometheus 0.18.0.")

@ngu04 commented Sep 9, 2016

I managed to get the JSON, but commonAnnotations is empty. Maybe something is not right in my alert rules. The following is my alert rule:

ALERT InstanceDown
  IF up == 0
  FOR 5m
  LABELS { severity = "critical" }
  ANNOTATIONS {
    summary = "Instance {{ $labels.instance }} down",
    description = "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.",
  }

@ngu04 commented Sep 9, 2016

@fabxc @hryamzik @zevarito In the JSON, I am getting annotations under alerts, and commonAnnotations is empty. So I tried {{ .Alerts.annotations.description }}, which is also not working.
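
(For reference: .Alerts is a list, so it has no annotations field of its own; per-alert annotations are reached by ranging over it, e.g.:)

text: '{{ range .Alerts }}{{ .Annotations.description }}{{ "\n" }}{{ end }}'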

@ngu04 commented Sep 12, 2016

The following is my config YAML:

route:
  receiver: 'slack_my_dev_alerts'
  group_by: ['alertname', 'cluster']
  group_wait: 30s
  group_interval: 1m
  repeat_interval: 3h
  routes:
  - match_re:
      severity: ^([a-zA-Z0-9 ]*)$
    receiver: slack_my_alerts
    continue: true
  - match_re:
      severity: ^([a-zA-Z0-9 ]*)$
    receiver: custom_webhook

inhibit_rules:
- source_match:
    severity: 'critical'
  target_match:
    severity: 'warning'
  # Apply inhibition if the alertname is the same.
  equal: ['alertname']

receivers:
- name: 'slack_my_alerts'
  slack_configs:
  - api_url: 'hide'
    channel: '#my-alerts'
    text: "{{ .CommonAnnotations.description }}"

- name: 'slack_my_dev_alerts'
  slack_configs:
  - api_url: 'hide'
    channel: '#my-alerts-dev'
    text: "{{ .CommonAnnotations.description }}"
    send_resolved: true

- name: 'custom_webhook'
  webhook_configs:
  - url: 'http://localhost:8080'
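
(A likely explanation, hedged since it isn't confirmed here: the rule's description interpolates {{ $labels.instance }} while this route groups by ['alertname', 'cluster'], so a group spanning several instances has differing descriptions, leaving .CommonAnnotations.description empty. Ranging over .Alerts, as in the next comment, avoids this:)

text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"
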
@kevin-bockman commented Feb 23, 2017

I was having the same problems. This worked for me: it loops through the alerts and prints each on a new line. My alert setup is like the one here.

receivers:
  - name: 'default-receiver'
    slack_configs:
    - channel: 'alerts'
      title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
      text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"
@r4j4h commented Mar 3, 2017

To follow up with further information, I found {{ printf "%+v" . }} and {{ printf "%#v" . }} useful for showing the entire object of an alert.

Here's an example of %#v to illustrate structure (which validates hryamzik's structure above):

&template.Data {
    Receiver: "my_receiver",
    Status: "firing",
    Alerts: template.Alerts {
        template.Alert {
            Status: "firing",
            Labels: template.KV {
                "alertname": "PrometheusAlertSendLatencySlow"
            },
            Annotations: template.KV {
                "description": "Prometheus is experiencing over 10.000436954 seconds of latency dispatching alerts to AlertManager.",
                "summary": "Prometheus is experiencing over a full second of latency dispatching alerts to AlertManager."
            },
            StartsAt: time.Time {
                sec: 63624162217,
                nsec: 803000000,
                loc: (*time.Location)(0xbac320)
            },
            EndsAt: time.Time {
                sec: 0,
                nsec: 0,
                loc: (*time.Location)(nil)
            },
            GeneratorURL: "https://prometheus.default.whatever.com/graph?g0.expr=max%28prometheus_notifications_latency_seconds%29+%3E+1&g0.tab=0"
        }
    },
    GroupLabels: template.KV {
        "alertname": "PrometheusAlertSendLatencySlow"
    },
    CommonLabels: template.KV {
        "alertname": "PrometheusAlertSendLatencySlow"
    },
    CommonAnnotations: template.KV {
        "summary": "Prometheus is experiencing over a full second of latency dispatching alerts to AlertManager.",
        "description": "Prometheus is experiencing over 10.000436954 seconds of latency dispatching alerts to AlertManager."
    },
    ExternalURL: "https://alertmanager.whatever.com"
}
@civik commented Mar 10, 2017

I think I have a pretty good grip on using the Group* and Common* values after reading this thread. I'm still having difficulty accessing data OUTSIDE of the group union, however. Take this example case: a filesystem alert where I group by [alertname, instance] and want to access the individually differing 'device' label values within each group.

EDIT: As usual, 30 seconds after I post, I figure it out. This is how I did it; hope it helps somebody:

{{ range .Alerts }} {{ .Labels.device }} {{ end }}

@pir1981 commented May 18, 2017

@kevin-bockman is there a way to do the same splitting of alerts for Logstash? I tried it with title and text, and that does not work. I assume that's because a <webhook_config> is structured differently?

receivers:
  - name: 'default-receiver'
    slack_configs:
    - channel: 'alerts'
      title: "{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}"
      text: "{{ range .Alerts }}{{ .Annotations.description }}\n{{ end }}"

  - name: 'logstash'
    webhook_configs:
      - send_resolved: true
        url: "http://logstash:8080/"
       ?????
       ????

Thanks
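
(For reference, and hedged since the thread doesn't confirm it: <webhook_config> takes essentially a url plus flags like send_resolved; it has no title/text template fields and always POSTs the full JSON payload shown earlier in this thread, so per-alert splitting has to happen on the receiving side, e.g. with Logstash's split filter on the alerts array.)

- name: 'logstash'
  webhook_configs:
  - send_resolved: true
    url: "http://logstash:8080/"
    # No title/text here; split/format the payload in Logstash instead.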

@dogopupper commented Nov 27, 2017

!tagged

@valferon commented Aug 20, 2018

This displayed alerts in a nice way:

receivers:
- name: slack_webhook
  slack_configs:
  - send_resolved: false
    api_url: https://something.slack.com/APIURLSECRET
    channel: alerts
    username: '{{ template "slack.default.username" . }}'
    color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
    title: '{{ template "slack.default.title" . }}'
    title_link: '{{ template "slack.default.titlelink" . }}'
    pretext: '{{ .CommonAnnotations.summary }}'
    text: |-
      {{ range .Alerts }}
         *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
        *Description:* {{ .Annotations.description }}
        *Details:*
        {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
        {{ end }}
      {{ end }}
    fallback: '{{ template "slack.default.fallback" . }}'
    icon_emoji: '{{ template "slack.default.iconemoji" . }}'
    icon_url: '{{ template "slack.default.iconurl" . }}'
templates: '/etc/alertmanager/config/*.tmpl'
@VR6Pete commented Oct 2, 2018

Hi @valferon - looks like a good example, but it produces an error relating to templates.

Would it be possible to share the templates you have in '/etc/alertmanager/config/*.tmpl'?

Thanks.

Pete

@valferon commented Oct 10, 2018

Hey @VR6Pete,
the last line starting with templates: isn't actually part of the receivers config; I've edited it out.

The previous alerts were also too verbose for my liking, and I've since changed to this:

global:
  slack_api_url: https://hooks.slack.com/services/[xxxxxxxxxxxxxxxxxxxxxxx]
receivers:
- name: default-receiver
  slack_configs:
  - channel: '#[prometheus-channel]'
    send_resolved: true
    text: |-
      {{ range .Alerts }}
         *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
        *Description:* {{ .Annotations.description }}
        *Details:*
        {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
        {{ end }}
      {{ end }}

Hope this helps (otherwise try posting your error)

@MohanSai1997 commented Mar 30, 2020

For people who want to display the data: start the Flask app with the code below and point the url in your <webhook_config> at it (with this code, http://<host>:5001/alert).

from flask import Flask
from flask import request

app = Flask(__name__)

# Alertmanager POSTs the notification payload as JSON to this endpoint.
@app.route("/alert", methods=['POST'])
def handle_url():
    # Dump the full payload (the same structure shown earlier in this thread).
    print(request.json)

    return {"result": "data updated"}


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001, debug=True)