Alert annotations disappear when using templating #2424

Closed
clockworksoul opened this Issue Feb 14, 2017 · 2 comments

clockworksoul commented Feb 14, 2017

Apologies if this belongs elsewhere.

What did you do?

Created a configuration with three rules (copy/pasted below):

  • One emits plain text only, and works fine.
  • One uses {{ $value }} in its description; the description fails to appear.
  • One uses {{ $labels.instance }} in its description; it fails in the same way.

What did you expect to see?

The following output in Slack:

Test plain text
Nothing to see here.

This is K8STest1
This is just the value: 123

This is K8STest2
Kubelet foo.bar is running.

What did you see instead? Under which circumstances?

Test plain text
Nothing to see here.

This is K8STest1

This is K8STest2

Environment

  • System information:

Deploying in a Kops-managed K8s cluster

  • Prometheus version:

    v1.5.2

  • Alertmanager version:

    v0.5.1

  • Prometheus configuration file:

    ALERT K8SPlainText
      IF kubelet_running_pod_count > 0
      LABELS {
        service = "k8s",
        severity = "warning",
      }
      ANNOTATIONS {
        summary = "Test plain text",
        description = "Nothing to see here.",
      }

    ALERT K8STest1
      IF kubelet_running_pod_count > 0
      LABELS {
        service = "k8s",
        severity = "warning",
      }
      ANNOTATIONS {
        summary = "This is K8STest1",
        description = "This is just the value: {{ $value }}",
      }

    ALERT K8STest2
      IF kubelet_running_pod_count > 0
      LABELS {
        service = "k8s",
        severity = "warning",
      }
      ANNOTATIONS {
        summary = "This is K8STest2",
        description = "Kubelet {{ $labels.instance }} is running.",
      }
  • Alertmanager configuration file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-main
data:
  alertmanager.yaml: |-
    global:
      resolve_timeout: 5m

    route:
      group_wait: 10s
      group_interval: 1m
      repeat_interval: 1m
      receiver: 'slack'

    receivers:
    - name: 'slack'
      slack_configs:
      - api_url: 'https://hooks.slack.com/services/SomeLongWebhook'
        text: '{{ .CommonAnnotations.description }}'
        title: '{{ .CommonAnnotations.summary }}'
        send_resolved: true
  • Relevant logs output:

This may or may not be relevant: output of alertmanager-main on config (re)load:

time="2017-02-14T16:00:39Z" level=info msg="Loading configuration file" file="/etc/alertmanager/config/alertmanager.yaml" source="main.go:195" 
time="2017-02-14T16:00:39Z" level=error msg="Error on notify: context canceled" source="notify.go:272" 
time="2017-02-14T16:00:39Z" level=error msg="Notify for 14 alerts failed: context canceled" source="dispatch.go:246" 
time="2017-02-14T16:00:39Z" level=error msg="Error on notify: context canceled" source="notify.go:272" 
time="2017-02-14T16:00:39Z" level=error msg="Notify for 14 alerts failed: context canceled" source="dispatch.go:246" ```

fabxc (Member) commented Mar 2, 2017

Alertmanager attempts to aggregate alerts along the grouping labels you specify. In your AM config there are none, which means that all alerts go into one large group.
The problem is that CommonAnnotations, which you are using for templating, only contains annotations that every alert in the group actually shares. Because annotations are templated per alert before being sent to Alertmanager (for example, {{ $value }} renders to a different number for each instance), they end up differing between the alerts in your setup, and such annotations are not part of CommonAnnotations.

You still have all the alerts within the notification available to iterate over (see the sketch below), but that's usually not a good idea, as there may be many.
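A minimal sketch of a Slack receiver that ranges over the individual alerts and prints each alert's own annotations instead of relying on .CommonAnnotations (the .Alerts and .Annotations fields are part of Alertmanager's standard notification template data; the title string here is just a placeholder):

```yaml
receivers:
- name: 'slack'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/SomeLongWebhook'
    title: 'Alerts'
    # Render every alert's own summary and description; per-alert
    # annotations are not guaranteed to end up in .CommonAnnotations.
    text: |-
      {{ range .Alerts }}{{ .Annotations.summary }}: {{ .Annotations.description }}
      {{ end }}
    send_resolved: true
```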

Generally, you probably want to avoid instance altogether, as it doesn't make for good alert aggregation. But if you want that level of granularity, you can add group_by: [instance] to your AM route, sketched below.
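For example, a sketch of the same route with grouping added (grouping on alertname plus instance is illustrative; pick the keys that match how you want notifications aggregated):

```yaml
route:
  # Group alerts per rule and per instance so the annotations within
  # each group are identical and therefore survive into CommonAnnotations.
  group_by: ['alertname', 'instance']
  group_wait: 10s
  group_interval: 1m
  repeat_interval: 1m
  receiver: 'slack'
```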

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
