
Chained notifications have empty body #2304

Closed
rickynils opened this issue Sep 10, 2018 · 2 comments
@rickynils rickynils commented Sep 10, 2018

(this issue might be related to #2226 and/or a comment on #1583)

I have alerts with chained notifications that look like this (you can ignore the mail notifications here; they work fine. It's the HTTP POST notifications that don't work correctly):

template notification {
  body1 = BODY1
  body2 = BODY2
}

notification mail2 {
  print = true
  runOnActions = false
  email = ...
  timeout = 1m
  next = null
}

notification mail1 {
  print = true
  runOnActions = false
  email = ...
  timeout = 1m
  next = mail2
}

notification phone2 {
  print = true
  runOnActions = false
  post = https://...
  bodyTemplate = body2
  timeout = 1m
  next = null
}

notification phone1 {
  print = true
  runOnActions = false
  post = https://...
  bodyTemplate = body1
  timeout = 1m
  next = phone2
}

lookup notifications {
  entry environment=staging {
    crit = null
    warn = null
  }
  entry environment=production {
    crit = mail1,phone1
    warn = mail1
  }
  entry environment=* {
    crit = null
    warn = null
  }
}

template alert1 {
  inherit = notification
  subject = ...
  body = ...
}

alert alert1 {
  template = alert1
  warn = ...
  crit = ...
  critNotification = lookup("notifications", "crit")
  warnNotification = lookup("notifications", "warn")
}

I have a low timeout (1m) for debug purposes. Longer timeouts behave the same.

When a critical alert is triggered, the first notifications (phone1 and mail1) work as expected. However, if I don't acknowledge the alert and let it go on to the next notification (phone2), the HTTP POST request is performed with an empty body (size 0).

Look at the logs below:

UTC 180910 13:43:25.777 info: notify.go:59: alert1/critical
UTC 180910 13:43:25.777 info: notify.go:251: relayed email alert1{...} to [...] sucessfully. Subject: 44 bytes. Body: 1089 bytes.
UTC 180910 13:43:26.321 info: notify.go:59: Subject: alert1/critical, Body: BODY1

UTC 180910 13:44:25.572 info: notify.go:59: alert1/critical
UTC 180910 13:44:25.572 info: notify.go:251: relayed email alert1{...} to [...] sucessfully. Subject: 44 bytes. Body: 1089 bytes.
UTC 180910 13:44:25.905 info: notify.go:59: Subject: alert1/critical, Body:
UTC 180910 13:44:25.905 error: notify.go:54: sending http: bad response for 'phone2' alert notification using template key 'body2' for alert keys alert1{...} method POST: 400

Look at the last two lines. Bosun correctly picked the body2 field, but it never rendered it to BODY2 as expected; instead it just POSTed an empty body, causing a 400 Bad Request at the HTTP endpoint.
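To make the expected behaviour concrete: each hop in the `next` chain should re-render its own `bodyTemplate` key. This is a minimal sketch of that expectation in Python (my own illustration, not Bosun's actual code; the dict shapes are assumptions):

```python
# Hypothetical model of a notification chain (not Bosun's real code):
# each notification names a template key, and walking the `next` chain
# should render that key for every hop, never an empty body.
templates = {"body1": "BODY1", "body2": "BODY2"}

notifications = {
    "phone1": {"bodyTemplate": "body1", "next": "phone2"},
    "phone2": {"bodyTemplate": "body2", "next": None},
}

def render_chain(first, notifications):
    """Walk the `next` chain, yielding (name, rendered body) per hop."""
    name = first
    while name is not None:
        n = notifications[name]
        # The bug reported here: the second hop's lookup produced an
        # empty body instead of the rendered template value.
        body = templates.get(n["bodyTemplate"], "")
        yield name, body
        name = n.get("next")

print(list(render_chain("phone1", notifications)))
# [('phone1', 'BODY1'), ('phone2', 'BODY2')]
```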

@muffix muffix commented Nov 19, 2019

Hi @rickynils, apologies for the late response. I didn't manage to reproduce the bug with the minimal config below and the latest code from master.

template notification {
  body1 = BODY1
  body2 = BODY2
}

notification phone2 {
  print = true
  runOnActions = false
  post = https://httpbin.org/post
  bodyTemplate = body2
}

notification phone1 {
  print = true
  runOnActions = false
  post = https://httpbin.org/post
  bodyTemplate = body1
  timeout = 10s
  next = phone2
}

template alert1 {
  inherit = notification
  subject = subject from template
  body = body from template
}

alert alert1 {
  template = alert1
  crit = 1
  critNotification = phone1
}

This config produced the expected output:

2019/11/19 17:00:47 info: web.go:220: bosun web listening http on: :8070
2019/11/19 17:00:47 info: alertRunner.go:104: runHistory on alert1 took 3.308ms
2019/11/19 17:00:47 info: notify.go:95: type: alert; name: phone1; transport: http_POST; dst: https://httpbin.org/post; body: BODY1
2019/11/19 17:00:57 info: check.go:561: check alert alert1 start with now set to 2019-11-19 17:00:57.304724
2019/11/19 17:00:57 info: check.go:609: check alert alert1 done (613µs): 1 crits, 0 warns, 0 unevaluated, 0 unknown
2019/11/19 17:00:57 info: alertRunner.go:104: runHistory on alert1 took 1.626ms
2019/11/19 17:01:07 info: notify.go:95: type: alert; name: phone2; transport: http_POST; dst: https://httpbin.org/post; body: BODY2

Do you have any further instructions on how to reproduce the issue?
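One way to inspect exactly what Bosun POSTs (my own debugging suggestion, not something from this thread) is to point the `post` setting at a tiny local endpoint that logs request bodies. A minimal sketch using only the Python standard library:

```python
# A throwaway local endpoint for debugging: it logs the size and content
# of every POST body it receives, so an empty-body request from the
# second notification in the chain would show up as "0 bytes".
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"received POST, {length} bytes: {body!r}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence the default per-request access log

def serve(port=8071):
    """Run until interrupted; set `post = http://localhost:8071/` to use it."""
    HTTPServer(("", port), EchoHandler).serve_forever()
```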

@stale stale bot commented Nov 13, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Nov 13, 2020
@stale stale bot closed this Dec 13, 2020