
Alertmanager HA external URL #3890

Closed
nelsonfassis opened this Issue Feb 26, 2018 · 2 comments

nelsonfassis commented Feb 26, 2018

What did you do?
This is my Alertmanager StatefulSet so far:
spec:
  containers:
  - name: alertmanager
    image: prom/alertmanager:v0.12.0
    args:
    - '--config.file=/etc/alertmanager/config.yml'
    - '--storage.path=/alertmanager'
    - '--web.external-url=http://myk8sdomain.com:31080/alertmanager'
    - '--mesh.listen-address=:6783'
    - '--mesh.peer=alertmanager-mesh'
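Note: because --web.external-url above includes the /alertmanager path component, Alertmanager will by default also serve its internal routes (including /api/v1/alerts) under that same prefix. Below is a minimal sketch of the same args list with --web.route-prefix added so the API stays at the root path; the flag exists in current Alertmanager releases, but whether this particular 0.12.0 image supports it is an assumption.

    args:
    - '--config.file=/etc/alertmanager/config.yml'
    - '--storage.path=/alertmanager'
    - '--web.external-url=http://myk8sdomain.com:31080/alertmanager'
    # Hypothetical addition: keep internal routes at / even though the
    # external URL advertises the /alertmanager prefix.
    - '--web.route-prefix=/'
    - '--mesh.listen-address=:6783'
    - '--mesh.peer=alertmanager-mesh'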

Alertmanager service:
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  labels:
    app: alertmanager
spec:
  ports:
  - port: 9093
    name: alertmanager
  clusterIP: None
  selector:
    app: alertmanager
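For the per-pod DNS names used in the Prometheus targets below (alertmanager-0.alertmanager, alertmanager-1.alertmanager) to resolve, the StatefulSet must name this headless Service as its governing service. A minimal sketch, assuming the StatefulSet is called alertmanager and runs two replicas:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alertmanager
spec:
  serviceName: alertmanager  # must match the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      # containers as shown in the StatefulSet snippet above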

Prometheus configuration:
alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - alertmanager-0.alertmanager:9093
      - alertmanager-1.alertmanager:9093
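Since --web.external-url carries the /alertmanager path, Alertmanager's push endpoint would by default live at /alertmanager/api/v1/alerts rather than /api/v1/alerts, which would explain the 404s reported below. A hedged variant of the alerting block using Prometheus's path_prefix field (the field is part of the standard alertmanager_config; the prefix value itself is an assumption tied to the external URL above):

alerting:
  alertmanagers:
  - scheme: http
    # Only needed if Alertmanager keeps serving its routes under the
    # external URL's path component.
    path_prefix: /alertmanager
    static_configs:
    - targets:
      - alertmanager-0.alertmanager:9093
      - alertmanager-1.alertmanager:9093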

What did you expect to see?
I expected Prometheus (however many instances are running) to send alerts to Alertmanager, which would deduplicate them and send notifications to my Slack channel.
What did you see instead? Under which circumstances?
If I have --web.external-url set:

level=error ts=2018-02-26T20:22:13.937418135Z caller=notifier.go:444 component=notifier alertmanager=http://alertmanager-0.alertmanager:9093/api/v1/alerts count=1 msg="Error sending alert" err="bad response status 404 Not Found"

If I comment out --web.external-url:
Alerts are properly sent to Alertmanager, which notifies my Slack. But when I click the link in the Slack alert, it opens http://alertmanager-1:9093/#/alerts?receiver=slack_alert instead of http://myk8sdomain.com:31080/alertmanager/alerts?receiver=slack_alert.

I've also tried exposing it via a NodePort Service to make sure the Ingress wasn't the problem here; same result.

Environment

  • System information:

    insert output of uname -srm here

  • Prometheus version:
    2.0.0

  • Alertmanager version:
    0.12.0

  • Logs:
 level=info ts=2018-02-26T20:21:48.984925901Z caller=main.go:394 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2018-02-26T20:21:48.989696818Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:48.991574227Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:48.993095013Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:48.994535886Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:48.995950911Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:48.997557533Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:48.998983519Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:49.000719298Z caller=kubernetes.go:100 component="target manager" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-26T20:21:49.00367165Z caller=main.go:371 msg="Server is ready to receive requests."
level=error ts=2018-02-26T20:22:13.937418135Z caller=notifier.go:444 component=notifier alertmanager=http://alertmanager-0.alertmanager:9093/api/v1/alerts count=1 msg="Error sending alert" err="bad response status 404 Not Found"
level=error ts=2018-02-26T20:22:13.938614548Z caller=notifier.go:444 component=notifier alertmanager=http://alertmanager-1.alertmanager:9093/api/v1/alerts count=1 msg="Error sending alert" err="bad response status 404 Not Found" 
brian-brazil commented Mar 8, 2018

It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 22, 2019
