
Error connecting Prometheus to Alertmanager when both are running behind reverse proxies. #4621

Closed
FireDrunk opened this issue Sep 17, 2018 · 2 comments

FireDrunk commented Sep 17, 2018

Bug Report

What did you do?
Configure Prometheus & Alertmanager to work behind a reverse proxy.

What did you expect to see?
A working Prometheus -> Alertmanager connection, with alerts being pushed.

What did you see instead? Under which circumstances?
Prometheus crashes :'(

Environment

  • System information:

    Docker EE (UCP 3.0.1)

  • Prometheus version:

prom/prometheus:latest

  • Alertmanager version:

prom/alertmanager:latest

  • Prometheus configuration file:
global:
  scrape_interval:     15s
  evaluation_interval: 15s
  external_labels:
    monitor: 'monitoring'

rule_files:
  - 'alert.rules'

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - alertmanager:9093/alertmanager/

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'container-exporter'
    scrape_interval: 5s
    dns_sd_configs:
    - names:
      - 'tasks.container-exporter'
      type: 'A'
      port: 9104

  - job_name: 'cadvisor'
    scrape_interval: 5s
    dns_sd_configs:
    - names:
      - 'tasks.cadvisor'
      type: 'A'
      port: 8080

  - job_name: 'node-exporter'
    scrape_interval: 5s
    dns_sd_configs:
    - names:
      - 'tasks.node-exporter'
      type: 'A'
      port: 9100

  • Extra startup command:
- '--web.external-url=http://non-relevant-domain-name/prometheus/'
  • Alertmanager configuration file:
global:
  http_config:
    tls_config:
      insecure_skip_verify: true
    proxy_url: '<not relevant>'

route:
  receiver: 'slack'

receivers:
  - name: 'slack'
    slack_configs:
      - send_resolved: true
        username: 'docker-ucp-alertmanager'
        channel: '<not relevant>'
        api_url: '<redacted>'

  • Extra startup command:
- '--web.external-url=http://non-relevant-domain-name/alertmanager/'
  • Logs:
level=error ts=2018-09-17T11:49:51.937458837Z caller=main.go:617 err="error loading config from \"/etc/prometheus/prometheus.yml\": couldn't load configuration (--config.file=\"/etc/prometheus/prometheus.yml\"): parsing YAML file /etc/prometheus/prometheus.yml: \"alertmanager:9093/alertmanager/\" is not a valid hostname"

Because I'm using a reverse proxy, I have to tell both Prometheus and Alertmanager that they are served from a different URL. This works fine, but I should also be able to tell Prometheus that Alertmanager is running under a different base URL.

When using the base address (alertmanager:9093) I get 404s on Alertmanager's side, which makes sense.
Upon configuring the extra context root (/alertmanager/), Prometheus crashes.

I couldn't find any option to override the path part of the target, so I guess this is both a bug and a feature request ;)

return fmt.Errorf("%q is not a valid hostname", address)

This seems to be the point where the error is raised, but in my case the URL is valid, just not supported.
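
For context (my own illustration, not taken from the issue): the targets entries under static_configs are parsed as plain host:port pairs, so any address with a path component is rejected at config load time. The path has to be supplied elsewhere, as the follow-up comment below shows.

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - alertmanager:9093                   # accepted: host:port only
      # - alertmanager:9093/alertmanager/   # rejected: "is not a valid hostname"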

FireDrunk (Author) commented Sep 17, 2018

And right after posting this, I found a solution:

alerting:
  alertmanagers:
    - scheme: http
      path_prefix: /alertmanager
      static_configs:
        - targets:
          - alertmanager:9093

The path_prefix option was not easy to find in the documentation.
Perhaps some additional documentation on the subject would help.
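
To tie the pieces together (a minimal sketch of my own, with hypothetical service and file names, not taken from the issue): with the path_prefix fix above, each service's --web.external-url should carry the same path the reverse proxy routes to it, roughly like this in a compose/stack file:

version: "3.3"
services:
  prometheus:
    image: prom/prometheus:latest
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.external-url=http://non-relevant-domain-name/prometheus/'
  alertmanager:
    image: prom/alertmanager:latest
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
      - '--web.external-url=http://non-relevant-domain-name/alertmanager/'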

FireDrunk closed this Sep 17, 2018

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 22, 2019
