It takes Prometheus 5m to notice a metric is not available #1810

Closed
tzach opened this Issue Jul 13, 2016 · 4 comments

tzach commented Jul 13, 2016

What did you do?
I have a service reporting metrics via collectd_exporter.
After killing the service, it takes collectd_exporter a few seconds to reflect that and stop exposing the service's metrics. It then takes Prometheus an additional 5 minutes to reflect the fact that no new metrics are coming in.
It looks like Prometheus caches the last value of a metric for 5 minutes.
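
For reference, a minimal way to observe this in the expression browser (the metric name below is a placeholder for one of the metrics collectd_exporter exposes in my setup):

# Instant query; `collectd_cpu` and the instance label are placeholders.
# Right after the service is killed this still returns the last scraped
# sample, and only about 5 minutes later does the series drop out of the
# instant query result.
collectd_cpu{instance="x.x.x.x:9103"}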

What did you expect to see?
Prometheus immediately reflecting the fact that no metrics are available.

What did you see instead? Under which circumstances?
Prometheus kept showing the old value for 5 minutes.

Environment
Using the latest Docker image:

sudo docker run -d -v $PWD/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml -p 9090:9090 prom/prometheus
  • System information:

Linux 4.4.6-201.fc22.x86_64 x86_64

  • Prometheus version:

0.18.0

  • Prometheus configuration file:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'scylla-monitor'

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'scylla'

    target_groups:
      - targets: ['x.x.x.x:9103','y.y.y.y:9103']
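
For completeness, this is how I check that the scrape itself still succeeds after the service is killed, so the stale value is not caused by a failed scrape (the instance label is one of the targets above):

# `up` is set by Prometheus on every scrape: 1 on success, 0 on failure.
# If it stays at 1 after the service dies, collectd_exporter is still
# reachable and the old value comes from the lookback behaviour rather
# than a scrape failure.
up{job="scylla", instance="x.x.x.x:9103"}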

brian-brazil commented Jul 13, 2016

This is #398; there's not much you can do here until that is resolved.


fabxc commented Jul 14, 2016

Thanks for the report. As Brian said, this is well known and covered by another issue, so closing here.

fabxc closed this Jul 14, 2016


tzach commented Jul 14, 2016

Thanks, will follow #398

lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 24, 2019
