Context Deadline Exceeded #2546
Comments
I am able to reach the `/metrics` endpoint, but it takes around 5-10 seconds to load.
@SRAM85 The default scrape timeout is 10 seconds. If the endpoint sometimes takes longer than that to respond, the scrape is cancelled and reported as "context deadline exceeded". You could try increasing the timeout globally:

```yaml
global:
  scrape_timeout: 30s
```

...(or per scrape config)
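For the per-scrape-config variant mentioned above, the timeout can be set on an individual job instead of globally. A minimal sketch (the job name and target address below are placeholders, not from this thread):

```yaml
scrape_configs:
  - job_name: "kafka-jmx"        # placeholder job name
    scrape_timeout: 30s          # overrides the global scrape_timeout for this job only
    static_configs:
      - targets: ["kafka-host:7071"]   # placeholder jmx_exporter address
```

Note that `scrape_timeout` must not exceed the job's `scrape_interval`, or Prometheus will refuse to load the configuration.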
brian-brazil closed this on Apr 5, 2017
srikanthdixit commented Jun 7, 2017

Hello @juliusv, below is my prometheus.yml configuration file:

```yaml
# my global config
global:
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  - "first.rules"
  - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label
```
xyr115 commented Oct 12, 2018

Any movement on this?
weskinner commented Dec 18, 2018

The fix for me was editing the NetworkPolicy associated with the Pod Prometheus was trying to scrape.
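The comment above doesn't show the policy itself; a minimal sketch of a NetworkPolicy that permits Prometheus to reach a target Pod might look like the following. All names, labels, and the port number are placeholders (assumptions, not from this thread) and must be adjusted to the actual cluster:

```yaml
# Hypothetical example: allow ingress from the Prometheus namespace to the
# scrape target's metrics port. Labels, namespaces, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: my-app                  # namespace of the Pod being scraped
spec:
  podSelector:
    matchLabels:
      app: my-app                    # labels of the target Pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring       # namespace where Prometheus runs
      ports:
        - protocol: TCP
          port: 8080                 # the target's metrics port
```

Without such a rule, a default-deny NetworkPolicy silently drops the scrape request, and Prometheus reports "context deadline exceeded" once the scrape timeout elapses.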
SRAM85 commented Mar 30, 2017 (edited by juliusv)
What did you do?
Using jmx_exporter to expose metrics from bare-metal servers to Prometheus, which is hosted on another bare-metal server. We graph the metrics in Grafana.
What did you expect to see?
Up and running targets, with metrics information being scraped.
What did you see instead? Under which circumstances?
Targets for some jobs are up, but the targets for jobs scraping Kafka metrics are down with "context deadline exceeded", and I am not able to see metrics in Grafana. Increasing the number of chunks or the memory of the jmx_exporter did not help. Restarting Prometheus did not help either.
Environment

- System information: Linux 3.10.0-327.10.1.el7.x86_64 x86_64
- Prometheus version: prometheus-1.5.2.linux-amd64
- Error: context deadline exceeded