Memory leak in Kubernetes discovery #4095
grobie added the kind/bug, priority/P1, and component/service discovery labels on Apr 17, 2018
grobie changed the title from "Memory leak in Prometheus Kubernetes discovery" to "Memory leak in Kubernetes discovery" on Apr 17, 2018
Built from 2cbba4e now (finally a version without known data races, so that we can rule out races as a source of chaos). Still seeing the memory leak. New interesting finding: the K8s pod targets were not updated (Prometheus tried to scrape pods that no longer existed and failed to scrape the new pods). I could fix that by sending SIGHUP, which also brought the escalating memory usage back to normal levels. This seems to be caused by something in the K8s SD getting stuck.
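For reference, the workaround mentioned above amounts to triggering a configuration reload, which also re-applies the service discovery configuration. A rough sketch, assuming a single local Prometheus process listening on the default localhost:9090:

```sh
# Send SIGHUP to trigger a configuration reload.
kill -HUP "$(pidof prometheus)"

# Alternative: HTTP reload endpoint (only available if Prometheus was
# started with --web.enable-lifecycle).
curl -X POST http://localhost:9090/-/reload
```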
Fixed by #4117.
beorn7 closed this on Apr 30, 2018
siggy referenced this issue on May 23, 2018: upgrade Prometheus beyond v2.2.1 prior to 0.5.0 release #987 (closed)
lock bot commented on Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
grobie commented Apr 17, 2018 (edited)
Bug Report
What did you do?
Ran Prometheus v2.2.1 plus latest race patches (v2.2.1...f8dcf9b) for several days.
What did you expect to see?
Stable memory footprint.
What did you see instead? Under which circumstances?
Memory leak, daily crashes.
Environment
System information:
Linux 4.4.10+soundcloud x86_64
Prometheus version:
custom built from f8dcf9b
Prometheus configuration file:
22 jobs (17 kubernetes_sd from a big cluster, 1 ec2_sd, 2 dns_sd, 2 static_config)
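The configuration file itself was not attached; purely as an illustration of that job mix, a minimal sketch might look as follows (all job names, regions, DNS names, and targets below are invented placeholders, not taken from the actual setup):

```yaml
scrape_configs:
  # 17 jobs of this shape, discovering targets in one large cluster
  - job_name: k8s-pods                        # placeholder name
    kubernetes_sd_configs:
      - role: pod
  # 1 EC2-discovered job
  - job_name: ec2-nodes                       # placeholder name
    ec2_sd_configs:
      - region: us-east-1                     # placeholder region
  # 2 DNS-discovered jobs
  - job_name: dns-service                     # placeholder name
    dns_sd_configs:
      - names: ['_metrics._tcp.example.com']  # placeholder SRV name
  # 2 statically configured jobs
  - job_name: static-targets                  # placeholder name
    static_configs:
      - targets: ['localhost:9100']           # placeholder target
```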
Profiles
pprof top 10 comparison
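For anyone trying to reproduce the comparison above: Prometheus exposes the standard Go pprof endpoints on its web port, so heap snapshots can be captured at two points in time and diffed. A rough sketch, assuming the default listen address localhost:9090:

```sh
# Capture two heap profiles some time apart, while memory is growing.
curl -s http://localhost:9090/debug/pprof/heap > heap-before.pb.gz
sleep 3600
curl -s http://localhost:9090/debug/pprof/heap > heap-after.pb.gz

# Show where retained memory grew between the two snapshots.
go tool pprof -base heap-before.pb.gz heap-after.pb.gz
# (pprof) top 10
```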