Prometheus 1.4.1 leaves (a lot of) sockets on scraped node in CLOSE_WAIT state #2388
Comments
Further debugging indicates this is some form of bug on the exporter side causing an overload; Prometheus appears to be behaving correctly.
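Since the debugging above points at the exporter side rather than Prometheus, one common mitigation on that side, assuming a Go exporter built with prometheus/client_golang, is to give the exporter's HTTP server explicit timeouts so stalled scrapes are torn down instead of piling up as half-closed connections. The listen address and timeout values below are illustrative, not from this thread:

// Sketch only: an exporter HTTP server with explicit timeouts, so that
// connections Prometheus has already closed do not linger in CLOSE_WAIT
// while handlers are overloaded.
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())

	srv := &http.Server{
		Addr:         ":9100", // illustrative listen address
		Handler:      mux,
		ReadTimeout:  10 * time.Second, // bound time spent reading a request
		WriteTimeout: 30 * time.Second, // bound time spent writing a scrape response
	}
	log.Fatal(srv.ListenAndServe())
}

Note that http.Server's IdleTimeout only exists from Go 1.8 onwards, so on older toolchains such as the go1.7.3 used for the Prometheus build below, only ReadTimeout and WriteTimeout are available.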
brian-brazil closed this Feb 11, 2017
iggyzap commented Feb 1, 2017 (edited)
Hi,
Issue at hand
We are running Prometheus in production, and recently we have started to see a situation where it kills servers under pressure by leaving scraped nodes with a lot of connections in the CLOSE_WAIT state.
Expected behaviour:
0 connections in the CLOSE_WAIT state, since Prometheus should follow HTTP and the underlying protocols and close connections correctly after a scrape.
Observations
We are observing a lot of open sockets left in the CLOSE_WAIT state.
This is happening in the AWS ap-south-1 region.
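A quick way to confirm the build-up on the scraped node is to count the CLOSE_WAIT entries. A rough sketch (Linux only, IPv4 sockets; the file name and state code come from the /proc/net/tcp format, where state 08 is CLOSE_WAIT):

// Sketch: count sockets in CLOSE_WAIT by scanning /proc/net/tcp.
// /proc/net/tcp6 would need the same treatment for IPv6 sockets.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/net/tcp")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	count := 0
	scanner := bufio.NewScanner(f)
	scanner.Scan() // skip the header line
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// The fourth column ("st") is the socket state; 08 is CLOSE_WAIT.
		if len(fields) > 3 && fields[3] == "08" {
			count++
		}
	}
	fmt.Printf("CLOSE_WAIT sockets: %d\n", count)
}

The same number can also be obtained directly on the node with netstat or ss.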
System information:
Linux 4.4.19-29.55.amzn1.x86_64 x86_64
Prometheus version:
/usr/local/prometheus/prometheus -version
prometheus, version 1.4.1 (branch: master, revision: 2a89e87)
build user: root@e685d23d8809
build date: 20161128-09:59:22
go version: go1.7.3