Error too many open files and file already closed - Prometheus 2.0 #3563
Comments
Try increasing the ulimit for open files. It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
brian-brazil closed this on Dec 8, 2017
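As context for the suggestion above: the per-process open-file limit is normally raised outside Prometheus itself, e.g. with `ulimit -n` in the startup shell or `LimitNOFILE=` in a systemd unit. Purely as an illustration (a minimal sketch, not Prometheus code), the Go program below shows how a Linux process can inspect its RLIMIT_NOFILE values and raise its soft limit up to the hard limit:

```go
// Minimal sketch: read the current open-file limits and raise the soft limit
// to the hard limit. Raising the hard limit itself requires privileges, which
// is why the limit is usually configured in the service environment instead.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)

	// Bump the soft limit as far as the hard limit allows.
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
}
```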
Keleir commented on Feb 7, 2018
andreasnuesslein commented on Feb 20, 2018
Hey @brian-brazil and folks :) We're having the same problem, running into a bunch of these:
as well as
It happens only minutes/hours after restarting Prometheus, not after days. Any help would be highly appreciated. Thanks!
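As a diagnostic aid (an illustrative sketch, not something posted in this thread), one way to see how close a process is to its open-file limit on Linux is to count the entries under /proc/<pid>/fd, for example:

```go
// Diagnostic sketch: count the file descriptors a process holds open by
// listing /proc/<pid>/fd on Linux. The PID is passed as the first argument
// (e.g. the PID of the running Prometheus server).
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: fdcount <pid>")
		os.Exit(1)
	}
	pid := os.Args[1]
	entries, err := os.ReadDir("/proc/" + pid + "/fd")
	if err != nil {
		panic(err)
	}
	fmt.Printf("process %s has %d open file descriptors\n", pid, len(entries))
}
```

Comparing that count against the soft limit reported by `ulimit -n` for the Prometheus process indicates whether the limit is actually being approached.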
andreasnuesslein commented on Feb 21, 2018
I haven't changed anything, but now it seems to be fine... I don't know what happened. Please ignore the above. If it reappears I shall open a new issue.
lock bot commented on Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
zarkoc commented on Dec 8, 2017 (edited)
Hi all,
I'm running Prometheus 2.0 and scraping AWS targets. We have around 2000 targets running node_exporter that are scraped. We have only a couple of relabeling options in the config; everything else is left at the defaults.
The same configuration works as expected on Prometheus 1.8.
Are there some tweaks that need to be made on Prometheus 2.0 (like there were for Prometheus 1.x) for the storage engine to be able to handle this many targets?
The disk where the data is stored isn't full; it's only 6% full, with around 500 GB free.
We get the following errors in the Prometheus logs: