Error too many open files and file already closed - Prometheus 2.0 #3563
Try increasing the ulimit for open files. It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
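Following up on the ulimit suggestion above: on Linux, a quick way to check the current soft limit and a running Prometheus process's effective limit is sketched below (the process name `prometheus` and the systemd override path are assumptions; adjust for your setup):

```shell
# Soft limit on open files for the current shell
ulimit -Sn

# Effective limit of a running Prometheus process, if one exists
# (PID lookup assumes the binary is named "prometheus")
pid=$(pgrep -x prometheus || true)
if [ -n "$pid" ]; then
  grep 'Max open files' "/proc/$pid/limits"
fi

# For a systemd-managed service, the limit can be raised with an
# override file, e.g. /etc/systemd/system/prometheus.service.d/limits.conf:
#   [Service]
#   LimitNOFILE=65536
# then: systemctl daemon-reload && systemctl restart prometheus
```

Raising the limit only in the interactive shell does not affect a service started by systemd, which is why the override file is usually the relevant fix.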
Hey @brian-brazil and folks :) We're having the same problem, running into a bunch of these:
as well as
It happens only minutes/hours after restarting Prometheus, not after days. Any help would be highly appreciated. Thanks!
I haven't changed anything, but now it seems to be fine... I don't know what happened. 😕 Please ignore the above. If it reappears I shall open a new issue.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Hi all,
I'm running Prometheus 2.0 and scraping AWS targets. We have around 2000 targets running node_exporter that are scraped. We have only a couple of relabeling options in the config; everything else is at its default.
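The exact scrape config isn't shown in the thread; as a hedged sketch, an EC2 service-discovery job with a couple of relabeling rules might look like the following (the job name, region, and tag names here are illustrative assumptions, not taken from the report):

```yaml
scrape_configs:
  - job_name: 'node'            # hypothetical job name
    ec2_sd_configs:
      - region: us-east-1       # assumed region
        port: 9100              # default node_exporter port
    relabel_configs:
      # Use the EC2 Name tag as the instance label (illustrative rule)
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      # Keep only instances carrying an assumed "monitored" tag
      - source_labels: [__meta_ec2_tag_monitored]
        regex: 'true'
        action: keep
```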
The same configuration works as expected on Prometheus 1.8.
Are there some tweaks that need to be done on Prometheus 2.0 (like there were for Prometheus 1.x) for the storage engine to be able to handle this many targets?
The disk where the data is stored isn't full; it's only 6% used, with around 500 GB free.
We get the following errors in the Prometheus logs: