Plan compaction: too many open files #3875
Comments
You need to increase your file ulimit. It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
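For readers hitting the same error: raising the limit usually means adjusting both the shell session limit and, when Prometheus runs as a daemon, the service unit itself. A minimal sketch on Linux (the systemd unit name and drop-in path are assumptions about your deployment, not anything from this thread):

```shell
# Show the current per-process soft limit on open files
ulimit -n

# Raise the soft limit for this shell session up to the hard limit
ulimit -Sn "$(ulimit -Hn)"

# For a systemd-managed Prometheus, a drop-in such as
# /etc/systemd/system/prometheus.service.d/limits.conf (path assumed) with:
#   [Service]
#   LimitNOFILE=65536
# followed by `systemctl daemon-reload && systemctl restart prometheus`
# applies the limit to the service, independent of any shell settings.
echo "soft limit is now $(ulimit -Sn)"
```

Note that `ulimit` only affects the current shell and its children; a service started by systemd never sees it, which is why the drop-in is needed for the daemon case.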
brian-brazil closed this on Feb 21, 2018
One more good entry for the FAQ :) @brian-brazil I haven't decided what to do with the FAQ page yet. The wiki is so convenient to edit that I will use it for now. It also seems the FAQ on the website and the FAQ for GitHub issues target different audiences.
lock bot commented on Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
ryanash999 commented on Feb 21, 2018 (edited)
What did you do?
Upgraded Prometheus from v1.8.2 to v2.1.0
What did you expect to see?
We had remote read configured, with both instances running locally. This migration plan worked fine in lower environments, but once we hit our upper environments with more data, we started seeing this on numerous servers.
What did you see instead? Under which circumstances?
We are seeing the error below regarding 'plan compaction' and 'too many open files'.
Environment
System information:
Linux 3.10.0-693.11.6.el7.x86_64 x86_64
Prometheus version:
We have 35d retention
I have tried setting '--storage.tsdb.max-block-duration=1d'
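Before tuning flags, it can help to compare the process's actual descriptor count against its limit. A rough check on Linux via `/proc` (looking the PID up with `pgrep -o prometheus` is an assumption about your deployment; the current shell's PID is used below as a stand-in so the snippet is self-contained):

```shell
# Count open file descriptors for a process via /proc (Linux-specific).
# For a real check, substitute the Prometheus PID:
#   pid=$(pgrep -o prometheus)
pid=$$
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "process $pid has $fd_count open file descriptors (limit: $(ulimit -n))"
```

If the count sits close to the limit during compaction, raising the ulimit (rather than only shrinking block duration) is the more direct fix.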