Delete old indices when disk space limit has been exceeded for all cluster #573
Comments
This will be possible in a future release, but only for newer versions of Elasticsearch, because Curator will make use of the Field Stats API.
Thanks for your quick response. It would be really nice to have this feature.
Hi @untergeek, sorry to dig up an old issue. I tried searching through the Curator issues, the commits you mentioned above, and the Curator documentation, and I still can't understand how to delete the oldest indices based on an overall cluster size (accumulated size of all indices) threshold. I have a cluster I want to keep under 25.5 TB, deleting the oldest indices until the overall cluster size is below that threshold. I'm guessing I just haven't understood the documentation correctly; I would appreciate it if you could shed some light on this and how I can address it. I can script around it for now, but would love to use the official toolkit. @sqshq Did you manage to get this working?
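The "script around it" approach mentioned above can be sketched as a small selection routine: given each index's name, size, and creation time (e.g. gathered from the `_cat/indices` API), pick the oldest indices to delete until the cumulative size fits under the threshold. This is an illustrative sketch, not Curator's own logic; the data layout and function name are assumptions.

```python
def indices_to_delete(indices, max_total_bytes):
    """Return names of the oldest indices whose removal brings the
    cumulative size down to at most max_total_bytes.

    `indices` is a list of (name, size_in_bytes, creation_timestamp)
    tuples, assumed to be collected from the cluster beforehand.
    """
    total = sum(size for _, size, _ in indices)
    to_delete = []
    # Walk indices oldest-first (smallest creation timestamp).
    for name, size, _ in sorted(indices, key=lambda i: i[2]):
        if total <= max_total_bytes:
            break
        to_delete.append(name)
        total -= size
    return to_delete

# Example with made-up sizes (bytes) and epoch timestamps:
indices = [
    ("logstash-2016.01.01", 400, 1451606400),
    ("logstash-2016.01.02", 500, 1451692800),
    ("logstash-2016.01.03", 300, 1451779200),
]
print(indices_to_delete(indices, 900))  # → ['logstash-2016.01.01']
```

The actual deletions would then be issued against the cluster (e.g. `DELETE /<index>`) for each returned name.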
@berglh Please address usage questions to https://discuss.elastic.co/c/elasticsearch where I, other Elastic engineers, and community members can help answer your question, and leave it in a place where others can find it later.
Hello,
As far as I can see, at the moment we have two options for performing an index delete operation:
But there is no option to delete the oldest index when we actually run out of disk space on the machines.
For example, consider a two-node cluster with 1 TB of disk space per node. Logstash creates a new index every day. In a standard situation, it is fine to store indices for 30 days. But if DEBUG logging was switched on for a couple of days, we can simply run out of space.
I think it might be helpful to have a `cluster-disk-space` parameter (e.g. 2 TB) that could be used to delete the oldest logstash indices once the space consumed across all nodes exceeds that threshold. Is this possible, or am I missing something?
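For reference, later Curator releases (4.x-style action files) ship a `space` filtertype that approximates this request: it marks indices for deletion once their combined size exceeds a gigabyte threshold, optionally removing the oldest first. A hedged sketch, assuming a `logstash-` prefix and the 2 TB (2048 GB) budget from the example above; note that, to my understanding, the size accounting is based on index size as reported by the cluster, not raw disk usage per node:

```yaml
actions:
  1:
    action: delete_indices
    description: >-
      Delete the oldest logstash- indices once their combined size
      exceeds 2048 GB (disk_space is specified in gigabytes).
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: space
      disk_space: 2048
      use_age: True
      source: creation_date
```

`use_age` with `source: creation_date` makes the filter drop the oldest indices first rather than sorting by name.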