Delete old indices when the disk-space limit has been exceeded for the whole cluster #573

Closed

sqshq opened this issue Feb 28, 2016 · 5 comments

@sqshq commented Feb 28, 2016

Hello,

As far as I can see, at the moment there are two options for deleting indices:

  • delete indices that are older than a given number of days
  • delete indices whose disk space allocation exceeds a given number of gigabytes

But there is no option to delete the oldest indices when the machines actually run out of disk space.

For example, consider a two-node cluster with 1 TB of disk space per node. Logstash creates a new index every day. Normally it is fine to store indices for 30 days, but if DEBUG logging is switched on for a couple of days, we can simply run out of space.

I think it would be helpful to have a cluster-disk-space parameter (2 TB in this example) that deletes the oldest logstash indices when the space consumed across all nodes exceeds that threshold.

Is this possible, or am I missing something?

@untergeek (Member) commented

This will be possible in a future release, but only for newer versions of Elasticsearch. This is because Curator will make use of the Field Stats API.
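For context, the Field Stats API reports per-index minimum and maximum values for a field, so Curator can learn each index's true age from its @timestamp range instead of guessing from index names. A rough sketch of such a call via elasticsearch-py (illustrative only; how Curator will actually use the API may differ):

```python
# Sketch: ask for the @timestamp range of every logstash index.
# level='indices' returns stats per index rather than cluster-wide.
from elasticsearch import Elasticsearch

es = Elasticsearch(['localhost:9200'])
stats = es.field_stats(index='logstash-*', fields='@timestamp',
                       level='indices')
for index, data in stats['indices'].items():
    ts = data['fields']['@timestamp']
    print(index, ts['min_value_as_string'], ts['max_value_as_string'])
```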

@sqshq (Author) commented Feb 28, 2016

Thanks for your quick response. It would be really nice to have this feature.

@untergeek (Member) commented

This should be addressed by #595 and #596 with Curator 4.0 imminent.
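As a rough illustration of the intended shape, something like this should become possible with the 4.0 Python API (a sketch only; names such as IndexList.filter_by_space, disk_space, and DeleteIndices are assumptions from the 4.x design and may differ in the release):

```python
# Sketch, assuming the Curator 4 Python API: cap cumulative logstash
# index size at 2 TB, deleting oldest-first beyond that line.
import elasticsearch
import curator

client = elasticsearch.Elasticsearch(['localhost:9200'])

ilo = curator.IndexList(client)
# Work only on daily logstash indices.
ilo.filter_by_regex(kind='prefix', value='logstash-')
# Keep the newest indices up to 2048 GB of cumulative size; everything
# past that line stays in the list and is deleted below.
# use_age=True sorts by creation_date rather than by index name.
ilo.filter_by_space(disk_space=2048.0, use_age=True, source='creation_date')

curator.DeleteIndices(ilo).do_action()
```

Note that a space filter can only see index sizes as Elasticsearch reports them, not per-node disk usage, so treat the threshold as a cluster-wide cap on summed index size.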

If you feel this is in error, please feel free to reopen this ticket or open another.

@berglh commented Nov 22, 2016

Hi @untergeek,

Sorry to dig up an old issue. I have searched through the Curator issues, the commits you mentioned above, and the Curator documentation, and I still can't work out how to delete the oldest indices based on an overall cluster-size threshold (the accumulated size of all indices). I want to keep a cluster under 25.5 TB by deleting the oldest indices until the overall size is below that threshold.

I'm guessing I just haven't understood the documentation correctly; I'd appreciate it if you could shed some light on how to address this. I can script around it for now (roughly the shape sketched below), but I'd love to use the official toolkit.
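For reference, this is roughly the workaround I have in mind (an untested sketch with elasticsearch-py; note that store.size from _cat/indices includes replicas, and the 25.5 TB figure is just my cluster's cap):

```python
# Untested sketch: delete the oldest logstash-* indices until the summed
# store size drops below a byte threshold.
from elasticsearch import Elasticsearch

THRESHOLD_BYTES = int(25.5 * 1024 ** 4)  # 25.5 TB cap

es = Elasticsearch(['localhost:9200'])

rows = es.cat.indices(index='logstash-*', h='index,store.size',
                      bytes='b', format='json')
# Daily logstash-YYYY.MM.DD names sort chronologically as plain strings.
indices = sorted((row['index'], int(row['store.size'] or 0)) for row in rows)

total = sum(size for _, size in indices)
for name, size in indices:  # oldest first
    if total <= THRESHOLD_BYTES:
        break
    es.indices.delete(index=name)
    total -= size
```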

@sqshq Did you manage to get this working?

@untergeek (Member) commented

@berglh Please address usage questions to https://discuss.elastic.co/c/elasticsearch where I, other Elastic engineers, and community members can help answer your question, and leave it in a place where others can find it later.
