I run an ELK stack and keep the last 30 days' worth of indices open for searching with Kibana. I automatically close any indices older than 30 days using Curator, which works fine.
However, I'm looking to build in automation for keeping on top of disk space usage, and I've found that when I run
/usr/local/bin/curator --host localhost --prefix logstash- -C space -g 200
it seems to take only the currently open indices into account. This implies it will run into issues as I approach the 200 GB limit I've set, since the open indices alone will never be that large.
Is this intended behaviour, or does something need altering so that closed indices are also accounted for when running a space-based cleanup?
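For illustration, the behaviour I'm describing could be sketched like this (a minimal mock of space-based pruning logic, not Curator's actual code; the index data, function name, and `include_closed` flag are all hypothetical). One plausible explanation is that closed indices report no size through the stats API, so a size-based walk silently skips them:

```python
# Hypothetical sketch of space-based pruning (NOT Curator's implementation).
# Each index is (name, size_gb, is_open), ordered newest first.

def indices_to_delete(indices, limit_gb, include_closed=False):
    """Return names of indices whose cumulative size exceeds limit_gb."""
    total = 0.0
    doomed = []
    for name, size_gb, is_open in indices:
        if not is_open and not include_closed:
            # Closed indices are invisible to the size accounting,
            # so they never count toward the limit and never get deleted.
            continue
        total += size_gb
        if total > limit_gb:
            doomed.append(name)
    return doomed

indices = [
    ("logstash-2015.03.10", 8.0, True),
    ("logstash-2015.03.09", 7.0, True),
    ("logstash-2015.02.01", 9.0, False),  # closed, older than 30 days
]

# Only open indices counted: the closed 9 GB index is ignored entirely.
print(indices_to_delete(indices, limit_gb=10))
# Counting closed indices as well would also flag the old closed index.
print(indices_to_delete(indices, limit_gb=10, include_closed=True))
```

With only open indices counted, disk usage from closed indices grows unbounded even though the open set stays under the limit, which is exactly the problem I'm anticipating.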