Add ability to drain a pool #11990
Comments
This is currently a work in progress and will be available in future releases.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 15 days if no further activity occurs. Thank you for your contributions.
I would still like this feature.
Any ETA or current status on this? Is there a branch to which one could contribute? A workaround would be to start a completely new MinIO cluster and replicate the buckets there, but that roughly doubles server costs, since two separate clusters must stay online for some time.
@Butzlabben #12757
This is already added and merged. |
A MinIO cluster consists of one or more pools. Over time, it is typical to expand a cluster by adding additional pools.
Consider the following example:
Initial startup command (hostnames and drive counts below are illustrative):
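```sh
# first pool: 4 hosts with 4 drives each (hostnames and paths illustrative)
minio server http://host{1...4}/export{1...4}
```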
Startup command after adding larger hosts 5...8 (again illustrative):
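```sh
# second, larger pool appended: hosts 5...8, assumed here to have 8 drives each
minio server http://host{1...4}/export{1...4} http://host{5...8}/export{1...8}
```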
When it is time to retire hardware, it is important to be able to migrate the data in parallel while keeping the cluster online.
Currently, in the example above, buckets would be spanned across both the original hosts 1...4 and the new hosts 5...8. There is no way to tell MinIO, "Please empty hosts 1...4 because I intend to retire them soon."
As a solution, consider the following command as a possibility:
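```sh
# hypothetical command proposed here; the subcommand name and syntax are not final
mc admin pool drain http://host{1...4}/export{1...4}
```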
Discussion
An argument for data-rate throughput or priority should be available so that the drain can run in the background while the cluster continues to provide high-quality service to consumers.
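For example, the drain command could take a rate-limit flag (the flag name below is hypothetical):

```sh
# --rate-limit is a hypothetical flag: cap background drain traffic
# so that foreground client requests keep their quality of service
mc admin pool drain --rate-limit 200MiB/s http://host{1...4}/export{1...4}
```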
MinIO would then either exit with code 0, indicating that the pool has begun draining, or print an error message and exit with code 1.
It is important that pool draining persist through cluster reboots (or, perhaps preferably, that a rebooted cluster come back up as "draining paused", which would be a read-only mode).
It is important that pool draining is guaranteed to finish: new objects should be directed to the remaining online pools so that the drain can never fall behind incoming writes.
It is important that pool draining take place in parallel, using all of the network interfaces of the source and destination servers.
In addition, consider the following commands as food for thought:
- `mc admin pool pause-drain http://host{1...4}/export{1...4}` - pause the draining process (for example, to mitigate a performance problem)
- `mc admin pool resume-drain http://host{1...4}/export{1...4}` - self-explanatory, but provide the arguments for tuning the rate of draining here
- `mc admin pool status http://host{1...4}/export{1...4}` - shows the status of the pool ("operational", "draining for X minutes: Y% (Z GiB/TiB/PiB & ## objects) remaining", "draining paused: Y% (Z GiB/TiB/PiB & ## objects) remaining", "maintenance mode")
- `mc admin pool enter-maintenance http://host{1...4}/export{1...4}` - put the pool into maintenance mode (read-only)
- `mc admin pool exit-maintenance http://host{1...4}/export{1...4}` - the opposite of enter-maintenance
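For illustration, the status command might report something like this (all values hypothetical):

```sh
mc admin pool status http://host{1...4}/export{1...4}
# hypothetical output:
#   draining for 42 minutes: 63% (1.8 TiB & 2413051 objects) remaining
```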