[bug:1784402] storage.reserve ignored by self-heal so that bricks are 100% full #869
Comments
Time: 20191224T05:12:13
Thank you for your contributions. Closing this issue as there was no update since my last update on the issue. If this issue is still valid, feel free to reopen it.
URL: https://bugzilla.redhat.com/1784402
Creator: david.spisla at iternity
Time: 20191217T11:16:35
Created attachment 1645849
Gluster volume info and status, df -hT, heal info, and logs of glfsheal and all related bricks
Description of problem:
Setup: 3-node VMware cluster (2 storage nodes and 1 arbiter node), distribute-replica 2 volume with 1 arbiter brick per replica set (see the attached file for the detailed configuration).
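For reference, a volume with this layout could be created along these lines (a minimal sketch; the volume name, hostnames, and brick paths are placeholders, not taken from the report):

    # 2 storage nodes (node1, node2) + 1 arbiter node (arb);
    # every third brick in the list becomes the arbiter brick of its replica set
    gluster volume create myvol replica 3 arbiter 1 \
        node1:/gluster/brick1 node2:/gluster/brick1 arb:/gluster/brick1 \
        node1:/gluster/brick2 node2:/gluster/brick2 arb:/gluster/brick2
    gluster volume start myvol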
Version-Release number of selected component (if applicable):
Gluster FS v5.10
How reproducible:
Steps to Reproduce:
Actual results:
storage.reserve was ignored and all bricks became 100% full within a few seconds. All brick processes died, the volume could no longer be mounted, and no heal could be triggered.
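In that state the fill level and the dead brick processes can be confirmed with standard tooling (illustrative only; myvol is a placeholder volume name):

    df -hT                          # brick filesystems report 100% used
    gluster volume status myvol     # brick processes shown as offline
    gluster volume heal myvol info  # fails while the bricks are down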
Expected results:
The self-heal process should be blocked by storage.reserve, the brick processes should keep running, and the volume should remain accessible.
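For context, storage.reserve is a volume option enforced by the posix translator on each brick: once free space falls below the configured percentage, the brick is supposed to refuse further writes with ENOSPC. It is set and inspected per volume (sketch; myvol and the value 3 are placeholders):

    gluster volume set myvol storage.reserve 3   # keep 3% of each brick free
    gluster volume get myvol storage.reserve     # show the active value

The title of this report implies that self-heal traffic bypasses this check, so internal heal writes can fill a brick past the reserve.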
Additional info:
See the attached file.
The above scenario was not only reproduced on a VM cluster; we could also observe it on a real hardware cluster.