example of a good issue report: #1005
example of a bad issue report: #1008
Describe the bug
Before issue #2188 was resolved, volumes were actively overflowing. After a successful cleanup followed by a vacuum, a large number of empty volumes remain.
If these volumes are not quickly switched to read-only mode and then deleted manually, they begin to fill with data again, which leads to the situation in the screenshots below: a large number of volumes are poorly utilized, and I see no way to compact the storage.
System Setup
Writes to the cluster go through S3 from a gitlab-ci pipeline (similar to the gitlab-amazon compatibility setup).
Expected behavior
A more transparent way to delete specific volumes. Today, to be completely sure the procedure is correct and safe, you have to switch every empty volume on every node to read-only mode and then delete it consistently everywhere. This is highly manual work that invites oversights and errors. It would be extremely useful to have a command that takes a volume id, switches it to read-only mode, verifies that it is empty on all nodes, and then quietly deletes it.
When no free volumes are left, a way to compact the data on existing volumes, freeing and deleting the unnecessary ones.
Both would allow better management of the available space in the cluster and remove the human factor from servicing specific volumes in large clusters.
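The manual procedure described above could at least be scripted. The sketch below is a dry run: it only prints the `weed shell` commands that would retire one volume id across a list of volume servers, so the output can be reviewed before anything is executed. The `volume.mark` and `volume.delete` command names and flags are assumptions based on current `weed shell` builds; verify them with `help` in your version first.

```shell
#!/bin/sh
# Dry-run sketch: emit the weed shell commands that would retire one
# volume id on every listed volume server. Nothing is executed here.
# NOTE: the volume.mark / volume.delete names and flags are assumptions;
# check `help` inside `weed shell` for your build before piping this in.

print_retire_commands() {
  vid="$1"; shift
  # First pass: freeze the volume everywhere so it stops taking writes.
  for node in "$@"; do
    printf 'volume.mark -node %s -volumeId %s -readonly\n' "$node" "$vid"
  done
  # Second pass: delete the (manually verified empty) volume everywhere.
  for node in "$@"; do
    printf 'volume.delete -node %s -volumeId %s\n' "$node" "$vid"
  done
}

print_retire_commands 271 10.0.0.1:8080 10.0.0.2:8080
```

Verifying that the volume really is empty on every node between the two passes still has to be done by hand (e.g. from `volume.list` output), which is exactly the gap this request describes.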
Screenshots
Immediately after the cleanup and vacuum:
A short time later, after the next build-and-deploy cycle:
Additional context
# cassandra -v
3.11.6
# weed version
version 8000GB 2.58 297b412 linux amd64