[BUG] Disk eviction not doing anything #2995
Comments
I tried something different this morning: I disabled scheduling on a node and enabled eviction request, and the node started removing the replicas. I could do my migration that way, but it would be nice to have this feature working at the disk level.
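For reference, the node-level workaround described above can also be applied through the Longhorn Node CR instead of the UI. A minimal sketch, assuming `kubectl` access to the cluster, the default `longhorn-system` namespace, and a hypothetical node named `node-1`:

```shell
# Disable scheduling on the node, then request eviction of its replicas.
# "node-1" is a placeholder; substitute the actual node name.
kubectl -n longhorn-system patch nodes.longhorn.io node-1 --type merge \
  -p '{"spec": {"allowScheduling": false, "evictionRequested": true}}'
```

Once the migration completes, the same fields can be flipped back to re-enable scheduling on the node.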
Pre Ready-For-Testing Checklist

Workaround:

After the fix, the nightly tests should pass, and the case mentioned in the reproducing steps should work as expected.
Hey @vinid223, thanks for reporting back with your test result! I've set up a new test environment and retested with v1.2.1-rc2; the result is shown in the following gif. Steps:
The replicas start migrating themselves to the disk that is available for use. To narrow down the issue, I also turned off the other two nodes in this cluster, so only one node is running in this test.
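To observe the migration progress from the CLI rather than the UI, one can watch the replica custom resources while eviction runs. A sketch, assuming the default `longhorn-system` namespace:

```shell
# Watch replica objects as eviction proceeds; new replicas should appear
# on the schedulable disk while the evicted ones are removed.
kubectl -n longhorn-system get replicas.longhorn.io -o wide -w
```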
@kaxing This looks good to me. Thank you.
Describe the bug
I added new disks to each node in my cluster to replace the existing disks as the main storage.
For each node, I went into the disk settings, disabled scheduling on the old disks, and enabled eviction. The new disks are enabled for scheduling.
It's been hours and not a single replica has been moved. I can't see any relevant logs in the Longhorn UI. The new disks do work: when I force delete a replica, it rebuilds fine on the new disks.
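The disk settings changed in the UI above correspond to per-disk fields on the Longhorn Node CR, which can be useful for confirming the eviction request was actually recorded. A sketch, assuming `kubectl` access, the default `longhorn-system` namespace, and hypothetical node/disk names (`node-1`, `old-disk`):

```shell
# Inspect the node's disk map to find the disk key for the old disk.
kubectl -n longhorn-system get nodes.longhorn.io node-1 -o jsonpath='{.spec.disks}'

# Disable scheduling and request eviction on the old disk.
# "old-disk" is a placeholder for the actual disk key from the output above.
kubectl -n longhorn-system patch nodes.longhorn.io node-1 --type merge \
  -p '{"spec": {"disks": {"old-disk": {"allowScheduling": false, "evictionRequested": true}}}}'
```

If the spec shows `evictionRequested: true` but no replicas move, that matches the behavior reported in this issue.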
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Replicas are moved off the evicted disks. If they are not, logs or other information should be shown on the volume page, the node page, or the main page.
Log
If needed, I can generate a support bundle
Environment:
Additional context
N/A