vmstorage statefulset is not updated correctly when the vmBackup container has an extra "vmstorage-db" mount #366
Comments
Related logs in the operator:
I found out the cause: the helm chart updates the operator and vmbackup simultaneously. The 0.19.0 operator handles the vmbackup update first, then the 0.20.1 operator handles a second update, but it gets stuck in a pod CrashLoopBackOff and then hits a context timeout. operator/controllers/factory/k8stools/sts.go Line 137 in a34b662
operator/controllers/factory/k8stools/sts.go Lines 202 to 226 in a34b662
In this situation, the operator needs to update the statefulset first, then delete the pod.
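The fix described above is an ordering change: push the new spec to the statefulset first, and only then delete the crash-looping pods, so that the recreated pods pick up the new spec instead of crash-looping again on the stale one. A minimal sketch of that ordering, using hypothetical simplified types (not the operator's real client-go code):

```go
package main

import "fmt"

// Pod and StatefulSet are simplified stand-ins for the Kubernetes
// objects; the real operator works with corev1.Pod and appsv1.StatefulSet.
type Pod struct {
	Name      string
	CrashLoop bool
}

type StatefulSet struct {
	Name string
	Spec string // stands in for the full pod template spec
	Pods []Pod
}

// reconcile applies the corrected ordering: update the statefulset spec
// FIRST, then delete pods stuck in CrashLoopBackOff, so the controller
// recreates them from the already-updated spec.
func reconcile(sts *StatefulSet, newSpec string) {
	// 1. Update the statefulset spec before touching any pods.
	sts.Spec = newSpec

	// 2. Delete crash-looping pods; replacements use the new spec.
	kept := sts.Pods[:0]
	for _, p := range sts.Pods {
		if p.CrashLoop {
			fmt.Printf("deleting crash-looping pod %s\n", p.Name)
			continue
		}
		kept = append(kept, p)
	}
	sts.Pods = kept
}

func main() {
	sts := &StatefulSet{
		Name: "vmstorage",
		Spec: "old",
		Pods: []Pod{{Name: "vmstorage-0", CrashLoop: true}, {Name: "vmstorage-1"}},
	}
	reconcile(sts, "new")
	fmt.Println(sts.Spec, len(sts.Pods))
}
```

Doing it in the opposite order (delete pod, then update spec) recreates the pod from the stale spec, which is exactly the CrashLoop-then-context-timeout loop reported above.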
now operator performs update for statefulset before checking pod status with rolling update #366
Sorry for the delay; it should be fixed by the related PR. It was a regression in the statefulset update mechanism. Can you verify it with the docker image: Thanks for the investigation, it helps a lot!
The changes were included in the 0.21.0 release.
If the vmBackup container has an extra "vmstorage-db" mount path, and that extra mount is then removed from the vmcluster, the old "vmstorage-db" mount remains in the statefulset when upgrading from 0.19.0 to 0.20.1.

old vmcluster:
old vmstorage statefulset:
new vmcluster:
new vmstorage statefulset:
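To illustrate why a removed mount can survive the upgrade: if the operator merges the desired container spec into the live statefulset's volume mounts instead of rebuilding the list from the vmcluster spec, a mount that no longer appears in the spec is never pruned. A hedged sketch with a hypothetical helper and simplified stand-ins for corev1.VolumeMount (this is not the operator's actual code):

```go
package main

import "fmt"

// VolumeMount is a simplified stand-in for corev1.VolumeMount.
type VolumeMount struct {
	Name      string
	MountPath string
}

// pruneStaleMounts keeps only the live mounts that are still declared in
// the vmcluster spec. Merging live and spec mounts without this pruning
// step is what leaves the removed "vmstorage-db" mount behind.
func pruneStaleMounts(live, spec []VolumeMount) []VolumeMount {
	want := make(map[string]bool, len(spec))
	for _, m := range spec {
		want[m.Name] = true
	}
	var out []VolumeMount
	for _, m := range live {
		if want[m.Name] {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	// Live statefulset still carries the extra mount from the old spec.
	live := []VolumeMount{
		{Name: "vmstorage-db", MountPath: "/vmstorage-data"},
		{Name: "backup-creds", MountPath: "/creds"},
	}
	// New vmcluster spec no longer declares "vmstorage-db".
	spec := []VolumeMount{{Name: "backup-creds", MountPath: "/creds"}}

	fmt.Println("desired mounts:", pruneStaleMounts(live, spec))
}
```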