Object buckets gone after upgrade from v1.5.1 to v1.5.2 #6767
Comments
Having taken a little break and come back to this, I think this is the result of a bug that is fixed in v1.5.2. Creating new object bucket claims with Rook v1.5.2 should work properly. But when upgrading, the bug that existed before v1.5.2 is exacerbated/exposed. I think the fix for the upgrade scenario here is to fix more of the small issues in the lib-bucket-provisioner and make it overwrite secrets and configmaps that already exist rather than giving up and erroring out.
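To make the upgrade-path idea above concrete, below is a minimal sketch, not the actual lib-bucket-provisioner code, of an "overwrite instead of error out" path for a bucket's credentials Secret using client-go. The package, function name, and client wiring are illustrative; the same pattern would apply to the bucket's ConfigMap via `c.CoreV1().ConfigMaps(ns)`.

```go
package provisioner

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOrUpdateSecret creates the bucket credentials Secret; if one with the
// same name already exists (for example, left behind from before the upgrade),
// it overwrites the data instead of erroring out, so reconciliation can proceed.
func createOrUpdateSecret(ctx context.Context, c kubernetes.Interface, ns string, desired *corev1.Secret) error {
	_, err := c.CoreV1().Secrets(ns).Create(ctx, desired, metav1.CreateOptions{})
	if err == nil || !apierrors.IsAlreadyExists(err) {
		return err
	}
	// The Secret is already there: fetch it, copy the desired payload over, and update it.
	existing, getErr := c.CoreV1().Secrets(ns).Get(ctx, desired.Name, metav1.GetOptions{})
	if getErr != nil {
		return getErr
	}
	existing.Data = desired.Data
	existing.StringData = desired.StringData
	_, err = c.CoreV1().Secrets(ns).Update(ctx, existing, metav1.UpdateOptions{})
	return err
}
```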
Update to the latest lib bucket provisioner code.

Fixes rook#6650

Modifies CRD for objectbucketclaims to fix an additional bug where an ObjectBucket's 'ClaimRef' is lost due to the CRD validation being specified incorrectly.

Does not reintroduce bug rook#6767 from previous fix for rook#6650

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
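For context on the ClaimRef part of the commit, the ObjectBucket spec carries a reference back to the claim that owns it, roughly as sketched below. This is an abbreviated, illustrative rendering based on lib-bucket-provisioner's v1alpha1 API, not the exact upstream definition.

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// ObjectBucketSpec (abbreviated). The claimRef field links an ObjectBucket back
// to the ObjectBucketClaim that requested it. One way such a field can vanish
// is structural-schema pruning: if the CRD's validation schema does not declare
// the field (or allow unknown fields), the API server drops it on writes.
type ObjectBucketSpec struct {
	// StorageClassName names the storage class the bucket was provisioned from.
	StorageClassName string `json:"storageClassName,omitempty"`
	// ClaimRef points back at the bound ObjectBucketClaim; per the commit
	// message, this is the field being lost due to incorrect CRD validation.
	ClaimRef *corev1.ObjectReference `json:"claimRef,omitempty"`
}
```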
Is this a bug report or feature request?
Bug Report
I have a rook-ceph cluster running with CephFS, block storage, and an object store enabled. I set it up just recently with version v1.5.1, mostly using the example manifests.
After applying the v1.5.2 manifests, the S3 buckets are gone from radosgw, but the CRDs still exist.
It seems that the Rook operator got stuck in a loop of re-creating and deleting the buckets.
Expected behavior:
The upgrade to v1.5.2 applies without any issues, in particular without deleting any data.
How to reproduce it (minimal and precise):
- Upgrade an existing cluster with object bucket claims from v1.5.1 to v1.5.2.
- The operator deletes the buckets (`rook-ceph-delete-bucket`).
- `radosgw-admin bucket ls` no longer lists the buckets. The CRDs still exist.

File(s) to submit:
ceph-s3-crash.log (Operator log)
Environment:
- Kubernetes version (use `kubectl version`): v1.19.3
- Ceph status (use `ceph health` in the Rook Ceph toolbox): Healthy