
caCert is required in Recipe-generated BackupStorageLocation CR to access S3/MinIO #921

Closed
keslerzhu opened this issue Jun 9, 2023 · 6 comments


@keslerzhu

In a production environment, I have to configure the caCert property in the DPA so that Velero can access MinIO.
However, I found that the BackupStorageLocation generated by the Ramen Recipe does not include caCert.
I managed to edit the BackupStorageLocation once, add caCert, and make it work one time.
But that BackupStorageLocation was deleted very soon afterward, and any new BSL instance does not have caCert any more.
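
For reference, a rough sketch of what I need the generated BSL to look like (bucket and endpoint are placeholders; spec.objectStorage.caCert is the standard Velero field holding the base64-encoded CA bundle):

    apiVersion: velero.io/v1
    kind: BackupStorageLocation
    metadata:
      name: <vrg namespace>--<vrg name>--<i>--<group>--<s3profile>   # Ramen-generated name pattern
      namespace: openshift-adp
    spec:
      provider: aws
      objectStorage:
        bucket: <bucket>
        caCert: <base64-encoded CA bundle>      # the field missing from the generated BSL
      config:
        s3ForcePathStyle: "true"
        s3Url: https://minio.example.com:9000   # placeholder MinIO endpoint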


@hatfieldbrian (Collaborator) commented Jun 12, 2023

@keslerzhu until we can get you a fix, please try the following workaround to preserve the bsls so you can specify their caCerts:

  1. Wait up to VRG.spec.kubeObjectProtection.captureInterval, or 5 minutes if unspecified, for Ramen to create a bsl for each s3 store:
    watch -n1 kubectl get bsl -nopenshift-adp --show-labels
  2. Ctrl+C to terminate watch
  3. Quickly run the following command to remove the Ramen VRG owner labels from the bsls, replacing the angle-bracketed parameters with their values:
    for i in 0 1;do for group in <groups>;do for s3profile in <s3profiles>;do kubectl -n<velero namespace> label bsl/<vrg namespace>--<vrg name>--$i--$group--$s3profile ramendr.openshift.io/owner-name- ramendr.openshift.io/owner-namespace-name-;done;done;done
    For example:
    $ for i in 0 1;do for group in "";do for s3profile in minio-on-cluster1 minio-on-cluster2;do kubectl -nopenshift-adp label bsl/asdf--bb--$i--$group--$s3profile ramendr.openshift.io/owner-name- ramendr.openshift.io/owner-namespace-name-;done;done;done
    Error from server (NotFound): backupstoragelocations.velero.io "asdf--bb--0----minio-on-cluster1" not found
    Error from server (NotFound): backupstoragelocations.velero.io "asdf--bb--0----minio-on-cluster2" not found
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster1 labeled
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster2 labeled
  4. Repeat the above steps for the other "slot", i.e., 0 or 1, expecting the watch to show more bsls within captureInterval; or run the labeling command in a loop for 2x the captureInterval, as sketched after the example output below. For example:
    $ for i in 0 1;do for group in "";do for s3profile in minio-on-cluster1 minio-on-cluster2;do kubectl -nopenshift-adp label bsl/asdf--bb--$i--$group--$s3profile ramendr.openshift.io/owner-name- ramendr.openshift.io/owner-namespace-name-;done;done;done
    backupstoragelocation.velero.io/asdf--bb--0----minio-on-cluster1 labeled
    backupstoragelocation.velero.io/asdf--bb--0----minio-on-cluster2 labeled
    label "ramendr.openshift.io/owner-name" not found.
    label "ramendr.openshift.io/owner-namespace-name" not found.
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster1 labeled
    label "ramendr.openshift.io/owner-name" not found.
    label "ramendr.openshift.io/owner-namespace-name" not found.
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster2 labeled
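    As a loop, here is a minimal sketch assuming the default 5-minute captureInterval (so 600 seconds for 2x; adjust to yours). The "label not found" errors for already-unlabeled bsls are harmless:
    $ end=$((SECONDS+600)); while ((SECONDS<end)); do for i in 0 1;do for group in "";do for s3profile in minio-on-cluster1 minio-on-cluster2;do kubectl -nopenshift-adp label bsl/asdf--bb--$i--$group--$s3profile ramendr.openshift.io/owner-name- ramendr.openshift.io/owner-namespace-name-;done;done;done; sleep 5; done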
  5. Confirm the presence of the expected bsls. For example:
    $ kubectl get bsl -nopenshift-adp --show-labels
    NAME                               PHASE       LAST VALIDATED   AGE   DEFAULT   LABELS
    asdf--bb--0----minio-on-cluster1   Available   51s              35m             <none>
    asdf--bb--0----minio-on-cluster2   Available   51s              35m             <none>
    asdf--bb--1----minio-on-cluster1   Available   51s              36m             <none>
    asdf--bb--1----minio-on-cluster2   Available   51s              36m             <none>
  6. Specify a caCert for each bsl
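    For example, a sketch that patches the base64-encoded contents of a local ca.crt (a hypothetical file name; substitute your CA bundle) into Velero's spec.objectStorage.caCert field:
    $ for i in 0 1;do for group in "";do for s3profile in minio-on-cluster1 minio-on-cluster2;do kubectl -nopenshift-adp patch bsl/asdf--bb--$i--$group--$s3profile --type merge -p "{\"spec\":{\"objectStorage\":{\"caCert\":\"$(base64 -w0 ca.crt)\"}}}";done;done;done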
  7. Replicate the bsls on the other cluster, again replacing the angle-bracketed parameters with their values:
    for i in 0 1;do for group in <groups>;do for s3profile in <s3profiles>;do kubectl -n<velero namespace> get -oyaml bsl/<vrg namespace>--<vrg name>--$i--$group--$s3profile|kubectl --context <other cluster name> create -f-;done;done;done
    For example:
    $ for i in 0 1;do for group in "";do for s3profile in minio-on-cluster1 minio-on-cluster2;do kubectl -nopenshift-adp get -oyaml bsl/asdf--bb--$i--$group--$s3profile|kubectl --context cluster2 create -f-;done;done;done
    backupstoragelocation.velero.io/asdf--bb--0----minio-on-cluster1 created
    backupstoragelocation.velero.io/asdf--bb--0----minio-on-cluster2 created
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster1 created
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster2 created
  8. The user is responsible for removing the bsls when they are no longer needed, or for restoring the VRG owner labels so that Ramen deletes them. For example:
    $ for i in 0 1;do for group in "";do for s3profile in minio-on-cluster1 minio-on-cluster2;do kubectl -nopenshift-adp label bsl/asdf--bb--$i--$group--$s3profile ramendr.openshift.io/owner-name=bb ramendr.openshift.io/owner-namespace-name=asdf;done;done;done
    backupstoragelocation.velero.io/asdf--bb--0----minio-on-cluster1 labeled
    backupstoragelocation.velero.io/asdf--bb--0----minio-on-cluster2 labeled
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster1 labeled
    backupstoragelocation.velero.io/asdf--bb--1----minio-on-cluster2 labeled

@keslerzhu (Author)

@hatfieldbrian Thank you for the workaround.
Unfortunately (at least in our environment), it does not work as expected.

In our case, we can successfully unlabel the BSL shiopal--shiopal--1----site1 using this command:

for i in 0 1;do for group in "";do for s3profile in site1 site2;do kubectl -nopenshift-adp label bsl/shiopal--shiopal--$i--$group--$s3profile ramendr.openshift.io/owner-name- ramendr.openshift.io/owner-namespace-name-;done;done;done

But from the GUI, we can still see shiopal--shiopal--1----site1 being deleted and recreated, again and again.

I kept trying for about an hour, in vain.


@hatfieldbrian (Collaborator)

@keslerzhu I'm sorry it's not working in your environment. Will you please share the version of Ramen you're using? Perhaps a git commit identifier or image identifier?
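
For example, something like this should print the operator image (the deployment and namespace names here are my guess at a default install; adjust to wherever Ramen runs in your cluster):

    $ kubectl -n ramen-system get deploy ramen-dr-cluster-operator -o jsonpath='{.spec.template.spec.containers[*].image}'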

@hatfieldbrian (Collaborator)

I verified this workaround in a "good" environment. @keslerzhu is trying it in a "bad" environment where a failed backup results in deletion of the bsl, by name rather than by label. At this point I am abandoning the workaround and focusing on the fix. I intend to submit a PR and map it to a branch based off the downstream release-4.13 so Fusion can verify it.

@hatfieldbrian (Collaborator)

Proposed fix: #925

@hatfieldbrian (Collaborator)

Fixed by #925
