The PV, PVC and DV of a virtual machine are backed up in Velero restic mode, and problems occur during restore #117
kubevirt-bot: Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale

kubevirt-bot: Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten

kubevirt-bot: Rotten issues close after 30d of inactivity. /close

@kubevirt-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@UltimateJava, how are you taking a backup of the VM/VMI/DV through restic? We could not do it. We are installing as below — is anything missing?

velero install --provider aws --bucket velero --secret-file credentials-velero --backup-location-config region=us-east-1,s3ForcePathStyle="true",s3Url=http://10.233.54.48:9000 --use-node-agent --plugins velero/velero-plugin-for-aws:v1.1.0,quay.io/kubevirt/kubevirt-velero-plugin:v0.6.1 --use-volume-snapshots=false -n velero --uploader-type restic
I have the same issue. The problem lies with the dataSource (dataSourceRef) field: when the PVC is restored, it gets stuck in the Pending state because the referenced dataSource no longer exists.
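One possible workaround (a sketch only, not confirmed by this thread): before restoring, strip the `dataSource` and `dataSourceRef` fields from the backed-up PVC manifest so the restored PVC provisions as a fresh empty volume instead of referencing a VolumeSnapshot that no longer exists in the target cluster. The field names come from the core/v1 PersistentVolumeClaimSpec API; the example manifest values below are illustrative, modeled on the events in this issue.

```python
import json

def strip_data_source(pvc: dict) -> dict:
    """Remove spec.dataSource and spec.dataSourceRef from a PVC manifest dict.

    These fields make the provisioner look up a VolumeSnapshot (e.g. a
    cdi-tmp-* snapshot) that may not exist after a restic-based restore.
    """
    spec = pvc.get("spec", {})
    spec.pop("dataSource", None)
    spec.pop("dataSourceRef", None)
    return pvc

# Illustrative PVC manifest, shaped like the one from this issue's events.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "test-pvc-dvbrglpn", "namespace": "vm"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "rook-ceph-block",
        "dataSource": {
            "apiGroup": "snapshot.storage.k8s.io",
            "kind": "VolumeSnapshot",
            "name": "cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a",
        },
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

cleaned = strip_data_source(pvc_manifest)
print(json.dumps(sorted(cleaned["spec"].keys())))
# → ["accessModes", "resources", "storageClassName"]
```

The cleaned manifest can then be applied manually before the restic pod volume restore runs. Note that `dataSource` is immutable on an existing PVC, so this edit must happen on the exported manifest, not on a live object.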
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Normal Provisioning 86m (x44 over 3h39m) rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-786866b564-r5jm8_fd7a9b7e-c7cd-449f-b1a6-1436c8f19d1f External provisioner is provisioning volume for claim "vm/test-pvc-dvbrglpn"
Warning ProvisioningFailed 63m (x14 over 82m) rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-786866b564-r5jm8_c05622ec-0de5-4c69-88c9-8c55cd0d3ffe failed to provision volume with StorageClass "rook-ceph-block": error getting handle for DataSource Type VolumeSnapshot by Name cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a: error getting snapshot cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a from api server: volumesnapshots.snapshot.storage.k8s.io "cdi-tmp-513bb2a9-34b0-4f5e-8d89-761998a25d1a" not found
Normal Provisioning 3m24s (x30 over 82m) rook-ceph.rbd.csi.ceph.com_csi-rbdplugin-provisioner-786866b564-r5jm8_c05622ec-0de5-4c69-88c9-8c55cd0d3ffe External provisioner is provisioning volume for claim "vm/test-pvc-dvbrglpn"
Normal ExternalProvisioning 64s (x2982 over 12h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "rook-ceph.rbd.csi.ceph.com" or manually created by system administrator
test-dvbrglpn 2d22h WaitingForVolumeBinding False
What you expected to happen:
The PVC is restored successfully, and the virtual machine starts normally.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
CDI version (use kubectl get deployments cdi-deployment -o yaml):
Kubernetes version (use kubectl version):