Local persistent volume raises warnings during backup (No volume ID returned by volume snapshotter for persistent volume) #6269
Comments
The warning "No volume ID returned by volume snapshotter for persistent volume" suggests Velero failed to find a volume snapshotter to handle this PV. The VolumeSnapshotter is instantiated based on the provider of the volume's storage. It seems a local persistent volume is not a supported volume type for snapshots. This needs further investigation.
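One way to see why no snapshotter matches is to inspect the PV's volume source: a local PV carries a `local:` source stanza rather than a cloud-provider one (such as `awsElasticBlockStore` or `csi`), so no VolumeSnapshotter plugin claims it. A quick check, with an illustrative PV name:

```shell
# Print the volume source of the PV; for a local PV this prints the "local" stanza
kubectl get pv my-local-pv -o jsonpath='{.spec.local}'
```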
Thank you for taking the time to answer. According to the file system backup documentation:

This setup was also working perfectly fine a few versions ago, so it looks like a regression.
I'm using the AWS plugin to back up my data to an S3 bucket (it is the provider for my backup storage location). However, looking at its source code: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/109ba05302ff24dc61ec1b33121a0a6fff1b3b86/velero-plugin-for-aws/volume_snapshotter.go#L281 it seems only CSI or AWS EBS PV types are supported, not local volumes. Otherwise, I'm happy to create a PR to add support for them. Thanks
The volume snapshotter doesn't interact with S3 at all; it is designed to create native snapshots of EBS volumes. The AWS plugin is still needed if you're storing your backups in an S3 bucket, but that uses the object store plugin of velero-plugin-for-aws. If you're not using EBS volumes, you don't need to create a VolumeSnapshotLocation. If your data is not on EBS or a CSI volume, then you need to use the file system backup type, which stores volume data in the bucket using restic or kopia.
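The file system backup described above can be enabled per backup or opted into per pod. A minimal sketch — the backup name, namespace, pod name, and volume name are all illustrative:

```shell
# Back up all pod volumes with file system backup (restic/kopia) by default
velero backup create my-backup --default-volumes-to-fs-backup

# Or opt in per pod by annotating it with the names of the volumes to back up
kubectl -n my-namespace annotate pod/my-pod \
  backup.velero.io/backup-volumes=data
```

The per-pod annotation is useful when only some volumes should go through restic/kopia while others are still snapshotted natively.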
Thank you for your answer. I have both CSI and local-storage volumes, so I reckon I still need a VolumeSnapshotLocation. I always thought those warnings were the reason some of my volumes were not backed up, but investigating a bit more closely, I'm not quite sure that's the case.

What surprises me is that I deleted Velero and all associated resources, making sure nothing was left over. Full error from the logs:

I'm using Velero v1.11. I couldn't find any similar issue. Any idea?
The PVB errors seem to suggest that you had a file system backup in progress when the node agent pod crashed or restarted. If a restart happens during a backup or restore, the operation will not continue where it left off; it must be restarted. Since the Velero backup picked up that message, the Velero pod isn't restarting, but something caused your node agent pod to restart in the middle of a pod volume backup. You probably need to look into why it is restarting. The VolumeSnapshotLocation isn't relevant to this error, since it is only used by VolumeSnapshotter plugins that back up PVs via native snapshots. It isn't used by the CSI plugin if you're using that plugin for CSI volume backups, and it isn't used for restic or kopia PodVolumeBackups.
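If the node agent is indeed restarting, its restart count and last termination state will show it. A sketch of the usual checks — the label selector assumes the default labels on the node-agent daemonset pods, and the pod name is a placeholder:

```shell
# List node agent pods and their restart counts
kubectl -n velero get pods -l name=node-agent

# Inspect the last termination state (OOMKilled, crash, eviction, ...)
kubectl -n velero describe pod <node-agent-pod>

# Logs from the previous (crashed) container instance
kubectl -n velero logs <node-agent-pod> --previous
```

An `OOMKilled` last state in particular is common when large pod volume backups exceed the node agent's memory limits.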
This issue is stale because it has been open 60 days with no activity. Remove the stale label or comment, or this will be closed in 14 days. If a Velero team member has requested logs or more information, please provide the output of the shared commands.
Thank you for your extensive answer! I think this issue can be closed. |
What steps did you take and what happened:
I have a persistent volume defined:
and the following associated PVC:
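The manifests themselves were elided above; for context, a local persistent volume and matching claim of this general shape look roughly like the following — the names, capacity, storage class, host path, and node are all illustrative, not the reporter's actual values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv            # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1      # illustrative host path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]   # illustrative node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
```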
I then create my backup with the usual command:
velero backup create my-backup
Once done, looking at the logs, I see this:
Looking at

kubectl -n velero get backup my-backup -oyaml

I do have a volumeSnapshotLocation properly defined. And

kubectl -n velero get volumesnapshotlocations -oyaml

does give back a default volumeSnapshotLocation.

This seems to occur only with local persistent storage, not with a pvc in a Ceph blockpool.

What did you expect to happen:
I expected this to work without warnings, as it used to previously.
Environment:

Velero version (use velero version): v1.11

Vote on this issue!
This is an invitation to the Velero community to vote on issues, you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.