local-volume-provider
is a Velero plugin that stores backups directly on native Kubernetes volume types instead of using Object or Blob storage APIs.
It also supports volume snapshots with Restic. It is designed to serve small and air-gapped clusters that may not have direct access to Object Storage APIs like S3.
The plugin leverages the existing Velero service account credentials to mount volumes directly to the velero/node-agent pods. This plugin is also heavily based on Velero's example plugin.
- Hostpath volumes are not designed to work on multi-node clusters unless the underlying host mounts point to shared storage. Volume snapshots performed in this configuration without shared storage can result in fragmented backups.
- Customized deployments of Velero (custom RBAC, container names) may not be supported.
- When BackupStorageLocations are removed, they are NOT cleaned up from the Velero and Node Agent pods.
- This plugin relies on a sidecar container at runtime to provide signed-url access to storage data.
Below is a listing of plugin versions and the Velero versions they are compatible with.
| Plugin Version | Velero Version |
|---|---|
| v0.5.x | v1.10.x |
| v0.4.x | v1.6.x - v1.9.x |
To deploy the plugin image to a Velero server:
- Make sure Velero is installed, optionally with Node Agent/Restic if Volume Snapshots are needed.
- (For NFS or HostPath volumes) Prepare the volume target.
  - The source directory must already exist prior to creating the BackupStorageLocation.
  - The directory must be writable by the Velero container, which runs as non-root by default, or by the same UID/GID given in the plugin configuration. See the Customization section below for how to configure these settings.
- Make sure the plugin images are pushed to a registry that is accessible to your cluster's nodes.
The plugin ships as a single combined image containing both the plugin and fileserver binaries:
- replicated/local-volume-provider:v0.3.3
- Run `velero plugin add replicated/local-volume-provider:v0.3.3`. This will re-deploy Velero with the plugin installed.
- Create a BackupStorageLocation according to the schemas below. The plugin will attach the volume to Velero (and Node Agent/Restic if available). It will also add a fileserver sidecar to the Velero pod if not already present. This sidecar is used to serve assets like backup logs directly to consumers of the Velero API (e.g. the Velero CLI uses these logs to print backup status info).
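The volume-preparation step above can be sketched for a HostPath target as follows, run on each node. This uses the `/tmp/snapshots` path and the 1001 UID/GID from the examples in this README; the `chown` needs root on the node, so it is shown commented:

```shell
# Create the backup target before creating the BackupStorageLocation
mkdir -p /tmp/snapshots

# If using the securityContext settings from the Customization section,
# match ownership to that UID/GID (requires root), e.g.:
#   chown 1001:1001 /tmp/snapshots

# Grant the owner and group full access to the target
chmod 770 /tmp/snapshots
```

Adjust the path and IDs to match your own BackupStorageLocation and ConfigMap.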
You can configure certain aspects of the plugin's behavior by customizing the following ConfigMap spec and adding it to the Velero namespace. It is based on the Velero Plugin Configuration scheme.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-provider-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    replicated.com/nfs: ObjectStore
    replicated.com/hostpath: ObjectStore
data:
  # Useful for local development
  fileserverImage: ttl.sh/<your user>/local-volume-provider:12h
  # Helps to lock down file permissions to known users/groups on the target volume
  securityContextRunAsUser: "1001"
  securityContextRunAsGroup: "1001"
  securityContextFsGroup: "1001"
  # If provided, volumes not named here are cleaned up from the Velero and Node Agent pods
  preserveVolumes: "my-bucket,my-other-bucket"
```
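A sketch of putting this configuration in place. The filename is illustrative, and restarting the Velero deployment to pick up the change is an assumption (the plugin may also read the ConfigMap on the next BackupStorageLocation sync):

```shell
# Apply the plugin ConfigMap to the Velero namespace
kubectl apply -f local-volume-provider-config.yaml

# Restart Velero so the plugin re-reads its configuration (assumption;
# the config may also be picked up on the next BackupStorageLocation sync)
kubectl -n velero rollout restart deployment/velero
```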
The plugin can be removed with `velero plugin remove replicated/local-volume-provider:v0.3.3`. This does not detach/delete any volumes that were used during operation. These can be removed manually using `kubectl edit`, or by re-deploying Velero (`velero uninstall` and `velero install ...`).
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: replicated.com/hostpath
  objectStorage:
    # This corresponds to a unique volume name
    bucket: hostpath-snapshots
  config:
    # This path must exist on the host and be writable outside the group
    path: /tmp/snapshots
    # Must be provided if you're using Restic; [default mount] + [bucket] + [prefix] + "restic"
    resticRepoPrefix: /var/velero-local-volume-provider/hostpath-snapshots/restic
```
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: replicated.com/nfs
  objectStorage:
    # This corresponds to a unique volume name
    bucket: nfs-snapshots
  config:
    # Path and server on the share
    path: /tmp/nfs-snapshots
    server: 1.2.3.4
    # Must be provided if you're using Restic; [default mount] + [bucket] + [prefix] + "restic"
    resticRepoPrefix: /var/velero-local-volume-provider/nfs-snapshots/restic
```
NOTE: You can't test the PVC object storage plugin on clusters without ReadWriteMany (RWX) storage providers. This means no K3s, Codeserver, or Codespaces.
To build the plugin and fileserver, run:

```shell
$ make plugin
$ make fileserver
```

To build the combined image, run:

```shell
$ make container
```

This builds an image tagged as `replicated/local-volume-provider:main`. If you want to specify a different name or version/tag, run:

```shell
$ IMAGE=your-repo/your-name VERSION=your-version-tag make container
```

To build a temporary image for testing, run:

```shell
$ make ttl.sh
```

This builds an image tagged as `ttl.sh/<unix user>/local-volume-provider:12h`.
Make sure the plugin is configured to use the correct security context and development images by applying the optional ConfigMap from the Customization section (edit it first with your username).
- Install Velero without the plugin (useful for testing the `velero` install/remove plugin commands):

  Velero 1.10+:

  ```shell
  velero install --use-node-agent --uploader-type=restic --use-volume-snapshots=false --namespace velero --no-default-backup-location --no-secret
  ```

  Velero 1.6-1.9:

  ```shell
  velero install --use-restic --use-volume-snapshots=false --namespace velero --no-default-backup-location --no-secret
  ```

- Add the plugin:

  ```shell
  velero plugin add ttl.sh/<user>/local-volume-provider:12h
  ```

- Create the default BackupStorageLocation (assuming HostPath here):

  ```shell
  kubectl apply -f examples/hostPath.yaml
  ```

  OR, with Velero v1.7.1+:

  ```shell
  velero backup-location create default --default --bucket my-hostpath-snaps --provider replicated.com/hostpath --config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic
  ```
Install Velero with the plugin configured to use host path by default:

Velero 1.10+:

```shell
velero install --use-node-agent --uploader-type=restic --use-volume-snapshots=false --namespace velero --provider replicated.com/hostpath --plugins ttl.sh/<username>/local-volume-provider:12h --bucket my-hostpath-snaps --backup-location-config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic --no-secret
```

Velero 1.6-1.9:

```shell
velero install --use-restic --use-volume-snapshots=false --namespace velero --provider replicated.com/hostpath --plugins ttl.sh/<username>/local-volume-provider:12h --bucket my-hostpath-snaps --backup-location-config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic --no-secret
```
NOTE: Works with Velero v1.7.1+ only.

To update a BackupStorageLocation (BSL) in an existing cluster with Velero, you must first delete the BSL and re-create it as follows (assuming you are using the BSL created by default):

```shell
velero plugin add ttl.sh/<user>/local-volume-provider:12h
velero backup-location delete default --confirm
velero backup-location create default --default --bucket my-hostpath-snaps --provider replicated.com/hostpath --config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic
```
- The Velero pod is stuck initializing:
  - Verify the volume exists on the host. Create it if it doesn't, then delete the Velero pod.
- [HostPath Only] The Velero pod is running, but the backup storage location is unavailable:
  - Verify the path on the host is writable by the Velero pod. The Velero pod runs as user `nobody`.
- Backups are partially failing and you're using Restic:
  - Make sure you have defined `resticRepoPrefix` in your BackupStorageLocation config. It should point to the `restic` directory mountpoint in the Velero container.
  - Velero 1.10+: Delete your Backup Repository CR with `kubectl -n velero delete backuprepositories.velero.io default-default-<ID>` to have it regenerated.
  - Velero 1.6-1.9: Delete your Restic Repository CR with `kubectl -n velero delete resticrepositories.velero.io default-default-<ID>` to have it regenerated.
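To check writability from inside the pod, a quick sketch. The mount point follows the [default mount] + [bucket] pattern shown earlier; the container name `velero` and the bucket `hostpath-snapshots` are assumptions to adjust for your setup:

```shell
# Try creating a file on the mounted backup volume from inside the Velero pod;
# a permission error here points at ownership/mode problems on the host path
kubectl -n velero exec deploy/velero -c velero -- \
  touch /var/velero-local-volume-provider/hostpath-snapshots/.write-test
```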