OpenNebula ZFS Storage Driver
The ZFS datastore driver lets OpenNebula use ZVOL volumes instead of plain files to hold virtual machine images.
To contribute bug patches or new features, you can use the GitHub Pull Request model. It is assumed that code and documentation are contributed under the Apache License 2.0.
- How to Contribute
- Support: OpenNebula user forum
- Development: OpenNebula developers forum
- Issues Tracking: Github issues (https://github.com/OpenNebula/addon-zfs/issues)
This add-on is compatible with OpenNebula 4.6+
- Password-less SSH access to an OpenNebula ZFS host (e.g. localhost)
OpenNebula ZFS Host
The oneadmin user should be able to execute ZFS-related commands with sudo password-lessly.
- Password-less sudo permission for the zfs command
- oneadmin needs to belong to the disk group (for KVM)
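The sudo rule above can be granted with a sudoers drop-in file. A minimal sketch, assuming the zfs binary lives at /sbin/zfs and the drop-in path /etc/sudoers.d/opennebula-zfs (both paths are assumptions; check your distribution):

```
# /etc/sudoers.d/opennebula-zfs -- illustrative; binary path is an assumption
oneadmin ALL=(ALL) NOPASSWD: /sbin/zfs
```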
There are some limitations that you have to consider, though:
- ZFS is not a cluster filesystem! Use it on a single host only. For multiple hosts you may use Ceph or opennebula-addon-iscsi with ZFS support.
To install the driver you have to copy these files:
Configuring the System Datastore
To use the ZFS drivers, you must configure the system datastore as shared. This system datastore will hold only the symbolic links to the block devices, so it will not take much space. See more details in the System Datastore Guide.
It will also be used to hold context images and disks created on the fly; these will be created as regular files.
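As a sketch, a shared system datastore could be registered like this (the name and file are illustrative; TM_MAD=shared is the relevant setting):

```
> cat system_ds.conf
NAME    = zfs_system
TYPE    = SYSTEM_DS
TM_MAD  = shared

> onedatastore create system_ds.conf
```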
Configuring ZFS Datastores
The first step to create a ZFS datastore is to set up a template file for it. The following table lists the supported configuration attributes. The datastore type is set by its drivers; in this case, be sure to add TM_MAD=zfs for the transfer mechanism (see below).
| Attribute | Description |
|-------------------|-------------|
| NAME | The name of the datastore |
| BRIDGE_LIST | The ZFS server host. Defaults to |
| DATASET_NAME | The top-level dataset name under which all volumes are created. Defaults to |
| ZFS_CMD | Path to the zfs binary. Defaults to |
| RESTRICTED_DIRS | Paths that cannot be used to register images. A space-separated list of paths. (1) |
| SAFE_DIRS | If you need to un-block a directory under one of the RESTRICTED_DIRS. A space-separated list of paths. |
| NO_DECOMPRESS | Do not try to untar or decompress the file to be registered. Useful for specialized Transfer Managers. |
| LIMIT_TRANSFER_BW | Specify the maximum transfer rate in bytes/second when downloading images from an http/https URL. Suffixes K, M or G can be used. |
(1) This will prevent users from registering important files as VM images and accessing them through their VMs. OpenNebula will automatically add its configuration directories: /var/lib/one, /etc/one and oneadmin's home. If users try to register an image from a restricted directory, they will get the following error message: "Not allowed to copy image file".
For example, the following illustrates the creation of a ZFS datastore using a configuration file. In this case we will use the host localhost as the ZFS-enabled host.

```
> cat ds.conf
NAME = "zfs"
DS_MAD = zfs
TM_MAD = zfs
DISK_TYPE = block
DATASET_NAME = rpool/ONE/images

> onedatastore create ds.conf
ID: 100

> onedatastore list
  ID NAME     CLUSTER  IMAGES TYPE TM
   0 system   none     0      fs   shared
   1 default  none     3      fs   shared
 100 zfs      none     0      zfs  shared
```
The DS and TM MAD can be changed later using the onedatastore update command. You can check more details of the datastore by issuing the onedatastore show command.
Note that datastores are not associated to any cluster by default, and they are supposed to be accessible by every single host. If you need to configure datastores for just a subset of the hosts, take a look at the Cluster guide.
Configuring DS_MAD and TM_MAD
These values must be added to /etc/one/oned.conf.
First we add zfs as a TM_MAD option. Replace:

```
TM_MAD = [ executable = "one_tm", arguments = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,vmfs,ceph" ]
```

with:

```
TM_MAD = [ executable = "one_tm", arguments = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,vmfs,ceph,zfs" ]
```
After that, create a new TM_MAD_CONF section:

```
TM_MAD_CONF = [ name = "zfs", ln_target = "NONE", clone_target = "SELF", shared = "yes" ]
```
Now we add zfs as a new DATASTORE_MAD option. Replace:

```
DATASTORE_MAD = [ executable = "one_datastore", arguments = "-t 15 -d dummy,fs,vmfs,lvm,ceph" ]
```

with:

```
DATASTORE_MAD = [ executable = "one_datastore", arguments = "-t 15 -d dummy,fs,vmfs,lvm,ceph,zfs" ]
```
For OpenNebula 5.0 we also need to create a new DS_MAD_CONF section:

```
DS_MAD_CONF = [ NAME = "zfs", REQUIRED_ATTRS = "DISK_TYPE", PERSISTENT_ONLY = "YES" ]
```
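Once oned.conf has been edited, the OpenNebula daemon must be restarted so the new driver sections are loaded. The exact command depends on your installation; the systemd unit name below is an assumption:

```
# as root, on the front-end (unit name may vary by distribution)
systemctl restart opennebula
```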
The ZFS transfer driver will create volumes with zfs. Once the zvol is available, the driver will link it into the VM directory in the system datastore.
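As an illustration of those two steps, the driver's work boils down to commands like the following sketch (the dataset, volume size and VM paths are hypothetical):

```
# 1. create the ZVOL backing the image (illustrative names)
sudo zfs create -V 10G rpool/ONE/images/one-42

# 2. link the resulting block device into the VM directory in the system datastore
ln -s /dev/zvol/rpool/ONE/images/one-42 /var/lib/one/datastores/0/7/disk.0
```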
The host must have ZFS installed and must contain the dataset specified in the DATASET_NAME attribute of the datastore template.
It is also required to have password-less sudo permission for the zfs command.
Tuning & Extending
System administrators and integrators are encouraged to modify these drivers in order to integrate them with their datacenter:
datastore/zfs/zfs.conf: Default values for ZFS parameters
- ZFS_CMD: Path to the zfs binary
- BRIDGE_LIST: The zfs server host
- DATASET_NAME: Default dataset
- STAGING_DIR: Staging directory
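A zfs.conf holding these defaults might look like the following sketch (all values are assumptions; adjust them to your pool layout):

```
# datastore/zfs/zfs.conf -- illustrative defaults, not the shipped values
ZFS_CMD="sudo zfs"
BRIDGE_LIST="localhost"
DATASET_NAME="rpool/ONE/images"
STAGING_DIR="/var/tmp"
```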
datastore/zfs/cp: Registers a new image. Creates a new ZFS volume.
datastore/zfs/mkfs: Makes a new empty image. Creates a new ZFS volume.
datastore/zfs/rm: Removes the ZFS volume.
tm/zfs/ln: Links to the ZFS volume.
tm/zfs/clone: Clones the image by creating a snapshot.
tm/zfs/mvds: Saves the image in a new ZFS volume for SAVE_AS.
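For instance, the snapshot-based cloning done by tm/zfs/clone could come down to commands like these (the dataset name and VM id are hypothetical):

```
# snapshot the source image, then clone the snapshot for the new VM disk
sudo zfs snapshot rpool/ONE/images/one-42@vm-7
sudo zfs clone rpool/ONE/images/one-42@vm-7 rpool/ONE/images/one-42-vm-7
```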
- Due to this issue, use larger values of
- You may also turn on the writeback cache or set