diff --git a/content/product/cluster_configuration/lvm/filemode.md b/content/product/cluster_configuration/lvm/filemode.md
index aab38ad4d..2b56f9998 100644
--- a/content/product/cluster_configuration/lvm/filemode.md
+++ b/content/product/cluster_configuration/lvm/filemode.md
@@ -35,8 +35,8 @@ The LVM Datastore does not need CLVM configured in your cluster. The drivers ref
 {{< /alert >}}
 
 In case of rebooting the virtualization Host, the volumes need to be activated to have them available for the hypervisor again. There are two possibilities:
-* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated.
-* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/fs_lvm_ssh/activate`, located in the remote scripts.
+* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated by the `/etc/cron.d/opennebula-node` cron file.
+* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -K -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/fs_lvm_ssh/activate`, located in the remote scripts.
 
 Virtual Machine disks are symbolic links to the block devices. However, additional VM files like checkpoints or deployment files are stored under `/var/lib/one/datastores/`. Be sure that enough local space is present.
diff --git a/content/product/cluster_configuration/lvm/generic_guide.md b/content/product/cluster_configuration/lvm/generic_guide.md
index 3054a78ae..3cf88e6fc 100644
--- a/content/product/cluster_configuration/lvm/generic_guide.md
+++ b/content/product/cluster_configuration/lvm/generic_guide.md
@@ -128,3 +128,20 @@ case, just add the device path to the whitelist and check again:
 # echo /dev/mapper/mpatha >> /etc/lvm/devices/system.devices
 # pvs
 ```
+
+### Pool becomes full after live datastore migration
+
+**Problem:** After a live datastore migration between two `fs_lvm_ssh` datastores with `LVM_THIN_ENABLE=yes`, the disk now shows as full:
+
+```
+# lvs
+  LV             VG       Attr       LSize   Pool           Origin Data%  Meta%
+  lv-one-11-0    vg-one-0 Vwi-aotz-k 256.00m lv-one-11-pool        100.00
+  lv-one-11-pool vg-one-0 twi---tz-k 256.00m                       100.00 12.60
+```
+
+**Impact:** Everything should work as expected. The 100% in the `Data%` column only signifies that the thin volume is using the full space allocated by the pool. The filesystem inside the volume is not affected and has the same free space available as before. However, if that figure is used for monitoring, it should be taken into account to avoid false positives.
+
+**Explanation:** The libvirt operation that implements live migration between datastores ([blockcopy](https://www.libvirt.org/manpages/virsh.html#blockcopy)) copies all blocks of the source LV, including the empty ones. This is because the copy is made between raw block devices that don't expose information about empty blocks; LVM returns zero-filled blocks when reading from an unallocated part of the volume. For some operations like [migrate](https://www.libvirt.org/manpages/virsh.html#migrate), libvirt is able to detect blocks consisting only of zeroes and omit them by using the `--migrate-disks-detect-zeroes` option. Currently that option is not available for the `blockcopy` command.
+
+**Mitigation:** If for any reason this situation is unacceptable, an offline datastore migration should fix the issue (until libvirt supports the `--migrate-disks-detect-zeroes` option for `blockcopy`, as detailed above). A different mechanism is used in that case (`dd` with `conv=sparse`), which omits empty blocks from the copy.
diff --git a/content/product/cluster_configuration/lvm/lvm.md b/content/product/cluster_configuration/lvm/lvm.md
index d943e49b1..1b9c55b91 100644
--- a/content/product/cluster_configuration/lvm/lvm.md
+++ b/content/product/cluster_configuration/lvm/lvm.md
@@ -38,8 +38,8 @@ The LVM Datastore does not need CLVM configured in your cluster. The drivers ref
 {{< /alert >}}
 
 In case of rebooting the virtualization Host, the volumes need to be activated to have them available for the hypervisor again. There are two possibilities:
-* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated.
-* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/lvm/activate`, located in the remote scripts.
+* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated by the `/etc/cron.d/opennebula-node` cron file.
+* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -K -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/lvm/activate`, located in the remote scripts.
 
 Virtual Machine disks are symbolic links to the block devices. However, additional VM files like checkpoints or deployment files are stored under `/var/lib/one/datastores/`. To prevent filling local disks, allocate plenty of space for these files.
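
For reference, manual activation after a Host reboot can be scripted along the lines of the following sketch. It only relies on the `lvchange -K -ay $DEVICE` invocation documented above; the volume group name `vg-one-0` and the VM ID `11` are illustrative placeholders, not values mandated by the drivers.

```
# Re-activate the LVs backing the disks of VM 11 (and their thin pool)
# after a Host reboot. The -K (--ignoreactivationskip) flag is needed
# because the volumes carry the activation-skip flag (the trailing "k"
# in the lvs Attr column).
for dev in /dev/vg-one-0/lv-one-11-*; do
    lvchange -K -ay "$dev"
done
```

Similarly, a minimal sketch of the sparse-copy mechanism mentioned in the mitigation: `dd` with `conv=sparse` seeks over all-zero input blocks instead of writing them, so extents that are unallocated in the source thin LV stay unallocated in a freshly created destination LV. The device paths below are again only examples.

```
# Copy a thin LV without allocating its zero-filled (empty) blocks
# on the destination.
dd if=/dev/vg-one-0/lv-one-11-0 of=/dev/vg-one-1/lv-one-11-0 bs=64k conv=sparse
```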