4 changes: 2 additions & 2 deletions content/product/cluster_configuration/lvm/filemode.md
@@ -35,8 +35,8 @@ The LVM Datastore does not need CLVM configured in your cluster. The drivers ref
{{< /alert >}}

In case of rebooting the virtualization Host, the volumes need to be activated to have them available for the hypervisor again. There are two possibilities:
-* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated.
-* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/fs_lvm_ssh/activate`, located in the remote scripts.
+* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated by the `/etc/cron.d/opennebula-node` cron file.
+* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -K -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/fs_lvm_ssh/activate`, located in the remote scripts.
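The manual activation step above can be sketched as a loop over the driver's device names (a sketch, not the shipped `activate` script; the `vg-one-*`/`lv-one-*` names follow the fs_lvm naming convention and the IDs are illustrative):

```shell
# Sketch: re-activate OpenNebula LVs after a Host reboot.
# -K (--ignoreactivationskip) activates thin volumes even when their
# "activation skip" flag is set (the trailing "k" in the lvs Attr field).
for DEVICE in /dev/vg-one-*/lv-one-*; do
    [ -e "$DEVICE" ] || continue    # skip if the glob matched nothing
    lvchange -K -ay "$DEVICE"
done
```

On a Host with no matching volume groups the loop is a no-op, so the sketch is safe to run unconditionally.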

Virtual Machine disks are symbolic links to the block devices. However, additional VM files like checkpoints or deployment files are stored under `/var/lib/one/datastores/<id>`. Be sure that enough local space is present.

17 changes: 17 additions & 0 deletions content/product/cluster_configuration/lvm/generic_guide.md
@@ -128,3 +128,20 @@ case, just add the device path to the whitelist and check again:
# echo /dev/mapper/mpatha >> /etc/lvm/devices/system.devices
# pvs
```

### Pool becomes full after live datastore migration

**Problem:** After a live datastore migration between two `fs_lvm_ssh` datastores with `LVM_THIN_ENABLE=yes`, the disk shows as full:

```
# lvs
LV VG Attr LSize Pool Origin Data% Meta%
lv-one-11-0 vg-one-0 Vwi-aotz-k 256.00m lv-one-11-pool 100.00
lv-one-11-pool vg-one-0 twi---tz-k 256.00m 100.00 12.60
```

**Impact:** None in practice; everything continues to work as expected. The 100% in the `Data%` column only means that the thin volume has allocated the full space provided by the pool. The filesystem is not affected and keeps the same free space as before. However, if that figure is used for monitoring, take this into account to avoid false positives.

**Explanation:** The libvirt operation that implements live migration between datastores ([blockcopy](https://www.libvirt.org/manpages/virsh.html#blockcopy)) copies all blocks in the source LV, including the empty ones. The copy is made between raw block devices, which expose no information about empty blocks; LVM simply returns zero-filled blocks when reading from an unallocated part of the volume. For some operations, such as [migrate](https://www.libvirt.org/manpages/virsh.html#migrate), libvirt can detect blocks consisting only of zeroes and omit them via the `--migrate-disks-detect-zeroes` option. That option is currently not available for the `blockcopy` command.
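The zero-filled reads can be reproduced with any sparse file, whose holes behave like unallocated thin-LV extents (the file path is illustrative):

```shell
# A hole in a sparse file, like an unallocated thin-LV extent, reads back
# as zeroes: the 1 MiB read below contains no non-zero byte at all.
tmp=$(mktemp -d)
truncate -s 1M "$tmp/hole.img"            # 1 MiB file, no data blocks allocated
dd if="$tmp/hole.img" bs=1M count=1 2>/dev/null | tr -d '\0' | wc -c   # prints 0
rm -r "$tmp"
```

This is why `blockcopy` cannot tell an empty extent from a block that genuinely contains zeroes: both read back identically.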

**Mitigation:** If for any reason this situation is unacceptable, an offline datastore migration fixes the issue (until libvirt supports the `--migrate-disks-detect-zeroes` option detailed above). The offline path uses a different mechanism (`dd` with `conv=sparse`), which omits empty blocks from the copy.
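The `conv=sparse` behavior can be demonstrated with plain files (paths are illustrative; the drivers operate on the LV block devices):

```shell
# Copying a mostly-zero image with and without conv=sparse. With conv=sparse,
# dd seeks over all-zero output blocks instead of writing them, so the
# destination allocates far less space on disk.
tmp=$(mktemp -d)
truncate -s 16M "$tmp/src.img"                               # mostly zeroes
printf 'payload' | dd of="$tmp/src.img" bs=1M seek=1 conv=notrunc 2>/dev/null
dd if="$tmp/src.img" of="$tmp/dense.img" bs=1M 2>/dev/null               # writes every block
dd if="$tmp/src.img" of="$tmp/sparse.img" bs=1M conv=sparse 2>/dev/null  # skips zero blocks
du -k "$tmp/dense.img" "$tmp/sparse.img"   # sparse.img occupies far fewer KiB
rm -r "$tmp"
```

The same effect keeps the destination thin pool from filling during an offline migration, since unwritten blocks are never allocated.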
4 changes: 2 additions & 2 deletions content/product/cluster_configuration/lvm/lvm.md
@@ -38,8 +38,8 @@ The LVM Datastore does not need CLVM configured in your cluster. The drivers ref
{{< /alert >}}

In case of rebooting the virtualization Host, the volumes need to be activated to have them available for the hypervisor again. There are two possibilities:
-* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated.
-* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/lvm/activate`, located in the remote scripts.
+* If the [node package]({{% relref "kvm_node_installation#kvm-node" %}}) is installed, they will be automatically activated by the `/etc/cron.d/opennebula-node` cron file.
+* Otherwise, manual activation will be required. For each volume device of the Virtual Machines running on the Host before the reboot, run `lvchange -K -ay $DEVICE`. You can also run on the Host the activation script `/var/tmp/one/tm/lvm/activate`, located in the remote scripts.

Virtual Machine disks are symbolic links to the block devices. However, additional VM files like checkpoints or deployment files are stored under `/var/lib/one/datastores/<id>`. To prevent filling local disks, allocate plenty of space for these files.
