Object deletion does not work properly on NVMe drives when using by-id dev alias #50

Open
mdziekon opened this Issue Sep 23, 2018

mdziekon commented Sep 23, 2018

Tested using:

  • openmediavault: 4.1.11
  • openmediavault-zfs: 4.0.4

It appears that the OMV-ZFS plugin does not properly support NVMe drives when deleting objects from pools created on these drives with "by-id" dev aliasing. Analysing how OMVModuleZFSZpool::getDevDisks works, anything that uses this method (currently, only object deletion) won't work properly with these drives, because of a wrong assumption: that drives exist only as /dev/sdXY devices, where X is the drive's letter and Y is its partition number.

Consider the following scenario:

  1. Create a zpool using an NVMe drive (a basic pool is fine for this demonstration) and "by-id" aliasing
  2. The pool is created properly, no problems here
  3. Try to delete the pool
  4. When deleting, an error occurs with this message:

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; zpool labelclear /dev/nvme0n11 2>&1' with exit code '1': failed to open /dev/nvme0n11: No such file or directory

As you can see above, the plugin is trying to clear the label on the /dev/nvme0n11 device, which does not exist, because that is not how Linux names NVMe drive entries; so far I've seen partitions show up as something like /dev/nvme0n1p1. The extraneous 1 at the end is appended by OMVModuleZFSZpool::getDevDisks, which appears to do that when dealing with either "by-id" or "by-path" aliases.
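
For reference, the two naming schemes differ as follows (example device names, not taken from the affected machine):

    /dev/sda      -> partition 1 is /dev/sda1        (SATA/SAS: <disk><partition>)
    /dev/nvme0n1  -> partition 1 is /dev/nvme0n1p1   (NVMe: <disk>p<partition>)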

To be fair, it appears that this is not entirely the plugin's fault, as I've been able to find a similar problem in the ZFS on Linux repo: zfsonlinux/zfs#3478 - it looks like pool creation does not properly partition CCISS (and, as reported at the end of that thread, NVMe) drives.

However, even if that issue eventually gets fixed, the plugin's ::getDevDisks method is still wrong - it should use the nvme<disk_no>p<partition_number> scheme for NVMe drives, not the sd<disk_letter><partition_number> scheme used for regular SATA drives, as sketched below.
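
A minimal sketch of how the partition suffix could be derived from the whole-disk device name (the helper name and its integration into ::getDevDisks are assumptions for illustration only, not the plugin's actual code):

    <?php
    // Hypothetical helper: build the partition device path for a whole-disk device.
    // Assumes partition 1, mirroring what ::getDevDisks currently hardcodes.
    function buildPartitionPath(string $disk, int $partition = 1): string {
        // Devices whose base name ends in a digit (e.g. /dev/nvme0n1, /dev/mmcblk0)
        // need a "p" separator before the partition number.
        if (preg_match('/\d$/', $disk)) {
            return sprintf('%sp%d', $disk, $partition);
        }
        // Regular SATA/SAS disks (/dev/sda) just get the number appended.
        return sprintf('%s%d', $disk, $partition);
    }

    // buildPartitionPath('/dev/sda')     => '/dev/sda1'
    // buildPartitionPath('/dev/nvme0n1') => '/dev/nvme0n1p1'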

This problem (in conjunction with the very limited "by-id" regexp capabilities) will also occur when using zpools on encrypted drives (e.g. using LUKS).
