Test case: zpool_expand_* #2437
Comments
This is expected, although it is a bug / missing feature. It's my understanding that the way this works on Illumos is that an autoexpand event is handled by their FMA infrastructure. In turn FMA calls
@behlendorf why is this tagged 'Test Suite'? It was FOUND by the test suite, but it's not a problem with the test suite; it's a problem with ZFS (the ZED).
Apologies if this isn't quite on topic, but I'm having a problem with a manual expansion after a LUN resize, not an automatic one. Environment: RHEL 6.5, ZFS 0.6.5.3-1, VMware Workstation 9.0.4. I have added a 1GB disk, /dev/sdb, and created a pool with zpool create tank sdb. I have tried enabling autoexpand on tank, rebooting, zpool online -e tank sdb, and exporting/importing the pool, but it will not grow. Partition view:
I am unsure whether the two partitions created are standard. Is the small second partition, partition 9, blocking the first from being able to expand? The extra disk in VMware was turned into a zpool without doing anything else to it first; I did not create any partitions beforehand, just gave it the whole disk.
I did some further testing; in this case I created an LVM physical volume, volume group, and logical volume. If I use that logical volume for ZFS, I can extend both the LVM logical volume and the ZFS pool/filesystem quite happily. I suspect I'm running into an unsupported case by trying to extend ZFS when a raw disk device was used during pool creation.
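A minimal sketch of the LVM-backed sequence described above; the volume group, logical volume, and pool names are assumptions, not taken from the report:

```sh
# Assumed names: volume group "vg0", logical volume "zfslv", pool "tank".
lvextend -L +1G /dev/vg0/zfslv        # grow the LV backing the vdev
zpool online -e tank /dev/vg0/zfslv   # let ZFS claim the new space
zpool list tank                       # SIZE should now reflect the growth
```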
We experience the same issue as @andyeff (with Xen instead of VMware).
@odoucet did you try offlining and then re-onlining -e?
Cannot be done on this specific zpool because it is made of only one device, and zfs refused to offline the device in this case ("no valid replica").
I've hit the same issue as @andyeff but on Ubuntu 16.04 LTS with its standard ZFS implementation. Deleting this 8MiB partition doesn't sound like a good solution, though; there must be a reason why it is there. I haven't experienced any problems yet, but still... Any thoughts, comments or recommendations as to the correct procedure to expand a disk?
In ZoL, this isn't done automatically (yet). You need to
If I understand everything correctly, then this should be a proper guide:
But this is not true... My tests show that I need to do this:
@ilovezfs in which version of ZoL did whole_disk appear? On my host I have: ii zfsutils-linux 0.6.5.6-0ubuntu9 amd64 Native OpenZFS management utilities for Linux
whole_disk=0/1 is an upstream thing that's older than the hills. Yeah, anything that's a partition OR not "real" is supposed to be whole_disk=0, so that is correct behavior for dev mapper.
@ilovezfs hmmm... How is whole_disk enabled? sudo zpool create -o whole_disk=1 pool02 /dev/sdc
It's handled under the hood automatically. If you have a physical disk and you use "sdc", not "sdc#{number}", then it will use whole_disk=1 (i.e., it will auto-partition, putting the pool on partition 1 and reserving some space in a partition 9).
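A quick way to see that behaviour, assuming a hypothetical spare disk at /dev/sdc; exactly where whole_disk shows up in the zdb output is an assumption here:

```sh
zpool create pool02 sdc          # whole-disk vdev: no partition suffix given
lsblk /dev/sdc                   # shows sdc1 (pool data) plus a small sdc9
zdb -C pool02 | grep whole_disk  # the flag lives in the vdev config, not in 'zpool get'
```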
@ilovezfs Is there a standard method for doing this manually, e.g. if using SAN-provided disks as /dev/mapper devices where whole_disk won't be 1? What's involved in the partition rewriting?
For whole_disk=0 devices, you resize the faux-device or partition yourself and then run
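A rough sketch of that manual sequence for a whole_disk=0 vdev; the multipath map, device, and pool names are assumptions, and the resize commands depend on the environment:

```sh
# Example: a multipath LUN backing a whole_disk=0 vdev in pool "tank".
# 1. Grow the LUN on the array, then make Linux see the new size:
echo 1 > /sys/block/sdb/device/rescan   # repeat per underlying SCSI path
multipathd -k"resize map mpatha"        # refresh the multipath map size
# 2. If the vdev sits on a partition, grow that partition first (parted/sgdisk).
# 3. Ask ZFS to expand the vdev onto the new space:
zpool online -e tank /dev/mapper/mpatha
```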
@andyeff I'm not sure what you mean by SAN-provided disks, but I can access my iSCSI targets (when logged in to them/it) as a
@FransUrbo ah, I don't believe I can get them to be represented that way in my environment. We're using a pair of fibre-channel paths to access the disks, so multipathd presents the disks as /dev/mapper/mpath? items
Ah, ok. Fair enough. I haven't used FC in 10+ years, so I have no idea how it works now :).
@ilovezfs If I understand correctly, this should have worked? I would have expected partition 9 to be moved and partition 1 to be extended... so what am I doing wrong?
I then tried removing partition 9 before running zpool online, and then it is actually expanded:
So the partition is in fact resized, but what is partition 9 used for? Is it critical for ZFS to function, or is it something that is "just" added?
It's not actually used for anything; it's a "buffer" partition. Meaning, you don't have to add a new disk with exactly the same number of blocks, etc.
@ThomasRasmussen No, that procedure is not expected to work: you're changing the virtual disk size, but the original partition table would still be there.
@ilovezfs so it is not possible to resize a pool when the virtual disk is expanded? On Solaris (and I believe also on FreeBSD) this in fact works without problems... so I would have thought it was true for Linux as well :-(
@behlendorf @ilovezfs @ThomasRasmussen We had a similar problem expanding existing vdevs. We are running as a VM solution, so the approach we are considering to get around all the expansion hassle is as follows:
Initial test results look promising. I'm looking for some comments from the experts, as I am not sure, first of all, why both max_asize and asize exist if they are assigned the same value and, secondly, what the purpose of the expandsize zpool property is. I don't think it is being used.
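For reference, expandsize (shown as EXPANDSZ in zpool list) is a read-only pool property reporting space a vdev could grow into but has not yet claimed; a quick way to watch it, with the pool and device names assumed:

```sh
zpool list -o name,size,expandsize tank
zpool get expandsize tank
# After claiming the space the value should return to '-' and SIZE should grow:
zpool online -e tank sdb
zpool list -o name,size,expandsize tank
```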
Enable additional test cases; in most cases this required a few minor modifications to the test scripts. In a few cases a real bug was uncovered and fixed, and in a handful of cases where pools are layered on pools the test case will be skipped until this is supported. Details below for each test case.

* zpool_add_004_pos - Skip test on Linux until adding zvols to pools is fully supported and deadlock free.
* zpool_add_005_pos.ksh - Skip the dumpadm portion of the test, which isn't relevant for Linux. The find_vfstab_dev, find_mnttab_dev, and save_dump_dev functions were updated accordingly for Linux. Add O_EXCL to the in-use check to prevent the -f (force) option from working for mounted filesystems and improve the resulting error.
* zpool_add_006_pos - Update test case such that it doesn't depend on nested pools. Switch from mkfile to truncate to reduce space requirements and speed up the test case.
* zpool_clear_001_pos - Speed up test case by filling the filesystem to 25% capacity.
* zpool_create_002_pos, zpool_create_004_pos - Use sparse files for file vdevs in order to avoid increasing the partition size.
* zpool_create_006_pos - 6ba1ce9 allows raidz+mirror configs with similar redundancy. Updated the valid_args and forced_args cases.
* zpool_create_008_pos - Disable the overlapping partition portion.
* zpool_create_011_neg - Fix to correctly create the extra partition. Modified zpool_vdev.c to use the fstat64_blk() wrapper which includes the st_size even for block devices.
* zpool_create_012_neg - Updated to properly find swap devices.
* zpool_create_014_neg, zpool_create_015_neg - Updated to use the swap_setup() and swap_cleanup() wrappers which do the right thing on Linux and Illumos. Removed the '-n' option which succeeds under Linux due to differences in the in-use checks.
* zpool_create_016_pos.ksh - Skipped; the test case isn't useful.
* zpool_create_020_pos - Added missing / to the cleanup() function. Remove the cache file prior to the test to ensure a clean environment and avoid false positives.
* zpool_destroy_001_pos - Removed the test case which creates a pool on a zvol. This is more likely to deadlock under Linux and has never been completely supported on any platform.
* zpool_destroy_002_pos - 'zpool destroy -f' is unsupported on Linux. Mount points must not be busy in order to unmount them.
* zfs_destroy_001_pos - Handle the EBUSY error which can occur with volumes when racing with udev.
* zpool_expand_001_pos, zpool_expand_003_neg - Skip test on Linux until adding zvols to pools is fully supported and deadlock free. The tests could be modified to use loop-back devices, but it would be preferable to use the test cases as is for improved coverage.
* zpool_export_004_pos - Updated test case such that it doesn't depend on nested pools. Normal file vdevs under /var/tmp are fine.
* zpool_import_all_001_pos - Updated to skip partition 1, which is known as slice 2 on Illumos. This prevents overwriting the default TESTPOOL, which was causing the failure.
* zpool_import_002_pos, zpool_import_012_pos - No changes needed.
* zpool_remove_003_pos - No changes needed.
* zpool_upgrade_002_pos, zpool_upgrade_004_pos - Root cause addressed by upstream OpenZFS commit 3b7f360.
* zpool_upgrade_007_pos - Disabled in the test case due to a known failure. Opened issue #6112.
* zvol_misc_002_pos - Updated to use ext2.
* zvol_misc_001_neg, zvol_misc_003_neg, zvol_misc_004_pos, zvol_misc_005_neg, zvol_misc_006_pos - Moved to the skip list; these test cases could be updated to use Linux's crash dump facility.
* zvol_swap_* - Updated to use the swap_setup/swap_cleanup helpers. File creation switched from /tmp to /var/tmp.

Enabled minimal useful tests for Linux, skip test cases which aren't applicable.

Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: loli10K <ezomori.nozomu@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #3484 Issue #5634 Issue #2437 Issue #5202 Issue #4034 Closes #6095
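As a small illustration of the sparse-file-vdev and truncate-instead-of-mkfile changes mentioned above (paths, sizes, and the pool name are arbitrary):

```sh
# Sparse backing files: 'truncate' allocates no blocks up front, unlike mkfile.
truncate -s 1G /var/tmp/vdev1 /var/tmp/vdev2
zpool create testpool mirror /var/tmp/vdev1 /var/tmp/vdev2
zpool list testpool
zpool destroy testpool
rm -f /var/tmp/vdev1 /var/tmp/vdev2
```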
While the autoexpand property may seem like a small feature, it depends on a significant amount of system infrastructure. Enough of that infrastructure is now in place that, with a few customizations for Linux, the autoexpand property can be supported for whole-disk configurations.

Autoexpand works as follows: when a block device is resized, a change event is generated by udev with the DISK_MEDIA_CHANGE key. The ZED, which is monitoring udev events, detects the event for disks (but not partitions) and hands it off to zfs_deliver_dle(). The zfs_deliver_dle() function appends the expected whole-disk partition suffix, and if the partition can be matched against a known pool vdev it re-opens it. Re-opening the vdev will trigger a re-read of the partition table so the maximum possible expansion size can be reported. Next, if the autoexpand property is set to "on", a vdev expansion will be attempted. After performing some sanity checks on the disk to verify it's safe to expand, the ZFS partition (-part1) will be expanded and the partition table updated. The partition is then re-opened again to detect the updated size, which allows the new capacity to be used.

Added PHYS_PATH="/dev/zvol/dataset" to the vdev configuration for ZFS volumes. This was required for the test cases which test expansion by layering a new pool on top of ZFS volumes.

Enabled the zpool_expand_001_pos and zpool_expand_003_pos test cases which exercise the autoexpand property.

Fixed zfs_zevent_wait() signal handling which could result in the ZED spinning when a signal was not handled.

Removed the vdev_disk_rrpart() functionality, which can be abandoned in favour of re-opening the device; this triggers a re-read of the partition table as long as no other partitions are in use, which will be true as long as we're working with whole disks. As a bonus this allows us to remove two Linux kernel API checks.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#120 Issue openzfs#2437 Issue openzfs#5771 Issue openzfs#7582
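To observe the kind of udev change event this flow keys off, one can watch udev while resizing a disk; the device and pool names below are assumptions, and the exact event properties vary by udev version:

```sh
# Terminal 1: watch the udev change events the ZED listens for.
udevadm monitor --udev --property

# Terminal 2: make sure the pool will act on the event, then grow the
# virtual disk/LUN and force a capacity re-read (SCSI example):
zpool set autoexpand=on tank
echo 1 > /sys/block/sdb/device/rescan

# A "change" event for /dev/sdb should appear in terminal 1; the ZED picks it
# up and expands the whole-disk vdev's -part1 partition.
```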
While the autoexpand property may seem like a small feature, it depends on a significant amount of system infrastructure. Enough of that infrastructure is now in place that, with a few modifications for Linux, it can be supported.

Auto-expand works as follows: when a block device is modified (re-sized, closed after being open r/w, etc.) a change uevent is generated for udev. The ZED, which is monitoring udev events, passes the change event along to zfs_deliver_dle() if the disk or partition contains a zfs_member as identified by blkid. From here the device is matched against all imported pool vdevs using the vdev_guid which was read from the label by blkid. If a match is found the ZED reopens the pool vdev. This re-opening is important because it allows the vdev to be briefly closed so the disk partition table can be re-read; otherwise, it wouldn't be possible to report the maximum possible expansion size. Finally, if the property autoexpand=on, a vdev expansion will be attempted. After performing some sanity checks on the disk to verify that it is safe to expand, the primary partition (-part1) will be expanded and the partition table updated. The partition is then re-opened (again) to detect the updated size, which allows the new capacity to be used.

In order to make all of the above possible the following changes were required:

* Updated the zpool_expand_001_pos and zpool_expand_003_pos tests. These tests now create a pool which is layered on a loopback, scsi_debug, and file vdev. This allows for testing of a non-partitioned block device (loopback), a partitioned block device (scsi_debug), and a file, which does not receive udev change events. This provides better test coverage, and by removing the layering on ZFS volumes the issues surrounding layering one pool on another are avoided.
* zpool_find_vdev_by_physpath() updated to accept a vdev guid. This allows for matching by guid rather than path, which is a more reliable way for the ZED to reference a vdev.
* Fixed zfs_zevent_wait() signal handling which could result in the ZED spinning when a signal was not handled.
* Removed the vdev_disk_rrpart() functionality, which can be abandoned in favor of the kernel-provided blkdev_reread_part() function.
* Added a rwlock which is held as a writer while a disk is being reopened. This is important to prevent errors from occurring for any configuration-related IOs which bypass the SCL_ZIO lock. The zpool_reopen_007_pos.ksh test case was added to verify IO errors are never observed when reopening. This is not expected to impact IO performance.

Additional fixes which aren't critical but were discovered and resolved in the course of developing this functionality:

* Added PHYS_PATH="/dev/zvol/dataset" to the vdev configuration for ZFS volumes. This is as good as a unique physical path; while the volumes are no longer used in the test cases for other reasons, this improvement was included.

Signed-off-by: Sara Hartse <sara.hartse@delphix.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#120 Issue openzfs#2437 Issue openzfs#5771 Issue openzfs#7366 Issue openzfs#7582
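A rough shell condensation of the loopback variant these tests exercise; the file path, loop device, pool name, and sizes are assumptions:

```sh
# Sparse file -> loop device -> pool with autoexpand=on.
truncate -s 1G /var/tmp/expand.img
losetup /dev/loop9 /var/tmp/expand.img
zpool create -o autoexpand=on looppool /dev/loop9

# Grow the backing file and tell the loop driver about the new size; this
# emits the change uevent the ZED reacts to.
truncate -s 2G /var/tmp/expand.img
losetup -c /dev/loop9    # same as --set-capacity

# With the ZED running the pool should grow by itself; otherwise force it:
zpool online -e looppool /dev/loop9
zpool list looppool
```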
Way to reproduce:
Results in: