
zpool expand destroys whole pool if old zpool signature exists #16144

Open
ggzengel opened this issue Apr 29, 2024 · 9 comments
Labels
Type: Defect Incorrect behavior (e.g. crash, hang)

@ggzengel
Contributor

System information

# lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 12 (bookworm)
Release:	12
Codename:	bookworm

# dpkg-query -l "*zfs*" | grep ii
ii  libzfs4linux                      2.2.3-1~bpo12+1 amd64        OpenZFS filesystem library for Linux - general support
ii  python3-pyzfs                     2.2.3-1~bpo12+1 amd64        wrapper for libzfs_core C library
ii  zfs-dkms                          2.2.3-1~bpo12+1 all          OpenZFS filesystem kernel modules for Linux
ii  zfs-zed                           2.2.3-1~bpo12+1 amd64        OpenZFS Event Daemon
ii  zfsutils-linux                    2.2.3-1~bpo12+1 amd64        command-line tools to manage OpenZFS filesystems

# uname -a
Linux zfs1.hq1.zmt.info 6.6.13+bpo-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.6.13-1~bpo12+1 (2024-02-15) x86_64 GNU/Linux

# zfs version
zfs-2.2.3-1~bpo12+1
zfs-kmod-2.2.3-1~bpo12+1

Describe the problem you're observing

I use LVM for the VDEVs.
I reused an HDD that had previously held an old zpool with the same name; on that disk the old zpool lived on a partition.
I moved the VDEV with pvmove from a weak HDD onto this previously used HDD.

After expanding the LVs from 1 TiB to 3 TiB, the whole pool became unavailable.
I suspect the old zpool label was picked up either while expanding or when the used disk was inserted.
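
Roughly, the sequence per slot looked like this (a sketch only; slot 16 is shown, the weak-HDD device name is a placeholder, and whether the expansion was triggered with zpool online -e or via autoexpand is from memory):

# Bring the reused disk (old zpool still present on its partition) into the VG.
pvcreate /dev/disk/by-partlabel/LVM_SLOT16 --force
vgextend VG1 /dev/disk/by-partlabel/LVM_SLOT16

# Move the VDEV data off the weak HDD onto the reused disk.
pvmove /dev/<weak-hdd-partition> /dev/disk/by-partlabel/LVM_SLOT16

# Grow the LV from 1 TiB to 3 TiB, then let ZFS expand the VDEV.
lvextend -L 3TiB VG1/ZFS16
zpool online -e zpool1 /dev/mapper/VG1-ZFS16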

# zdb -l /dev/VG1/ZFS01
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'zpool1'
    state: 0
    txg: 677475
    pool_guid: 10245595246600322219
    errata: 0
    hostid: 1719867327
    hostname: 'zfs1.hq1.zmt.info'
    top_guid: 1078628190459762462
    guid: 15958182755501637013
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 1078628190459762462
        nparity: 3
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 17592110546944
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15958182755501637013
            path: '/dev/mapper/VG1-ZFS01'
            whole_disk: 0
            DTL: 2270
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 1050945417998083787
            path: '/dev/mapper/VG1-ZFS02'
            whole_disk: 0
            DTL: 2269
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 5839789882220114128
            path: '/dev/mapper/VG1-ZFS03'
            whole_disk: 0
            DTL: 2268
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 6390359041192991295
            path: '/dev/mapper/VG1-ZFS04'
            whole_disk: 0
            DTL: 2267
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 12059085455868003514
            path: '/dev/mapper/VG1-ZFS05'
            whole_disk: 0
            DTL: 2266
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 3995831796923460053
            path: '/dev/mapper/VG1-ZFS06'
            whole_disk: 0
            DTL: 2265
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 7856275499043659129
            path: '/dev/mapper/VG1-ZFS07'
            whole_disk: 0
            DTL: 2264
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 3050318963605718613
            path: '/dev/mapper/VG1-ZFS08'
            whole_disk: 0
            DTL: 2263
            create_txg: 4
        children[8]:
            type: 'disk'
            id: 8
            guid: 10029903618149417330
            path: '/dev/mapper/VG1-ZFS09'
            whole_disk: 0
            DTL: 2262
            create_txg: 4
        children[9]:
            type: 'disk'
            id: 9
            guid: 7101734810324033498
            path: '/dev/mapper/VG1-ZFS10'
            whole_disk: 0
            DTL: 2261
            create_txg: 4
        children[10]:
            type: 'disk'
            id: 10
            guid: 3830632294722480229
            path: '/dev/mapper/VG1-ZFS11'
            whole_disk: 0
            DTL: 2260
            create_txg: 4
        children[11]:
            type: 'disk'
            id: 11
            guid: 6175555787272715126
            path: '/dev/mapper/VG1-ZFS12'
            whole_disk: 0
            DTL: 2259
            create_txg: 4
        children[12]:
            type: 'disk'
            id: 12
            guid: 1129704638296903490
            path: '/dev/mapper/VG1-ZFS13'
            whole_disk: 0
            DTL: 2258
            create_txg: 4
        children[13]:
            type: 'disk'
            id: 13
            guid: 5945036595013075363
            path: '/dev/mapper/VG1-ZFS14'
            whole_disk: 0
            DTL: 2257
            create_txg: 4
        children[14]:
            type: 'disk'
            id: 14
            guid: 5098336806023622657
            path: '/dev/mapper/VG1-ZFS15'
            whole_disk: 0
            DTL: 2256
            create_txg: 4
        children[15]:
            type: 'disk'
            id: 15
            guid: 16953092468303889791
            path: '/dev/mapper/VG1-ZFS16'
            whole_disk: 0
            DTL: 2255
            create_txg: 4
            expansion_time: 1713979980
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 
------------------------------------
LABEL 2 
------------------------------------
    version: 5000
    name: 'zpool1'
    state: 0
    txg: 16537931
    pool_guid: 15652696854719846007
    errata: 0
    hostid: 3337622759
    hostname: 'px2.rad-ffm.local'
    top_guid: 6312352239864435547
    guid: 17014789416555187233
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 6312352239864435547
        nparity: 3
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 26388241317888
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17014789416555187233
            path: '/dev/mapper/VG1-ZFS01'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qERn7AiwXBA403m2Nk2oH79EY3JQ88wm3'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/1'
            whole_disk: 0
            DTL: 953
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 14827704622120264462
            path: '/dev/mapper/VG1-ZFS02'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qsQy3WDOwpdMiWzwqgtjeRMB5RAWtDyaB'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/3'
            whole_disk: 0
            DTL: 952
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 15888366119334987690
            path: '/dev/mapper/VG1-ZFS03'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qcd4GE6h3JAoH2MNW6R7V8SVYl278Yqy9'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/5'
            whole_disk: 0
            DTL: 951
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 490686109886100976
            path: '/dev/mapper/VG1-ZFS04'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qzQBC53AKFw9Fm0BM7tEgzbHfFoTtryNP'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/7'
            whole_disk: 0
            DTL: 950
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 6044335431130424596
            path: '/dev/mapper/VG1-ZFS05'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qNqB93aO7R1MIJRJclIcjWjiYtF5YkyYD'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/0'
            whole_disk: 0
            DTL: 949
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 9473278246285965181
            path: '/dev/mapper/VG1-ZFS06'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qxe8FfnBT1YjnySwStMMhbNHoCkgE2emj'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/2'
            whole_disk: 0
            DTL: 948
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 4177742109969957892
            path: '/dev/mapper/VG1-ZFS07'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qCoqScwwRGKrLC9efhN0VmQmD9Kg4aSN3'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/4'
            whole_disk: 0
            DTL: 947
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 16929925576382100918
            path: '/dev/mapper/VG1-ZFS08'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qK18X9YDJgjPkotbH59STaYMzeZpP2MeT'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/6'
            whole_disk: 0
            DTL: 946
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3 

While initializing LVM on the used disk, pvcreate wiped some of the old signatures:

# pvcreate /dev/disk/by-partlabel/LVM_SLOT16 --force
  Wiping zfs_member signature on /dev/disk/by-partlabel/LVM_SLOT16.
  Wiping zfs_member signature on /dev/disk/by-partlabel/LVM_SLOT16.
  Wiping zfs_member signature on /dev/disk/by-partlabel/LVM_SLOT16.
  <snip: the same "Wiping zfs_member signature" line repeated 41 times in total>
  Physical volume "/dev/disk/by-partlabel/LVM_SLOT16" successfully created.

After expanding all the LVs/VDEVs, I got this:

# zpool status
  pool: zpool1
 state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: scrub repaired 0B in 00:51:56 with 0 errors on Wed Apr 24 19:41:00 2024
config:

	NAME               STATE     READ WRITE CKSUM
	zpool1             UNAVAIL      0     0     0  insufficient replicas
	  raidz3-0         UNAVAIL      0     0     0  insufficient replicas
	    VG1-ZFS01      FAULTED      0     0     0  corrupted data
	    VG1-ZFS02      FAULTED      0     0     0  corrupted data
	    VG1-ZFS03      FAULTED      0     0     0  corrupted data
	    VG1-ZFS04      FAULTED      0     0     0  corrupted data
	    VG1-ZFS05      ONLINE       0     0     0
	    VG1-ZFS06      ONLINE       0     0     0
	    VG1-ZFS07      ONLINE       0     0     0
	    VG1-ZFS08      ONLINE       0     0     0
	    VG1-ZFS09      ONLINE       0     0     0
	    VG1-ZFS10      ONLINE       0     0     0
	    VG1-ZFS11      ONLINE       0     0     0
	    VG1-ZFS12      ONLINE       0     0     0
	    VG1-ZFS13      ONLINE       0     0     0
	    VG1-ZFS14      ONLINE       0     0     0
	    VG1-ZFS15      ONLINE       0     0     0
	    VG1-ZFS16      ONLINE       0     0     0
	logs	
	  mirror-1         ONLINE       0     0     0
	    VG1-ZFS_ZIL01  ONLINE       0     0     0
	    VG1-ZFS_ZIL02  ONLINE       0     0     0
	cache
	  VG1-ZFS_CACHE01  ONLINE       0     0     0
	  VG1-ZFS_CACHE02  ONLINE       0     0     0

Neither zpool clear nor a reboot helped.

I renamed /dev/VG1/ZFS01-16 to ZFS_01-16 and tried an import:

# zpool import -d /dev/VG1/
   pool: zpool1
     id: 10245595246600322219
  state: UNAVAIL
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

	zpool1             UNAVAIL  insufficient replicas
	  raidz3-0         UNAVAIL  insufficient replicas
	    VG1/ZFS_01     FAULTED  corrupted data
	    VG1/ZFS_02     FAULTED  corrupted data
	    VG1/ZFS_03     FAULTED  corrupted data
	    VG1/ZFS_04     FAULTED  corrupted data
	    VG1/ZFS_05     FAULTED  corrupted data
	    VG1/ZFS_06     FAULTED  corrupted data
	    VG1/ZFS_07     FAULTED  corrupted data
	    VG1/ZFS_08     FAULTED  corrupted data
	    VG1/ZFS_09     FAULTED  corrupted data
	    VG1/ZFS_10     FAULTED  corrupted data
	    VG1/ZFS_11     FAULTED  corrupted data
	    VG1/ZFS_12     FAULTED  corrupted data
	    VG1/ZFS_13     FAULTED  corrupted data
	    VG1/ZFS_14     FAULTED  corrupted data
	    VG1/ZFS_15     FAULTED  corrupted data
	    VG1/ZFS_16     ONLINE
	logs	
	  mirror-1         ONLINE
	    VG1/ZFS_ZIL01  ONLINE
	    VG1/ZFS_ZIL02  ONLINE

# zpool import -fFXd /dev/VG1/
   pool: zpool1
     id: 10245595246600322219
  state: UNAVAIL
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

	zpool1             UNAVAIL  insufficient replicas
	  raidz3-0         UNAVAIL  insufficient replicas
	    VG1/ZFS_01     FAULTED  corrupted data
	    VG1/ZFS_02     FAULTED  corrupted data
	    VG1/ZFS_03     FAULTED  corrupted data
	    VG1/ZFS_04     FAULTED  corrupted data
	    VG1/ZFS_05     FAULTED  corrupted data
	    VG1/ZFS_06     FAULTED  corrupted data
	    VG1/ZFS_07     FAULTED  corrupted data
	    VG1/ZFS_08     FAULTED  corrupted data
	    VG1/ZFS_09     FAULTED  corrupted data
	    VG1/ZFS_10     FAULTED  corrupted data
	    VG1/ZFS_11     FAULTED  corrupted data
	    VG1/ZFS_12     FAULTED  corrupted data
	    VG1/ZFS_13     FAULTED  corrupted data
	    VG1/ZFS_14     FAULTED  corrupted data
	    VG1/ZFS_15     FAULTED  corrupted data
	    VG1/ZFS_16     ONLINE
	logs	
	  mirror-1         ONLINE
	    VG1/ZFS_ZIL01  ONLINE
	    VG1/ZFS_ZIL02  ONLINE
# pvs -o +lv_name
  PV             VG  Fmt  Attr PSize   PFree    LV               
  /dev/nvme0n1p3 VG1 lvm2 a--  372.14g  317.14g ZFS_ZIL01        
  /dev/nvme0n1p3 VG1 lvm2 a--  372.14g  317.14g ZFS_CACHE01      
  /dev/nvme0n1p3 VG1 lvm2 a--  372.14g  317.14g                  
  /dev/nvme1n1p3 VG1 lvm2 a--  372.14g  317.14g ZFS_ZIL02        
  /dev/nvme1n1p3 VG1 lvm2 a--  372.14g  317.14g ZFS_CACHE02      
  /dev/nvme1n1p3 VG1 lvm2 a--  372.14g  317.14g                  
  /dev/sda3      VG1 lvm2 a--   <3.64t <606.99g ZFS_02           
  /dev/sda3      VG1 lvm2 a--   <3.64t <606.99g                  
  /dev/sda3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rmeta_1] 
  /dev/sda3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rimage_1]
  /dev/sdb3      VG1 lvm2 a--   <3.64t <606.99g ZFS_03           
  /dev/sdb3      VG1 lvm2 a--   <3.64t <606.99g                  
  /dev/sdb3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rmeta_2] 
  /dev/sdb3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rimage_2]
  /dev/sdc3      VG1 lvm2 a--   <3.64t <606.99g ZFS_01           
  /dev/sdc3      VG1 lvm2 a--   <3.64t <606.99g                  
  /dev/sdc3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rmeta_0] 
  /dev/sdc3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rimage_0]
  /dev/sdd3      VG1 lvm2 a--   <3.64t <606.99g ZFS_04           
  /dev/sdd3      VG1 lvm2 a--   <3.64t <606.99g                  
  /dev/sdd3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rmeta_3] 
  /dev/sdd3      VG1 lvm2 a--   <3.64t <606.99g [SYSTEM_rimage_3]
  /dev/sde3      VG1 lvm2 a--   <3.64t <644.24g ZFS_05           
  /dev/sde3      VG1 lvm2 a--   <3.64t <644.24g                  
  /dev/sde3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rmeta_0]   
  /dev/sde3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rimage_0]  
  /dev/sdf3      VG1 lvm2 a--   <3.64t <644.24g ZFS_06           
  /dev/sdf3      VG1 lvm2 a--   <3.64t <644.24g                  
  /dev/sdf3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rmeta_1]   
  /dev/sdf3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rimage_1]  
  /dev/sdg3      VG1 lvm2 a--   <3.64t <644.24g ZFS_08           
  /dev/sdg3      VG1 lvm2 a--   <3.64t <644.24g                  
  /dev/sdg3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rmeta_3]   
  /dev/sdg3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rimage_3]  
  /dev/sdh3      VG1 lvm2 a--   <3.64t <644.24g ZFS_07           
  /dev/sdh3      VG1 lvm2 a--   <3.64t <644.24g                  
  /dev/sdh3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rmeta_2]   
  /dev/sdh3      VG1 lvm2 a--   <3.64t <644.24g [SWAP_rimage_2]  
  /dev/sdi3      VG1 lvm2 a--   <3.64t  653.55g ZFS_09           
  /dev/sdi3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdj3      VG1 lvm2 a--   <3.64t  653.55g ZFS_10           
  /dev/sdj3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdk3      VG1 lvm2 a--   <3.64t  653.55g ZFS_12           
  /dev/sdk3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdl3      VG1 lvm2 a--   <3.64t  653.55g ZFS_11           
  /dev/sdl3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdm3      VG1 lvm2 a--   <3.64t  653.55g ZFS_13           
  /dev/sdm3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdn3      VG1 lvm2 a--   <3.64t  653.55g ZFS_14           
  /dev/sdn3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdo3      VG1 lvm2 a--   <3.64t  653.55g ZFS_15           
  /dev/sdo3      VG1 lvm2 a--   <3.64t  653.55g                  
  /dev/sdp3      VG1 lvm2 a--   <7.28t   <4.28t ZFS_16           
  /dev/sdp3      VG1 lvm2 a--   <7.28t   <4.28t                  
     
@ggzengel ggzengel added the Type: Defect Incorrect behavior (e.g. crash, hang) label Apr 29, 2024
@ggzengel
Contributor Author

ggzengel commented Apr 29, 2024

I have just seen that it wasn't the old data from HDD 16: HDDs 1-15 had been used in a pool before.

@ggzengel
Contributor Author

I have signatures from 3 zpools:

# for i in /dev/VG1/ZFS_??; do zdb -l $i; done | grep top_gui | sort -u
    top_guid: 1078628190459762462
    top_guid: 4577387146904413066
    top_guid: 6312352239864435547

@ggzengel
Contributor Author

Is it possible to wipe the wrong labels from the VDEVs?
I think I can import the pool if labels 2 and 3 are wiped from the VDEVs.

@rincebrain
Contributor

It is possible to wipe, but I wouldn't, because I do not think that is your problem.

Offhand, I would suggest you show the lvs output, not the pvs output.

@ggzengel
Contributor Author

I resized back to 1 TiB; zdb now shows a single top_guid and the pool could be imported.

@ggzengel
Contributor Author

# lvs
  LV          VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  SWAP        VG1 rwi-aor---  9.31g                                    100.00          
  SYSTEM      VG1 rwi-aor--- 46.56g                                    100.00          
  ZFS01       VG1 -wi-ao----  1.00t                                                    
  ZFS02       VG1 -wi-ao----  1.00t                                                    
  ZFS03       VG1 -wi-ao----  1.00t                                                    
  ZFS04       VG1 -wi-ao----  1.00t                                                    
  ZFS05       VG1 -wi-ao----  1.00t                                                    
  ZFS06       VG1 -wi-ao----  1.00t                                                    
  ZFS07       VG1 -wi-ao----  1.00t                                                    
  ZFS08       VG1 -wi-ao----  1.00t                                                    
  ZFS09       VG1 -wi-ao----  1.00t                                                    
  ZFS10       VG1 -wi-ao----  1.00t                                                    
  ZFS11       VG1 -wi-ao----  1.00t                                                    
  ZFS12       VG1 -wi-ao----  1.00t                                                    
  ZFS13       VG1 -wi-ao----  1.00t                                                    
  ZFS14       VG1 -wi-ao----  1.00t                                                    
  ZFS15       VG1 -wi-ao----  1.00t                                                    
  ZFS16       VG1 -wi-ao----  1.00t                                                    
  ZFS_CACHE01 VG1 -wi-ao---- 50.00g                                                    
  ZFS_CACHE02 VG1 -wi-ao---- 50.00g                                                    
  ZFS_ZIL01   VG1 -wi-ao----  5.00g                                                    
  ZFS_ZIL02   VG1 -wi-ao----  5.00g           

@rincebrain
Contributor

I would imagine that what you would probably like to do, if you want to remove whatever was there, is something like trimming the "unused" space underneath LVM.

The simplest way to do that might be to configure the rest of the space in each PV in a new LV, and then zero the whole thing. (I think you can do a similar game of not showing the underlying contents until first allocation with thin volumes on LVM, but that might be too much hassle to manage.)
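
Roughly, per slot, something like this (names like "scratch16" are made up, the sizes assume the 1 TiB to 3 TiB growth here, and there is no hard guarantee the regrown LV lands on exactly the zeroed extents, though with a single chunk of free space per PV it normally will):

# Claim the space you're about to grow into in a scratch LV on this PV.
lvcreate --yes -L 2TiB -n scratch16 VG1 /dev/disk/by-partlabel/LVM_SLOT16

# Zero it so no stale ZFS labels survive anywhere in that region.
blkdiscard -z /dev/VG1/scratch16    # or: dd if=/dev/zero of=/dev/VG1/scratch16 bs=1M

# Release the space and grow the real VDEV LV into it.
lvremove -y VG1/scratch16
lvextend -L +2TiB VG1/ZFS16 /dev/disk/by-partlabel/LVM_SLOT16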

You could probably extend zpool labelclear to clear only specific labels, but this shouldn't really come up in most scenarios: you would need a prior pool on the same storage, with part of that storage not having been visible before, and the old pool never cleanly destroyed, for this to happen, assuming that's the issue at hand.

I'm not sure whether I think extending zpool online -e to do something more forceful would be a good idea.

My idea of what's happening here is basically this:

  • underlying storage has the remains of old labels 2 and 3 from an active pool at the end of it, but the LV only maps to the first 1T or whatever, so it doesn't get overwritten
  • when zpool online -e runs, it basically proceeds as follows:
  1. check that it's labeled as whole_disk so it's allowed to extend things
  2. rewrite the partition table to cover the whole space
  3. reopen the device with the new partition layout, since Linux won't show the new partition layout until the device is closed and reopened
  4. rely on the normal handling of noticing a block device is longer than it used to be and labels 2 and 3 are missing

Unfortunately, in your case it seems that it did 3, and then at 4 it noticed valid, non-destroyed labels for two different pools at the two correct locations on the partition, and promptly errored out.

One could imagine triggering something like labelclear on the spots for labels 2 and 3 before doing the close-then-reopen dance. But this situation is so uncommon that I'm worried there might be some case where doing that would cover up real breakage.
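
Just to illustrate where those spots are: each of the four labels is 256 KiB, labels 0 and 1 sit at the front of the device and labels 2 and 3 in the last 512 KiB (for a 256 KiB-aligned device like an LV). A hand-rolled sketch, not a recommendation, would look something like:

dev=/dev/VG1/ZFS01                      # illustrative device
size=$(blockdev --getsize64 "$dev")     # current size of the (grown) LV

# See what currently sits in the four label slots:
zdb -l "$dev"

# Labels 2 and 3 occupy the last 2 x 256 KiB, so zeroing just that tail
# region would clear them. Left commented out because it is destructive
# and only safe if those labels really are stale:
# dd if=/dev/zero of="$dev" bs=256K count=2 oflag=seek_bytes seek=$(( size - 524288 ))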

(I also don't really suggest using LVM under ZFS, but that's a different discussion.)

@ggzengel
Contributor Author

ggzengel commented May 2, 2024

@behlendorf ZFS should ignore (and refresh?) wrong top/pool GUIDs when reopening VDEVs if there are multiple conflicting labels.

I created a 2 TiB spare LV, and LVM wiped zfs signatures while creating it.
After releasing the spare and resizing the VDEV LVs over this supposedly wiped space, the VDEVs immediately failed.

There are two facts about LVM:

  1. LVM does not wipe signatures when resizing LVs
  2. Not all zfs signatures are wiped (or labels are never wiped?)

So I have to zero 32 TiB before expanding ZFS.

# zpool status
  pool: zpool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 01:02:33 with 0 errors on Mon Apr 29 18:09:07 2024
config:

	NAME               STATE     READ WRITE CKSUM
	zpool1             DEGRADED     0     0     0
	  raidz3-0         DEGRADED     0     0     0
	    VG1-ZFS01      FAULTED      0     0     0  corrupted data
	    VG1-ZFS02      ONLINE       0     0     0
	    VG1-ZFS03      ONLINE       0     0     0
	    VG1-ZFS04      ONLINE       0     0     0
	    VG1-ZFS05      ONLINE       0     0     0
	    VG1-ZFS06      ONLINE       0     0     0
	    VG1-ZFS07      ONLINE       0     0     0
	    VG1-ZFS08      ONLINE       0     0     0
	    VG1-ZFS09      ONLINE       0     0     0
	    VG1-ZFS10      ONLINE       0     0     0
	    VG1-ZFS11      ONLINE       0     0     0
	    VG1-ZFS12      ONLINE       0     0     0
	    VG1-ZFS13      ONLINE       0     0     0
	    VG1-ZFS14      ONLINE       0     0     0
	    VG1-ZFS15      ONLINE       0     0     0
	    VG1-ZFS16      ONLINE       0     0     0
	logs	
	  mirror-1         ONLINE       0     0     0
	    VG1-ZFS_ZIL01  ONLINE       0     0     0
	    VG1-ZFS_ZIL02  ONLINE       0     0     0
	cache
	  VG1-ZFS_CACHE01  ONLINE       0     0     0
	  VG1-ZFS_CACHE02  ONLINE       0     0     0
# zdb -l /dev/VG1/ZFS01
------------------------------------
LABEL 0 
------------------------------------
    version: 5000
    name: 'zpool1'
    state: 0
    txg: 758540
    pool_guid: 10245595246600322219
    errata: 0
    hostid: 1719867327
    hostname: 'zfs1.hq1.zmt.info'
    top_guid: 1078628190459762462
    guid: 15958182755501637013
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 1078628190459762462
        nparity: 3
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 17592110546944
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15958182755501637013
            path: '/dev/mapper/VG1-ZFS01'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFPMfw7B33geeXfdgg3FGvoAzmycBg8VGt'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2270
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 1050945417998083787
            path: '/dev/mapper/VG1-ZFS02'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFy3WjmuKivlB7dzYsYIM434yfbs1sfcEr'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2269
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 5839789882220114128
            path: '/dev/mapper/VG1-ZFS03'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFfzISa1kspFTOSPbAgJlG7SooHFM0IvX1'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2268
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 6390359041192991295
            path: '/dev/mapper/VG1-ZFS04'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFRW0yTwq5qsvkRId7wexTxSbtJ3dFPvAR'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2267
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 12059085455868003514
            path: '/dev/mapper/VG1-ZFS05'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFLGpC9VKW4PIGeYvF5S3Gy6jPcv6T7RQY'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2266
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 3995831796923460053
            path: '/dev/mapper/VG1-ZFS06'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFqbyKUsTH181oqvh7eBaNVNrYexD9Ot4X'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2265
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 7856275499043659129
            path: '/dev/mapper/VG1-ZFS07'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuF5T7O1Mw3ArUBzJIXndmCcMeXuyMPxMff'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2264
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 3050318963605718613
            path: '/dev/mapper/VG1-ZFS08'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFCKCpnXNMRmmAMnWpI8MfiReRXbGIoVdx'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2263
            create_txg: 4
        children[8]:
            type: 'disk'
            id: 8
            guid: 10029903618149417330
            path: '/dev/mapper/VG1-ZFS09'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFpcKuIBrXmerVkqnHoKLi4Hh28GcHQg0k'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2262
            create_txg: 4
        children[9]:
            type: 'disk'
            id: 9
            guid: 7101734810324033498
            path: '/dev/mapper/VG1-ZFS10'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFhu0Ae0fgTgx3t3sE2aAvLTkg2JnyuD3w'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2261
            create_txg: 4
        children[10]:
            type: 'disk'
            id: 10
            guid: 3830632294722480229
            path: '/dev/mapper/VG1-ZFS11'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFsquwMrEGo4tqV3uxBHcOCSgW3pj22lbk'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2260
            create_txg: 4
        children[11]:
            type: 'disk'
            id: 11
            guid: 6175555787272715126
            path: '/dev/mapper/VG1-ZFS12'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFjLNIlkQr4xc2zHH1LHPVULw1jaCsvtni'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2259
            create_txg: 4
        children[12]:
            type: 'disk'
            id: 12
            guid: 1129704638296903490
            path: '/dev/mapper/VG1-ZFS13'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFtKcs1TByJD2wnBo9nz11eVZgLWWmfD9X'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2258
            create_txg: 4
        children[13]:
            type: 'disk'
            id: 13
            guid: 5945036595013075363
            path: '/dev/mapper/VG1-ZFS14'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuF6NViVLKHJ4a8XNv29Ae1asXGJLePVXRs'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2257
            create_txg: 4
        children[14]:
            type: 'disk'
            id: 14
            guid: 5098336806023622657
            path: '/dev/mapper/VG1-ZFS15'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFydMQZULUydUq1fKGCQVEZxEyqQrCZxiC'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2256
            create_txg: 4
        children[15]:
            type: 'disk'
            id: 15
            guid: 16953092468303889791
            path: '/dev/mapper/VG1-ZFS16'
            devid: 'dm-uuid-LVM-SymEPn1sxYrhcxE2hq3NlJSJlV9kMhuFO0p0nYMakHgWNicZh9ccDEhleJwcQGnV'
            phys_path: '/dev/disk/by-uuid/10245595246600322219'
            whole_disk: 0
            DTL: 2255
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 
------------------------------------
LABEL 2 
------------------------------------
    version: 5000
    name: 'zpool1'
    state: 0
    txg: 16537931
    pool_guid: 15652696854719846007
    errata: 0
    hostid: 3337622759
    hostname: 'px2.rad-ffm.local'
    top_guid: 6312352239864435547
    guid: 17014789416555187233
    vdev_children: 2
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 6312352239864435547
        nparity: 3
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 26388241317888
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17014789416555187233
            path: '/dev/mapper/VG1-ZFS01'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qERn7AiwXBA403m2Nk2oH79EY3JQ88wm3'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/1'
            whole_disk: 0
            DTL: 953
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 14827704622120264462
            path: '/dev/mapper/VG1-ZFS02'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qsQy3WDOwpdMiWzwqgtjeRMB5RAWtDyaB'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/3'
            whole_disk: 0
            DTL: 952
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 15888366119334987690
            path: '/dev/mapper/VG1-ZFS03'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qcd4GE6h3JAoH2MNW6R7V8SVYl278Yqy9'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/5'
            whole_disk: 0
            DTL: 951
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 490686109886100976
            path: '/dev/mapper/VG1-ZFS04'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qzQBC53AKFw9Fm0BM7tEgzbHfFoTtryNP'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/7'
            whole_disk: 0
            DTL: 950
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 6044335431130424596
            path: '/dev/mapper/VG1-ZFS05'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qNqB93aO7R1MIJRJclIcjWjiYtF5YkyYD'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/0'
            whole_disk: 0
            DTL: 949
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 9473278246285965181
            path: '/dev/mapper/VG1-ZFS06'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qxe8FfnBT1YjnySwStMMhbNHoCkgE2emj'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/2'
            whole_disk: 0
            DTL: 948
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 4177742109969957892
            path: '/dev/mapper/VG1-ZFS07'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qCoqScwwRGKrLC9efhN0VmQmD9Kg4aSN3'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/4'
            whole_disk: 0
            DTL: 947
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 16929925576382100918
            path: '/dev/mapper/VG1-ZFS08'
            devid: 'dm-uuid-LVM-xOBikas8n6zfpXGB8iXF2L3p6voDCZ0qK18X9YDJgjPkotbH59STaYMzeZpP2MeT'
            vdev_enc_sysfs_path: '/sys/class/enclosure/0:0:8:0/6'
            whole_disk: 0
            DTL: 946
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3 
# for i in `seq -w 16`; do lvcreate --yes -L 2TiB VG1 -n S$i /dev/disk/by-partlabel/LVM_SLOT$i; done
  Wiping zfs_member signature on /dev/VG1/S01.
  Wiping zfs_member signature on /dev/VG1/S01.
  Wiping zfs_member signature on /dev/VG1/S01.
  <snip: the same "Wiping zfs_member signature" line repeated 61 times in total for /dev/VG1/S01>
  Logical volume "S01" created.
  Wiping zfs_member signature on /dev/VG1/S02.
  Wiping zfs_member signature on /dev/VG1/S02.
<snip>
