zpool create assigns same partlabel to all partitions #4076

Closed · rabnab opened this issue Dec 8, 2015 · 5 comments

rabnab commented Dec 8, 2015

zpool create assigns the same partition label ('zfs') to all pristine disks added to a raidz2.
This triggers an error in systemd < 219 due to duplicate entries in sysfs.

zpool create could assign a different partition label to each partition (e.g. zfs1, zfs2, ...) to prevent this.
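
A quick way to confirm the duplicate labels (a sketch; the lsblk PARTLABEL column needs a reasonably recent util-linux, and the device names sdb through sdg match the journal below):

# Every ZFS data partition reports the same PARTLABEL "zfs":
lsblk -o NAME,PARTLABEL
# Or, per partition, via blkid:
blkid -s PARTLABEL /dev/sd[b-g]1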

My system is Fedora 22 (FC22) running in VMware Player, with zfs-0.6.5.3-1.fc22 installed.
I created the pool with:

[root@localhost ~]# /sbin/zpool create -f tank raidz2 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_01000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_02000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_03000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_04000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_05000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_06000000000000000001

Afterwards, only one link is created in /dev/disk/by-partlabel:

[root@localhost ~]# ls -l /dev/disk/by-partlabel
lrwxrwxrwx. 1 root root 10  8. Dez 18:34 zfs -> ../../sdb1

The systemd journal shows errors, although no actual malfunction was observed:

[root@localhost ~]# journalctl --since 18:21 --until 18:21:25
-- Logs begin at Mon 2015-12-07 23:41:04 CET, end at Die 2015-12-08 18:21:58 CET. --
Dez 08 18:21:22 localhost.localdomain sudo[34910]:   john : TTY=tty2 ; PWD=/home/john ; USER=root ; COMMAND=/sbin/zpool create -f tank raidz2 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_01000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_02000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_03000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_04000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_05000000000000000001 /dev/disk/by-id/ata-VMware_Virtual_SATA_Hard_Drive_06000000000000000001
Dez 08 18:21:22 localhost.localdomain audit[34910]: <audit-1123> pid=34910 uid=1000 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/john" cmd=7A706F6F6C20637265617465202D662074616E6B20726169647A32202F6465762F6469736B2F62792D69642F6174612D564D776172655F5669727475616C5F534154415F486172645F44726976655F3031303030303030303030303030303030303031202F6465762F6469736B2F62792D69642F6174612D564D776172655F5669727475616C5F534154415F486172645F44726976655F3032303030303030303030303030303030303031202F6465762F6469736B2F62792D69642F6174612D564D776172655F5669727475616C5F534154415F486172645F44726976655F3033303030303030303030303030303030303031202F6465762F6469736B2F62792D69642F6174612D564D776172655F5669727475616C5F534154415F486172645F44726976655F3034303030303030303030303030303030303031202F6465762F6469736B2F62792D69642F6174612D564D776172655F5669727475616C5F534154415F486172645F44726976655F3035303030303030303030303030303030303031202F6465762F6469736B2F62792D69642F6174612D564D776172655F5669727475616C5F534154415F486172645F44726976655F3036303030303030303030303030303030303031 terminal=tty2 res=success'
Dez 08 18:21:22 localhost.localdomain audit[34910]: <audit-1110> pid=34910 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/tty2 res=success'
Dez 08 18:21:22 localhost.localdomain sudo[34910]: pam_unix(sudo:session): session opened for user root by john(uid=0)
Dez 08 18:21:22 localhost.localdomain audit[34910]: <audit-1105> pid=34910 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/tty2 res=success'
Dez 08 18:21:22 localhost.localdomain kernel:  sdb: sdb1 sdb9
Dez 08 18:21:22 localhost.localdomain kernel:  sdb: sdb1 sdb9
Dez 08 18:21:22 localhost.localdomain kernel:  sdb: sdb1 sdb9
Dez 08 18:21:22 localhost.localdomain kernel:  sdc: sdc1 sdc9
Dez 08 18:21:22 localhost.localdomain kernel:  sdc: sdc1 sdc9
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata3/host2/target2:0:0/2:0:0:0/block/sdc/sdc1
Dez 08 18:21:23 localhost.localdomain kernel:  sdd: sdd1 sdd9
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata3/host2/target2:0:0/2:0:0:0/block/sdc/sdc1
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata4/host3/target3:0:0/3:0:0:0/block/sdd/sdd1
Dez 08 18:21:23 localhost.localdomain kernel:  sde: sde1 sde9
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata5/host4/target4:0:0/4:0:0:0/block/sde/sde1
Dez 08 18:21:23 localhost.localdomain kernel:  sdf: sdf1 sdf9
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata5/host4/target4:0:0/4:0:0:0/block/sde/sde1
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata6/host5/target5:0:0/5:0:0:0/block/sdf/sdf1
Dez 08 18:21:23 localhost.localdomain kernel:  sdg: sdg1 sdg9
Dez 08 18:21:23 localhost.localdomain systemd[1]: Device dev-disk-by\x2dpartlabel-zfs.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata2/host1/target1:0:0/1:0:0:0/block/sdb/sdb1 and /sys/devices/pci0000:00/0000:00:11.0/0000:02:05.0/ata7/host6/target6:0:0/6:0:0:0/block/sdg/sdg1
Dez 08 18:21:23 localhost.localdomain kernel: SPL: using hostid 0x00000000
Dez 08 18:21:23 localhost.localdomain zed[35267]: eid=1 class=statechange
Dez 08 18:21:23 localhost.localdomain zed[35269]: eid=2 class=statechange
Dez 08 18:21:24 localhost.localdomain zed[35271]: eid=3 class=statechange
Dez 08 18:21:24 localhost.localdomain zed[35273]: eid=4 class=statechange
Dez 08 18:21:24 localhost.localdomain zed[35275]: eid=5 class=statechange
Dez 08 18:21:24 localhost.localdomain zed[35277]: eid=6 class=statechange
Dez 08 18:21:24 localhost.localdomain zed[35410]: eid=7 class=config.sync pool=tank
Dez 08 18:21:24 localhost.localdomain sudo[34910]: pam_unix(sudo:session): session closed for user root
Dez 08 18:21:24 localhost.localdomain audit[34910]: <audit-1106> pid=34910 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/tty2 res=success'
Dez 08 18:21:24 localhost.localdomain audit[34910]: <audit-1104> pid=34910 uid=0 auid=1000 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/tty2 res=success'

The partition names of /dev/sd[b-f]1 are indeed identical:

[root@localhost ~]# find /dev -name "sd[b-f]" -print -exec gdisk -l '{}' ';'

/dev/sdf
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdf: 4194304 sectors, 2.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): E049911B-054A-6D42-908A-C68DE6F3A416
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 4194270
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4175871   2.0 GiB     BF01  zfs
   9         4175872         4192255   8.0 MiB     BF07  
/dev/sde
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sde: 4194304 sectors, 2.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): F3CA6EEA-937A-654C-B438-DB4623B3172B
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 4194270
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4175871   2.0 GiB     BF01  zfs
   9         4175872         4192255   8.0 MiB     BF07  
/dev/sdd
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdd: 4194304 sectors, 2.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 023110A4-6F31-F947-9AFC-6D9963244C7C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 4194270
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4175871   2.0 GiB     BF01  zfs
   9         4175872         4192255   8.0 MiB     BF07  
/dev/sdc
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 4194304 sectors, 2.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 048F3322-174C-3640-B8B6-AAAECD112080
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 4194270
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4175871   2.0 GiB     BF01  zfs
   9         4175872         4192255   8.0 MiB     BF07  
/dev/sdb
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 4194304 sectors, 2.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 57D84DAA-D742-AF48-B9B7-BCE67C048753
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 4194270
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4175871   2.0 GiB     BF01  zfs
   9         4175872         4192255   8.0 MiB     BF07  

rabnab commented Dec 8, 2015

One workaround to avoid the log clutter is to rename the partitions by hand with:

[root@localhost ~]# sgdisk -c 1:<unique-label> /dev/<disk>

However, I am not 100% sure this is safe. It works in my test environment, but I will only do this on a production system if someone knowledgeable convinces me that it is safe.

Be careful to rename only the partitions that are part of your pool; sgdisk does not ask for confirmation.
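
A minimal sketch of that workaround, assuming the six pool members are sdb through sdg (as in the journal above), that the data partition is always partition 1, and that partprobe is available; the zfs-<disk> naming scheme is purely illustrative:

# Give each pool member's data partition a unique label,
# then have the kernel re-read the partition table.
for disk in sdb sdc sdd sde sdf sdg; do
    sgdisk -c 1:"zfs-${disk}" "/dev/${disk}"
    partprobe "/dev/${disk}"
done
ls -l /dev/disk/by-partlabel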

behlendorf (Contributor) commented:

This is unfortunate, but it's not 100% clear we should do anything about it. At the time the zpool create command creates the partitions, it doesn't have much information with which to build useful labels. All it really knows is the path to the device and the pool name; not even the GUID which will be written to the ZFS label has been generated yet. And even if we were to somehow generate good partition names which described the pool, we'd then want to keep those names in sync with the ZFS labels. The best we may be able to do is to generate random names which include the pool name.
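
For illustration only (this is not something zpool create does today), a "<pool>-<random suffix>" scheme like the one suggested above could be sketched in shell as follows; od and /dev/urandom are the only tools assumed:

# Hypothetical label: pool name plus 8 random hex characters,
# e.g. "tank-3fa2b91c". Nothing in ZFS generates such labels today.
pool=tank
suffix=$(od -An -N4 -tx1 /dev/urandom | tr -d ' \n')
label="${pool}-${suffix}"
echo "${label}"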

@rabnab As for changing the partition label: that's safe, nothing depends on it.


rabnab commented Dec 9, 2015

Thank you for the reassurance that partition labels can be changed. I renamed them on my systems and everything works as expected, with no errors.

Since systemd is the entity that creates the device units, it should handle duplicate GPT partlabels gracefully; nobody can guarantee that two partitions will never share the same partlabel.
I may open an issue on the systemd side.

I am closing this issue and migrating to FC23, which has a more recent version of systemd available.

rabnab closed this as completed Dec 9, 2015

tuxoko commented Mar 23, 2016

@behlendorf
But if nothing depends on the partlabel, why set it in the first place?

behlendorf (Contributor) commented:

@tuxoko It's just something we inherited from the illumos EFI library. We could remove it.
