Cannot create zpool using links in /dev/disk/by-id #3708
Thanks for the bug report. For now, try using the short names, ata-ST2000VN0001-1SF174_Z4H047T6 and ata-ST2000VN0001-1SF174_Z4H033G5.
The same is happening for me on Debian Jessie, even with short names, without the "/dev/disk/by-id" prefix. I also found a workaround: create the pool with device names (e.g. sda, sdb), then destroy the pool and recreate it with the previous command. This way I could use disk IDs while creating the pool.
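A minimal sketch of that create-destroy-recreate sequence, assuming placeholder pool and device names (tank, sda/sdb, and the example serials are not from this thread):

```sh
# 1. Create the pool with plain device names first (placeholders throughout)
zpool create tank /dev/sda /dev/sdb

# 2. Destroy it again
zpool destroy tank

# 3. Re-run the create using the by-id links; per the comment above, this now succeeds
zpool create tank /dev/disk/by-id/ata-EXAMPLE_SERIAL_A /dev/disk/by-id/ata-EXAMPLE_SERIAL_B
```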
This is also happening on OpenSuse 13.2. It does not happen on OpenSuse 13.1. Both were compiled with the latest git. It creates partitions 1 and 9 on the drives, but I think maybe they are not showing up fast enough in the /dev/disk/by-id/ directory?
@rlanyi Instead of destroying the zpool, you can export and then import it as shown below. Cheers.
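The snippet @dracwyrm refers to would look roughly like this; the pool name is a placeholder, not taken from the thread:

```sh
# Export the pool that was created with /dev/sdX names
zpool export tank

# Re-import it, telling zpool to scan the stable by-id links instead
zpool import -d /dev/disk/by-id tank
```

After the re-import, zpool status shows the vdevs under their /dev/disk/by-id names.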
I had a similar issue today. It worked via sdb and sdc but not with disk IDs. I realized that if I used lower case for the pool name it worked, i.e., the lower-case create did work.
I'm having the same issue. The name of the pool probably makes no difference for me since mine was lowercase already. Could it be an issue with symbolic links? Since that's what the by-id names are for me.
What @dracwyrm suggested seems to work though, so thanks!
Just ran into this bug myself today; wasted an hour fighting to create a new zpool for the first time using ZoL with by-id names until I came across this thread and tried creating with the /dev/sd* names instead. Worked fine, and once my migration is done I'll re-import by-id as described by dracwyrm to clean things up. I'm running under Fedora 22 if that helps.
This also happened to me. I applied the workaround suggested by @dracwyrm and it was ok. I'm running under Arch Linux (2015.11.01).
Another "one or more devices is currently unavailable" sufferer when trying to create a zpool with "by-id" devnames... This is on Ubuntu 15.04 (ran into it on two separate systems so far). Again, @dracwyrm's workaround solved the problem for me. I did have one case where a 2nd/3rd attempt to create with by-id devnames did work, however (fwiw, I used the "-f" flag on create). It didn't work for me on this last system, though.
Another one here. Adding raidz1-10 (sdae, sdaf, sdag) in a 36-bay AMD Supermicro. CentOS 7, kmod-zfs-0.6.5.3-1.el7.centos.x86_64 -- also fixed by @dracwyrm's workaround.
Another one here.

root@debian:/dev/disk/by-id# zpool create -f mypool2 /dev/disk/by-id/scsi-35000c5008e59858f /dev/disk/by-id/scsi-35000c5008e5ba103

Works with names :+1:

errors: No known data errors
Got a different error message today:
adding
-- note that the latest additions:
Just thought I'd come in and say @dracwyrm's workaround is also working for me on Fedora 22. Thanks!
This is still happening. I'm trying to write some software that will work generically across different OSes and systems running ZFS, and using the WWN ID to specify the drives is the best way to accomplish that. Is this being looked at at all? I'm running CentOS 7. _EDIT_
I'm seeing this as well on CentOS 7 on zpool creates, but to me, more critically, on zpool replaces. Managing some large JBODs, I have scripts that walk the disks looking for failures and start the rebuild after the new disk is inserted. I can't get consistent success on the replace, either by hand or in the scripts, when using anything besides the raw /dev/sdXX device. Same error: "no such pool or dataset". It's not too painful to create zpools, export and re-import by /dev/disk/by-vdev, but I need to replace disks without having to take the system down to export and reimport.
I just hit this problem myself, on the Proxmox 4.1 distribution of Debian, on kernel 4.3, with ZFS on Linux.
@thomasfrivold: yes, we know. The problem is downtime; there wasn't supposed to be any.
And secondly, using the workaround in a utility meant to run on different systems with foreign configs is really sloppy and will be prone to failure. We need the issue fixed. I don't have a lot of experience digging into open source software (just writing my own convenience stuff), but I suppose I could take a crack at it.
As mentioned above, the problem is likely that the partition links under /dev/disk/by-id/ aren't being created in time. When given a whole block device, zpool partitions it and then waits for the expected partition link to appear before the kernel modules try to open it. Unfortunately, I wasn't able to easily reproduce this issue, so I'm hoping someone who can will answer a few questions for me:
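One way to poke at the race from a shell, as an illustrative sketch only (the device path is a placeholder and none of these commands come from the thread):

```sh
DISK=/dev/disk/by-id/ata-EXAMPLE_SERIAL   # placeholder by-id link

# Attempt the create; on affected systems it fails with
# "one or more devices is currently unavailable"
zpool create testpool "$DISK"

# Partitions 1 and 9 do get written, but their by-id links may lag behind:
ls -l "${DISK}-part1" "${DISK}-part9"

# After udev has drained its event queue the links are there:
udevadm settle
ls -l "${DISK}-part1" "${DISK}-part9"
```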
I'm pretty sure it's less than 30 seconds (and it's
My code for replacement is in a repo here on GitHub: https://github.com/jdmaloney/zfs_utils. I can post some exact output tomorrow, as I have this system available for testing.
Out of order:
So 2) does not apply, of course. Just for completeness:
@behlendorf I've seen this bug before, relevant snippet from an IRC discussion:
@behlendorf OK, the problem is that the interval between the stat() retries needs to be longer here:
I'm not sure what's needed in order to be as robust as possible without being silly, but that seems to have been sufficient.
@behlendorf @ilovezfs I thought the race condition was caused by https://github.com/zfsonlinux/zfs/blob/master/lib/libzfs/libzfs_pool.c#L4282 -- we seem to be waiting for the disk's symlinks to get created, not for the disk's first partition.

Here's an example run:

Here's the same, using a fresh GPT without partitions 1 and 9:

And here, after zapping all GPT and MBR structures:
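"Zapping all GPT and MBR structures" can be done along these lines; the exact commands used above aren't preserved, so treat this as a generic, destructive example with a placeholder device:

```sh
# WARNING: wipes all partition-table structures on the placeholder device
sgdisk --zap-all /dev/sdX   # remove GPT (primary and backup) plus protective MBR
wipefs -a /dev/sdX          # clear any remaining filesystem or RAID signatures
```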
@dasjoe I'm not sure I follow you entirely. We should be waiting for the partition symlink (-partX) to be created in the expected place by udev. Are you saying those partition symlinks aren't being created by udev for some reason? Just the device itself?
@behlendorf Nevermind, I just realized However, it is interesting that
Hmm... IIRC when I started playing with zfs,
@dmaziuk as of now
Ah, that's what it was. Well, I'm maxed out in the chassis I've been adding disks to and it looks like I won't have an opportunity to test it with
I think this is the same situation as above, so I can test it for you :)

[root@itf]# zpool create dpool05 wwn-0x5000cca23b0e93a8
/dev/disk/by-vdev/wwn-0x5000cca23b0e93a8 contains a corrupt primary EFI label.
[root@itf]# ls /dev/disk/by-id/wwn-0x5000cca23b0e93a8*
[root@itf]# zpool create -f dpool05 wwn-0x5000cca23b0e93a8
I am also affected by this bug. The problem is that I am running Ubuntu 16.04 LTS with the official zfs modules, at the moment 0.6.5.6-0ubuntu8. If I try to exchange a disk in a pool, no matter how the pool was imported, the offlined and removed disk is known to zfs by id, even if I imported the pool by dev. An attempt to replace, either with the disk id or the device, gives the error. It also doesn't matter whether I erase the GPT and filesystem structures or not; it just takes a few seconds longer until the error message appears. Forcing with -f also has no effect. To prepare the disk for zfs, it should be enough to erase the first and last 100 MB on it, no?
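For what it's worth, erasing the first and last 100 MiB could look like the sketch below. The device is a placeholder, this is destructive, and it is only an illustration of the idea, not advice from the maintainers:

```sh
DISK=/dev/sdX                          # placeholder device
SECTORS=$(blockdev --getsz "$DISK")    # size in 512-byte sectors

# zero the first 100 MiB
dd if=/dev/zero of="$DISK" bs=1M count=100

# zero the last 100 MiB (start position rounded down to a whole MiB)
dd if=/dev/zero of="$DISK" bs=1M count=100 seek=$(( SECTORS / 2048 - 100 ))
```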
When ZFS partitions a block device it must wait for udev to create both a device node and all the device symlinks. This process takes a variable length of time and depends on factors such as how many links must be created, the complexity of the rules, etc. Complicating the situation further, it is not uncommon for udev to create and then remove a link multiple times while processing the rules.

Given the above, the existing scheme of waiting for an expected partition to appear by name isn't 100% reliable. At this point udev may still remove and recreate the link, resulting in the kernel modules being unable to open the device.

In order to address this the zpool_label_disk_wait() function has been updated to use libudev. Until the registered system device acknowledges that it is fully initialized the function will wait. Once fully initialized, all device links are checked and allowed to settle for 50ms. This makes it far more certain that all the device nodes will exist when the kernel modules need to open them. For systems without libudev an alternate zpool_label_disk_wait() was implemented which includes a settle time.

In addition, the kernel modules were updated to include retry logic for this ENOENT case. Due to the improved checks in the utilities it is unlikely this logic will be invoked; however, in the rare event it is needed it will prevent a failure.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#3708 Issue openzfs#4077 Issue openzfs#4144 Issue openzfs#4214 Issue openzfs#4517
This issue has been addressed in master by commit 2d82ea8, and we'll look into backporting it for 0.6.5.7. As always, if you're in a position where you can provide additional verification of the fix applied to master, it would be appreciated. This ended up being a subtle timing issue, so the more real-world validation of the fix the better.
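A rough way to provide that validation, sketched with placeholder pool and disk names (not a procedure prescribed in this thread):

```sh
# Repeatedly create and destroy a throwaway pool via its by-id link; before the
# fix this loop would intermittently fail with
# "one or more devices is currently unavailable".
for i in $(seq 1 20); do
    zpool create -f testpool /dev/disk/by-id/ata-EXAMPLE_SERIAL || break
    zpool destroy testpool
done
```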
On Ubuntu 16.04 LTS:
When ZFS partitions a block device it must wait for udev to create both a device node and all the device symlinks. This process takes a variable length of time and depends on factors such as how many links must be created, the complexity of the rules, etc. Complicating the situation further, it is not uncommon for udev to create and then remove a link multiple times while processing the udev rules.

Given the above, the existing scheme of waiting for an expected partition to appear by name isn't 100% reliable. At this point udev may still remove and recreate the link, resulting in the kernel modules being unable to open the device.

In order to address this the zpool_label_disk_wait() function has been updated to use libudev. Until the registered system device acknowledges that it is fully initialized the function will wait. Once fully initialized, all device links are checked and allowed to settle for 50ms. This makes it far more likely that all the device nodes will exist when the kernel modules need to open them. For systems without libudev an alternate zpool_label_disk_wait() was updated to include a settle time.

In addition, the kernel modules were updated to include retry logic for this ENOENT case. Due to the improved checks in the utilities it is unlikely this logic will be invoked. However, in the rare event it is needed it will prevent a failure.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Richard Laager <rlaager@wiktel.com>
Closes openzfs#4523 Closes openzfs#3708 Closes openzfs#4077 Closes openzfs#4144 Closes openzfs#4214 Closes openzfs#4517
I'm currently on Ubuntu 16.04 with the latest available ZFS being 0.6.5.6 (0.6.5.7 is available in 16.10, but I don't feel like mucking with that right now :-) ). I was able to create pools on SSDs without issue, but the HDDs were too slow and would fail with this error. On a whim, I decided to try maxing out the CPU to induce an artificial delay... and it worked! I can reliably create and destroy the pool with "stress -c 16" running in the background. YMMV.
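Spelled out, that workaround amounts to something like the following; the pool and disk names are placeholders:

```sh
# keep the CPUs busy so the by-id links win the race against zpool create
stress -c 16 &
STRESS_PID=$!

zpool create tank /dev/disk/by-id/ata-EXAMPLE_SERIAL

kill "$STRESS_PID"
```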
The following command failed for me:
New partitions like xxxxxx-part1 and xxxxx-part9 were created after the command execution.
This workaround works for me:
PS: I'm running Ubuntu 16.04.
hello there,
This worked awesome; for some reason I was totally confused, thinking I needed to put the actual disk id of one of the devices on the command line so it would find the devices, like so:

$ sudo zpool import -d /dev/disk/by-id/

So if anyone else gets confused by the instructions elsewhere: just let zpool do the work, don't overcomplicate it. ;-)
It's not just by-id. It seems the zpool utility can't directly handle anything other than "standard" devices -- e.g. sdX -- other than for import. Once you create a pool, you can export it and re-import it using just about any format you want: by-id, by-path, by-partid, etc. But if you need to replace a disk, or detach a spare, or whatever, none of those subcommands work using that same disk format. You then have to export and re-import the pool specifying /dev so it gets the "standard" device names for the disks, and then you can run the subcommands. I'm on CentOS 7 using the latest available zfs, which is a higher release than most of what I've seen in this thread, and it still has this annoyance. zpool should have some way of determining what format its vdevs are using as far as device names go, and allow its subcommands to use that format.
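The dance described above, as a sketch with placeholder pool and device names:

```sh
# re-import so the vdevs show up under their /dev/sdX names
zpool export tank
zpool import -d /dev tank

# subcommands that refused the by-id form now work, e.g. replacing a disk
zpool replace tank sdd sdk

# switch back to the stable names afterwards
zpool export tank
zpool import -d /dev/disk/by-id tank
```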
Disappointing to see this 2016 bug still around. What worked was adding by /dev/sdX (which we are explicitly advised NOT to do) and then using dracwyrm's 7-year-old workaround. Really, in 7 years no one has fixed this? Specifically, I get this error:
I set up aliases in /etc/zfs/vdev_id.conf. But using the /dev/disk/by-id/ names also fails every time, though I think the error differs slightly:
The system is Slackware (Unraid).
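For context, /etc/zfs/vdev_id.conf aliases look roughly like this; the names and WWNs below are made-up examples, not the commenter's actual configuration:

```
# /etc/zfs/vdev_id.conf -- map friendly names to stable device links
alias disk1  /dev/disk/by-id/wwn-0x5000c500aaaaaaa1
alias disk2  /dev/disk/by-id/wwn-0x5000c500aaaaaaa2
```

After editing the file, running udevadm trigger (or rebooting) regenerates the /dev/disk/by-vdev links that zpool can then be pointed at.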
ZFS refuses to create a new pool using links in /dev/disk/by-id. Using device nodes directly in /dev/ works fine. See snippet:
Is there some way to make zpool create more verbose?
I am running debian jessie with zfsutils 0.6.4-1.2-1
Thank you
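The reporter's snippet isn't preserved above, but the failure pattern described throughout the thread looks roughly like this, with placeholder pool and disk names:

```sh
# fails: cannot create 'tank': one or more devices is currently unavailable
zpool create tank /dev/disk/by-id/ata-EXAMPLE_SERIAL_A /dev/disk/by-id/ata-EXAMPLE_SERIAL_B

# works: the same create using the raw device nodes
zpool create tank /dev/sda /dev/sdb
```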