Since kernel 3.2.0-21 I have to import zpool on every reboot. #703
Comments
This sounds like an issue for @dajhorn.
Ubuntu 12.04 Precise Pangolin shipped with the Linux 3.2.0-23-generic kernel package; build 24 isn't published. The zvol disappearance problem isn't particular to Ubuntu; it seems to be an upstream ZoL problem. Attach a copy of the custom zdev.conf file to this ticket.

The import problem might be resolved by recreating the zpool.cache file.

Recent Ubuntu kernels behave poorly when RAID HBAs are slow to bring drives online, especially if they cause hotplug events. There is no fix for this except to put a long pause in mountall.
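For readers hitting the same thing, a minimal sketch of the zpool.cache recreation step referred to above; the pool name tank is a placeholder and the cachefile path is the usual ZoL default, so adjust both for the actual system:

```sh
# Export the pool so any stale cache entry is dropped, then remove the cache file.
sudo zpool export tank
sudo rm -f /etc/zfs/zpool.cache

# Re-import the pool using the stable by-id names; importing rewrites the cache
# file as long as the cachefile property points at the default location.
sudo zpool import -d /dev/disk/by-id tank
sudo zpool set cachefile=/etc/zfs/zpool.cache tank
```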
This is my zdev.conf: http://pastebin.com/nTmnbpat

I can assure you I'm not using an upstream kernel. On 04/27/2012 I did an update/upgrade and ended up with -24:

Apr 27 10:10:29 Pollux kernel: [ 0.000000] Linux version 3.2.0-24-generic (buildd@yellow) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 (Ubuntu 3.2.0-24.37-generic 3.2.14)

So you want me to recreate the zpool.cache file and put a pause in mountall?
Is that correct? Since I am importing on every reboot right now, the only difference would be me deleting the .cache file.

Where would I go about placing that pause in mountall exactly? Also, it's not that my mount points aren't mounting automatically; my pools just aren't there. Would a pause at mountall be the right spot? Wouldn't it need to be done earlier than that? Booting with -24 doesn't take any less time than booting -20. On -20 the pools have always shown up, and on -21 to -24 they have never shown up. The RocketRAID controller and motherboard controller do their BIOS init before an Areca 1261ML that's in there as well, which holds the boot part of the system. The Areca takes 1 minute to do its spin-up, by which time any spin-up by the RR or the MB controller is done. (The Areca has 2 volumes which are regular ext4.)

Also, as I mentioned in my initial post, I would really like to know how dangerous it is to import my pools at every reboot, if at all. Thanks in advance for any reply.
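The thread doesn't spell out where the pause should go, so the following is only one generic way to buy slow controllers more time, an assumption on my part rather than dajhorn's recommendation: add a rootdelay to the kernel command line so the whole boot waits before filesystems are touched.

```sh
# /etc/default/grub -- add a delay (in seconds) before the kernel mounts the root fs,
# which also gives slow HBAs time to bring their drives online.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=90"

# Regenerate the GRUB configuration so the new parameter takes effect.
sudo update-grub
```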
Hi all, I seem to be having the same issue. After upgrading to 12.04, I have to import pools on every reboot.
I've cleared the zpool.cache, imported and exported the pool a bunch of times, and rebooted; still the same issue. Also tried using /dev/sd* instead of by-id, which did not make a difference. I am running this on an HP Microserver, just 2 disks in a mirror. The OS is on a USB stick, so I'd say the HDs have more than enough time to spin up before the OS loads. I'm on the ppa:zfs-native/stable.
The strange thing is that running the same setup in VirtualBox seems to be working without any issues...
@bastetone Please post the entire dmesg output.
Sure thing,
I have not made any changes to ZoL files; the dmesg output is attached. Thanks for looking into this!
@bastetone First, you need to resolve the error message in your dmesg output showing spl using hostid 0xffffffff.
Automatic pool import is disabled if the hostid is missing or incorrect. If the spl module reports a hostid of 0xffffffff, then it could not read a valid value. The PPA packages should create the /etc/hostid file during installation, so check that it exists and that the hostid command returns something sensible.

Second, the fast boot on this machine makes timing races like this easier to hit. // This is a nice little computer. It boots very fast.
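A quick sketch of how the hostid situation can be checked; note that zgenhostid only exists on newer OpenZFS releases, so the last command is an assumption about the installed tools rather than part of the 2012-era PPA packages:

```sh
# Which hostid did the SPL module pick up? 0xffffffff means it found nothing usable.
dmesg | grep -i hostid

# What does userland report, and does /etc/hostid exist at all?
hostid
ls -l /etc/hostid

# On current OpenZFS releases this writes a matching binary /etc/hostid.
sudo zgenhostid "$(hostid)"
```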
OK, reading up on bastetone's issue I checked dmesg. I also had spl using hostid 0xffffffff. Then I remembered that at some point some time ago I edited my initramfs to include the zfs module to try and fix something; I ended up switching to the daily PPA, which fixed that. I followed what was written here:

It was on the initial day I built that computer, and bigger things were going wrong than that, so I ended up forgetting to change it back to what it was. So today I checked my initramfs setup and removed zfs from the modules list. The pools are there again after a reboot.

Sorry for the confusion that resulted from this. Hopefully, if someone ends up in the same situation, this gives them a hint on how to fix it.

Kind regards
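For anyone who ends up in the same situation, a sketch of the fix described above, assuming the stray module entry was added to the standard initramfs-tools list:

```sh
# Look for a manually added zfs (or spl) line in the initramfs module list.
grep -n 'zfs\|spl' /etc/initramfs-tools/modules

# Delete the offending line(s), then rebuild the initramfs so the module is no
# longer force-loaded before /etc/hostid and the pool devices are available.
sudo update-initramfs -u
```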
Closed, since the initial issue is resolved. bastetone, if your problem isn't fixed by what dajhorn suggested, feel free to reopen it.
@Phoenixxl Okay, good, and thanks for reporting the problem. This ticket is a reason to implement the
I have to confess, I'd previously added zfs to the initrd and had completely forgotten about it. I've removed the line and rebuilt the initramfs. I now have the pool online and the fs available on reboot. Thanks for your help @dajhorn and @Phoenixxl!
Using Precise Pangolin: in kernel -20 my pools show up when doing a zpool status and survive a reboot.
From -21 onward to -24, zpools are gone after every reboot.
How dangerous is it to import them every reboot? Does that seem potentially harmful? Currently it is my only option if I want to use the final release of Ubuntu 12.04.
There is already the issue, in -20 and earlier, that zvols don't survive a reboot, so I had to rename those on every reboot when using -20. Adding the fact that I now need to import my pools on each reboot, I'm really getting worried.
I am using this as a motherboard: http://www.asus.com/Motherboards/Intel_Socket_2011/P9X79_PRO/#specifications
There is a first zpool on 4 of the SATA/300 connectors on the MB.
I am also using a RocketRAID 2720SGL with 8 drives connected (http://www.highpoint-tech.com/USA_new/CS-series_rr272x.htm), and have 2 L2ARC SSDs on my SATA/600 connectors on the MB for this second pool.
I am using a zdev.conf file to give the drives comprehensible names.
/dev/disk/zpool is populated after every reboot though.
But both pools fail to show up after reboot.
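For context, an illustrative zdev.conf in the two-column style the Ubuntu ZoL packaging used; the alias names and by-path links below are made up for the example, and the real file for this system is in the pastebin linked earlier in the thread:

```sh
# /etc/zfs/zdev.conf -- map short, human-readable aliases to physical ports so
# that udev creates stable /dev/disk/zpool/<alias> links for zpool create/import.
#
# alias    device link (by-path keeps the mapping tied to the physical slot)
a1         pci-0000:03:00.0-scsi-0:0:0:0
a2         pci-0000:03:00.0-scsi-0:1:0:0
b1         pci-0000:00:1f.2-ata-1
```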
Kind regards,
Phoenixxl.