
Since kernel 3.2.0-21 I have to import zpool on every reboot. #703

Closed
Phoenixxl opened this issue Apr 27, 2012 · 11 comments

@Phoenixxl

Using Precise Pangolin: with kernel -20 my pools show up when doing a zpool status and survive a reboot.

From -21 onward to -24, the zpools are gone after every reboot.

How dangerous is it to import them on every reboot? Could it do any harm? Currently it is my only option if I want to use the final release of Ubuntu 12.04.

There is already the issue, in -20 and earlier, that zvols don't survive a reboot, so I had to rename those on every reboot when using -20. Add to that the fact that I now need to import my pools on each reboot, and I'm really getting worried.

I am using this motherboard: http://www.asus.com/Motherboards/Intel_Socket_2011/P9X79_PRO/#specifications
The first zpool is on 4 of the SATA/300 connectors on the motherboard.

I am also using a RocketRAID 2720SGL with 8 drives connected: http://www.highpoint-tech.com/USA_new/CS-series_rr272x.htm , and the second pool has 2 L2ARC SSDs on the motherboard's SATA/600 connectors.

I am using a zdev.conf file to give the drives comprehensible, persistent names.
/dev/disk/zpool is populated after every reboot, though.

But both pools fail to show up after a reboot.

Kind regards,
Phoenixxl.

@ryao
Contributor

ryao commented Apr 27, 2012

This sounds like an issue for @dajhorn.

@dajhorn
Contributor

dajhorn commented Apr 27, 2012

Ubuntu 12.04 Precise Pangolin shipped with the Linux 3.2.0-23-generic kernel package; build 24 isn't published.

The zvol disappearance problem isn't particular to Ubuntu; it seems to be an upstream ZoL problem.

Attach a copy of the custom /etc/zfs/zdev.conf file.

The import problem might be resolved by recreating the /etc/zfs/zpool.cache file. Just do an export+import on any pool.
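For example, with a placeholder pool name of 'tank', this is enough to rewrite the cache file:

  # zpool export tank
  # zpool import tank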

Recent Ubuntu kernels behave poorly when RAID HBAs are slow to bring drives online, especially if they cause hotplug events. There is no fix for this except to put a long sleep command in the /etc/init/mountall.conf file.
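For example, something along these lines added to the stock job (the 30 seconds is arbitrary; merge it with any existing pre-start stanza):

  pre-start script
      sleep 30
  end script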

@Phoenixxl
Author

This is my zdev.conf: http://pastebin.com/nTmnbpat

I can assure you, I'm not using an upstream kernel. On 04/27/2012 I did an update/upgrade and ended up with -24.
root@Pollux:/etc/zfs# uname -r
3.2.0-24-generic

Apr 27 10:10:29 Pollux kernel: [ 0.000000] Linux version 3.2.0-24-generic (buildd@yellow) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 (Ubuntu 3.2.0-24.37-generic 3.2.14)

So you want me to:

  1. Export my 2 pools
  2. Reboot
  3. Delete zpool.cache
  4. Import the pools

Is that correct?

Since I am already importing on every reboot right now, the only difference would be me deleting the .cache file.

Where exactly would I place that pause in mountall?
Instead of a pause, couldn't I just do an ls of /dev/disk/zpool?
If the devices are there at that point, my pools should be there as well, no?
Maybe check for them at various points in the boot procedure?

Also, it's not that my mount points aren't being mounted automatically; my pools just aren't there at all. Would a pause in mountall be the right spot? Wouldn't it need to happen earlier than that?
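Something like this (untested, purely a sketch, and the 60 second cap is a guess) is what I have in mind instead of a blind sleep:

  pre-start script
      # wait until the zdev aliases show up, give up after 60 seconds
      i=0
      until [ -n "$(ls /dev/disk/zpool 2>/dev/null)" ] || [ $i -ge 60 ]; do
          sleep 1
          i=$((i + 1))
      done
  end script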

Booting with -24 doesn't take any less time than booting -20. On -20 the pools have always shown up, and on -21 to -24 they have never shown up.

The RocketRAID controller and the motherboard controller do their BIOS init before an Areca 1261ML that is also in the machine and holds the boot part of the system. The Areca takes 1 minute to do its spin-up, by which time any spin-up by the RocketRAID or the motherboard controller is done. (The Areca has 2 volumes, which are regular ext4.)

Also, as I mentioned in my initial post, I would really like to know how dangerous it is, if at all, to import my pools on every reboot.

Thanks in advance for any reply.

@realG

realG commented May 5, 2012

Hi all, I seem to be having the same issue.

After upgrading to 12.04, I have to import pools on every reboot.

root@server:/# zpool status
no pools available
root@server:/# zpool import -d /dev/disk/by-id pool
cannot import 'pool': pool may be in use from other system
use '-f' to import anyway
root@server:/# zpool import -f -d /dev/disk/by-id pool
root@server:/# zpool status
  pool: pool
 state: ONLINE
 scan: resilvered 722G in 2h55m with 0 errors on Sat Apr 21 16:07:16 2012
config:

        NAME                                    STATE     READ WRITE CKSUM
        pool                                    ONLINE       0     0     0
          mirror-0                              ONLINE       0     0     0
            ata-ST2000DL003-9VT166_5YD70R85     ONLINE       0     0     0
            ata-SAMSUNG_HD204UI_S2H7J9HBA03546  ONLINE       0     0     0

errors: No known data errors

I've cleared the zpool.cache, imported and exported the pool a bunch of times, and rebooted; still the same issue. I also tried using /dev/sd* instead of by-id, which did not make a difference.

I am running this on an HP MicroServer with just 2 disks in a mirror. The OS is on a USB stick, so I'd say the hard drives have plenty of time to spin up before the OS loads.

I'm on the ppa:zfs-native/stable
I've checked that mountall 2.36-zfs1 is installed (I've tried reinstalling as well)
No errors in the logs; the only thing I see related to ZFS is this line in syslog:
[ 2.280606] ZFS: Loaded module v0.6.0.56-rc8, ZFS pool version 28, ZFS filesystem version 5

root@server:/# uname -r
3.2.0-24-generic

The strange thing is that running the same setup in VirtualBox seems to be working without any issues...

@dajhorn
Contributor

dajhorn commented May 5, 2012

@bastetone

Please post the entire dmesg output, the entire unmodified /var/log/kern.log file, and mention any changes that you made to /etc for ZoL after installing from the PPA.

@realG

realG commented May 6, 2012

Sure thing,

dmesg: http://pastebin.com/MaU5JEz0
kern.log: http://pastebin.com/Zinzam5q

I have not made any changes to the ZoL files; /etc/default/zfs is also at its defaults (empty strings).

Thanks for looking into this!

@dajhorn
Contributor

dajhorn commented May 6, 2012

@bastetone

First, you need to resolve this error message:

 [    2.118447] SPL: The /etc/hostid file is not found.
 [    2.158552] SPL: Loaded module v0.6.0.56-rc8, using hostid 0xffffffff

Automatic pool import is disabled if the hostid is missing or incorrect.

If the /etc/hostid file already exists, then it means that ZoL is in the initrd. ZoL should not be in the initrd in this circumstance, which means that you need to find and remove the zfs line from /etc/modules, /etc/modprobe.d, or elsewhere.
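One quick way to hunt for a stray entry (the initramfs-tools path is simply another likely location on Ubuntu, not something taken from your logs):

  # grep -rnw zfs /etc/modules /etc/modprobe.d /etc/initramfs-tools/modules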

If the /etc/hostid file does not exist, then run this command to create it: # dd if=/dev/urandom of=/etc/hostid bs=4 count=1
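You can then compare the file against what gethostid() reports. (Note that on a little-endian machine the bytes in the file appear reversed relative to the hostid output.)

  # od -A x -t x1 /etc/hostid
  # hostid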

The PPA packages should create the /etc/hostid file at installation time, so a missing file could indicate a different bug.

Second, the /dev/sdd device comes online at time [ 2.997663], which is after the ZFS driver at time [ 2.191556]. This is okay if /dev/sdd is a USB stick for /boot, but it will cause sporadic failures if it contains anything that the system needs during system start.

// This is a nice little computer. It boots very fast.

@Phoenixxl
Author

@dajhorn

OK, reading up on bastetone's issue I checked dmesg. I also had SPL using hostid 0xffffffff.

Then I remembered that some time ago I edited my initramfs to include the zfs module, to try to fix something else. I ended up switching to the daily PPA, which fixed that problem.

I followed what was written here:
http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/5b25e2a172cd2616
http://pastebin.com/tdphMq5E

It was on the day I first built that computer, and bigger things were going wrong than that. I ended up forgetting to change it back to what it was.

So today I checked my initramfs setup and removed zfs from the modules list.
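In practice that was just the following (the modules file shown is where I had put the line; adjust the path if yours was elsewhere):

  # sed -i '/^zfs$/d' /etc/initramfs-tools/modules
  # update-initramfs -u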

The pools are there again after a reboot.
This bug can be classed as never having existed.

Sorry for the confusion this caused. Hopefully, if someone ends up in the same situation, this gives them a hint on how to fix it.

Kind regards
Phoenixxl.

@Phoenixxl
Author

Closed, since the initial issue is resolved. bastetone, if your problem isn't fixed by what dajhorn suggested, feel free to reopen it.
For me this is closed.

@dajhorn
Contributor

dajhorn commented May 6, 2012

@Phoenixxl Okay, good, and thanks for reporting the problem.

This ticket is a reason to implement the /etc/zfs/zpool.cache changes described in issues #330, #511, and #711.

@realG

realG commented May 7, 2012

I have to confess, I had previously added zfs to the initrd and had completely forgotten about it. I've removed the line and rebuilt the initramfs. I now have the pool online and the filesystems available on reboot.

Thanks for your help @dajhorn and @Phoenixxl !
