
zpool.cache not updated when adding a pool #8549

Open
burnsjake opened this issue Mar 29, 2019 · 9 comments

@burnsjake commented Mar 29, 2019

System information

Type                  Version/Name
Distribution Name     Ubuntu
Distribution Version  18.04
Linux Kernel          4.18.0-16-lowlatency #17~18.04.1-Ubuntu SMP PREEMPT Tue Feb 12 16:37:17 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Architecture          x86_64
ZFS Version           0.7.9-3ubuntu6
SPL Version           0.7.9-3ubuntu2

Adding a new pool and filesystem (sdb/newfs in the steps below) results in it not being imported or mounted at boot; mounting it via fstab instead prevents a clean boot (see attempt 1 under remediation).

STEPS TO RECREATE

  1. Install ZFS root per this guide: https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
  2. Bring up the new system.
  3. Add a second drive /dev/sdb and create a pool on it with "zpool create -f sdb sdb" (the full command sequence is sketched after this list).
  4. Create a filesystem on pool sdb with "zfs create sdb/newfs".
  5. Set a mountpoint with "zfs set mountpoint=/path/to/mountpoint/ sdb/newfs".
  6. Reboot.
  7. The system does not see the pool or mount the new sdb/newfs on boot.
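
For reference, steps 3 through 6 consolidate to roughly the following commands (a sketch using exactly the names from the steps above):

zpool create -f sdb sdb
zfs create sdb/newfs
zfs set mountpoint=/path/to/mountpoint/ sdb/newfs
reboot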

Attempts at remediation:

  1. Tried setting mountpoint=legacy and updating fstab; that fails, forcing an unclean boot into maintenance mode that requires running "zpool import -a" to continue.
  2. Tried updating /etc/zfs/zpool.cache by running "zpool set cachefile=/etc/zfs/zpool.cache sdb"
  3. Tried adding /etc/modprobe.d/zfs.conf containing "zfs_autoimport_disable=0"
  4. Tried adding a second systemd service per section 4.10 of https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
  5. Tried running update-initramfs -u -k all after the pool had been imported, to no avail (attempts 2 and 5 are sketched as commands after this list).
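
For concreteness, attempts 2 and 5 amount to the commands below; the final lsinitramfs check is not from the report, but it is one way to confirm whether the refreshed cache file actually made it into the initramfs:

zpool set cachefile=/etc/zfs/zpool.cache sdb
update-initramfs -u -k all
lsinitramfs /boot/initrd.img-$(uname -r) | grep zpool.cache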

Nothing shows up in syslog indicating an error.

@Osvit commented Apr 18, 2019

Followed the same guide https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS and see the same problem on two systems running Ubuntu 18.04. Spent hours trying to figure out the problem, with no success. Pools are not imported after a reboot.

@jwittlincohen (Contributor) commented Apr 18, 2019

/dev/sdX naming is not persistent across reboots and is not recommended for production pools. It may be causing conflicts preventing pool import. You can manually import using unique by-id names.

zpool import poolname -d /dev/disk/by-id/
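
If the pool was created against /dev/sdX nodes, one way to switch it over permanently is to export it and re-import it with the by-id search path, then refresh the cache file; a sketch, using the pool name from the report:

zpool export sdb
zpool import -d /dev/disk/by-id sdb
zpool set cachefile=/etc/zfs/zpool.cache sdb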

@Osvit commented Apr 18, 2019

/dev/sdX naming is not persistent across reboots and is not recommended for production pools. It may be causing conflicts preventing pool import. You can manually import using unique by-id names.

zpool import poolname -d /dev/disk/by-id/

Tried that several times; same behavior. Also tried removing the cache file; it was recreated and populated by the import command, but the pool is still not imported after reboot.

@ttych commented Jul 16, 2019

Also,
I followed the same guide https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS on Ubuntu 19.04 (even if not recommended ...). The base system is OK; bpool / rpool are correctly imported.
Any pool created afterwards is importable, but not automatically imported at boot.

I also tried changing settings in /etc/default/zfs to bypass /etc/zfs/zpool.cache, but it had no effect.
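
For reference, the kind of settings involved look like the excerpt below (the variable names should be checked against the comments in your own /etc/default/zfs; these knobs are read by the init scripts and initramfs hooks, which may be why they had no effect on the systemd import units):

# /etc/default/zfs (excerpt; an assumption of the stock template, not verified)
ZPOOL_IMPORT_PATH="/dev/disk/by-id"
ZPOOL_IMPORT_ALL_VISIBLE='yes'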

@yacoob commented Aug 3, 2019

FWIW, I'm seeing similar behavior on a Debian 10 system. I was moving from one pool to another (some history in #9107), and after getting everything in order I've realised the old pool isn't being imported.

I'm still using a cache file, and zfs-import-cache.service says:

Aug 03 23:19:48 boxoob systemd[1]: Starting Import ZFS pools by cache file...
Aug 03 23:19:48 boxoob zpool[4331]: no pools available to import
Aug 03 23:19:48 boxoob systemd[1]: Started Import ZFS pools by cache file.

upon starting. My /etc/zfs/zpool.cache has the boot pool (which is imported explicitly earlier), the root pool (which is imported properly by the initramfs), and the old pool. Or at least, I think that last pool is present in the cache file, judging by the presence of the disks hosting it in the output of strings /etc/zfs/zpool.cache. As far as I can tell, the corresponding file in the initramfs is the same.
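
Rather than grepping strings output, the cache file can be dumped directly; with no pool argument, zdb should print the configuration of every pool recorded in the given cache file (-U points it at a specific copy, e.g. one extracted from the initrd):

zdb -U /etc/zfs/zpool.cache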

I'm going to get rid of the cache file in favour of explicit import via names in /etc/default/zfs; we'll see how that goes.
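
If it helps anyone, the explicit-import route looks roughly like the excerpt below; the variable name is my recollection of the stock template and "oldpool" is a placeholder, so check the comments in the file itself:

# /etc/default/zfs (excerpt; variable name assumed, "oldpool" is a placeholder)
ZFS_POOL_IMPORT="oldpool"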

@LawfulHacker commented Aug 4, 2019

Also,
I followed the same guide https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS on Ubuntu 19.04 (even if not recommended ...). The base system is OK; bpool / rpool are correctly imported.
Any pool created afterwards is importable, but not automatically imported at boot.

I also tried changing settings in /etc/default/zfs to bypass /etc/zfs/zpool.cache, but it had no effect.

Same for me. I resolved it by creating a service to import the pool manually:

[Unit]
Description=Import data pool
Before=zfs-import-scan.service
Before=zfs-import-cache.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -d /dev/disk/by-id data

[Install]
WantedBy=zfs-import.target
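
With the unit saved somewhere like /etc/systemd/system/zfs-import-data.service (the file name here is arbitrary), it still needs to be enabled:

systemctl daemon-reload
systemctl enable zfs-import-data.service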

@albertmichaelj commented Aug 13, 2019

I'm running into this same issue as well (using the same ZFS-on-root guide). I also solved it by creating a new systemd service to import the pool, but I'm not sure why that's necessary. I think the reason this is happening is that my /etc/zfs/zpool.cache file is recreated on boot for some reason. If I run strings /etc/zfs/zpool.cache immediately after boot, I get (hostname redacted):

rpool
version
name
rpool
state
pool_guid
errata
hostname
*******************
com.delphix:has_per_vdev_zaps
vdev_children
vdev_tree
type
root
guid
children
type
disk
guid
path
/dev/disk/by-id/nvme-PC401_NVMe_SK_hynix_1TB_EJ86N550010106HEB-part4
whole_disk
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_leaf
com.delphix:vdev_zap_top
features_for_read
com.delphix:hole_birth
com.delphix:embedded_data

After I import my other pool (named array), it becomes:

array
version
name
array
state
pool_guid
errata
hostname
*******************
com.delphix:has_per_vdev_zaps
vdev_children
vdev_tree
type
root
guid
children
type
mirror
guid
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_top
children
type
disk
guid
path
/dev/disk/by-id/wwn-0x5000cca252ccbc54-part1
whole_disk
create_txg
com.delphix:vdev_zap_leaf
type
disk
guid
path
/dev/disk/by-id/wwn-0x5000cca252ccbe62-part1
whole_disk
create_txg
com.delphix:vdev_zap_leaf
features_for_read
com.delphix:hole_birth
com.delphix:embedded_data
rpool
version
name
rpool
state
pool_guid
errata
hostname
*******************
com.delphix:has_per_vdev_zaps
vdev_children
vdev_tree
type
root
guid
children
type
disk
guid
path
/dev/disk/by-id/nvme-PC401_NVMe_SK_hynix_1TB_EJ86N550010106HEB-part4
whole_disk
metaslab_array
metaslab_shift
ashift
asize
is_log
create_txg
com.delphix:vdev_zap_leaf
com.delphix:vdev_zap_top
features_for_read
com.delphix:hole_birth
com.delphix:embedded_data

However, after I restart but before I manually import my array pool, the cache file is back to what it was.

I can't figure out why, but I wonder whether importing the bpool is causing the cache file to be cleared. The service that does this, from the wiki, is:

    # vi /etc/systemd/system/zfs-import-bpool.service
    [Unit]
    DefaultDependencies=no
    Before=zfs-import-scan.service
    Before=zfs-import-cache.service
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zpool import -N -o cachefile=none bpool
    
    [Install]
    WantedBy=zfs-import.target

    # systemctl enable zfs-import-bpool.service

I have another install that followed this guide before the bpool was included, and I have had no such problems.

I don't know how to edit the zfs-import-bpool.service file to debug without breaking my system, so I can't confirm.
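
A low-risk way to experiment would be a systemd drop-in rather than editing the installed unit; the override lives under /etc/systemd/system and can simply be deleted to revert. A sketch (the changed ExecStart is only an example of something one might test, e.g. importing bpool without -o cachefile=none):

# /etc/systemd/system/zfs-import-bpool.service.d/override.conf
[Service]
ExecStart=
ExecStart=/sbin/zpool import -N bpool

# then reload units so the drop-in takes effect
systemctl daemon-reload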

@jadams commented Aug 30, 2019

Same issue as described: it seems that when the rpool is imported, it overwrites the zpool.cache file and removes any other pools from the cache.
My workaround was to comment out ConditionPathExists=!/etc/zfs/zpool.cache in /lib/systemd/system/zfs-import-scan.service and enable that service:

$ sudo vim /lib/systemd/system/zfs-import-scan.service
[Unit]
Description=Import ZFS pools by device scanning
Documentation=man:zpool(8)
DefaultDependencies=no
Requires=systemd-udev-settle.service
Requires=zfs-load-module.service
After=systemd-udev-settle.service
After=zfs-load-module.service
After=cryptsetup.target
Before=dracut-mount.service
Before=zfs-import.target
#ConditionPathExists=!/etc/zfs/zpool.cache

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -aN -o cachefile=none

[Install]
WantedBy=zfs-import.target

$ sudo systemctl enable zfs-import-scan
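
Note that edits under /lib/systemd/system can be overwritten by a package upgrade; copying the unit to /etc/systemd/system, or editing a copy with the command below, keeps the change persistent:

$ sudo systemctl edit --full zfs-import-scan.service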

@RaveNoX commented Nov 4, 2019

I have the same problem on 18.04.
