kernel panic while browsing snapshots #3257

Closed
TioNisla opened this issue Apr 6, 2015 · 44 comments

@TioNisla

TioNisla commented Apr 6, 2015

http://i.imgur.com/hEaJDSy.png

@amitie10g

What distro are you using?
Did you install all the zfs- and spl-related packages?
Are you using dkms?
Did the kernel drop you to the busybox shell before the panic?

Try booting with the backup initrd (the one ending in .old-dkms, if present). Make a snapshot and a backup of the grub directory first, then try to update all the zfs and spl packages (but don't touch grub without a backup!). Be sure the initramfs gets created.
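A rough sketch of those recovery steps, assuming a Debian-style system with dkms (the package names and paths are assumptions, not taken from this thread):

cp -a /boot/grub /root/grub.backup             # back up grub before touching anything
apt-get install --reinstall spl-dkms zfs-dkms  # rebuild the spl/zfs kernel modules
update-initramfs -u -k all                     # make sure the initramfs is recreated
ls /boot/initrd.img-*                          # any *.old-dkms file here is the backup initrd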

@TioNisla
Author

TioNisla commented Apr 6, 2015

uname -a
Linux ftp 3.19.2 #2 SMP Mon Mar 23 13:35:36 YEKT 2015 x86_64 Intel(R) Xeon(R) CPU X5660 @ 2.80GHz GenuineIntel GNU/Linux

Distro: custom-built CRUX 3.1 (https://crux.nu)
No dkms; vanilla kernel.
spl-79a0056e13
zfs-bc88866657

P.S.

zpool create -f -m none -o ashift=12 storage mirror /dev/disk/by-id/wwn-0x60060e80102d34f00511c6d700000018 /dev/disk/by-id/wwn-0x60060e80102d35100511c6d90000000b /dev/disk/by-id/wwn-0x60060e80102d37f00511c70700000018
zpool set autoreplace=on storage
zpool set autoexpand=on storage
zfs set atime=off storage
zfs set xattr=sa storage
zfs set aclinherit=passthrough storage
zfs set acltype=posixacl storage
zfs set compression=lz4 storage
zfs create -o mountpoint=/home storage/home
zfs set aclinherit=passthrough storage/home
zfs set recordsize=64k storage/home
zfs create -o mountpoint=/home/ftp storage/ftp

$ zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage  5,97T  2,23T  3,74T         -    28%    37%  1.00x  ONLINE  -

$ zpool status
  pool: storage
 state: ONLINE
  scan: scrub canceled on Sat Mar 28 11:15:39 2015
config:

    NAME                                        STATE     READ WRITE CKSUM
    storage                                     ONLINE       0     0     0
      mirror-0                                  ONLINE       0     0     0
        wwn-0x60060e80102d34f00511c6d700000018  ONLINE       0     0     0
        wwn-0x60060e80102d35100511c6d90000000b  ONLINE       0     0     0
        wwn-0x60060e80102d37f00511c70700000018  ONLINE       0     0     0

errors: No known data errors

@amitie10g

At what exact moment did you experience this panic?
Did you try booting directly into the initramfs (removing all the rootfs parameters from the kernel command line)?
Did you boot any previous kernel (3.18.x or earlier) without hitting this issue?

Please be more specific about this issue; it may not necessarily be related to zfs/spl. Try kernel 3.18.0 and be sure all the modules are up to date.

See this thread, it may be helpful.

@cooper75

cooper75 commented Apr 6, 2015

Here to add nothing more than a "me too", but I am seeing the same thing:

Fedora 21
3.19.3-200.fc21.x86_64
3.5GB RAM
(2) 1TB HDDs in a stripe (raid-0)
zfs/spl pkgs from zfsonlinux.com w/ dkms
zfs-0.6.3-1.3.fc21.x86_64
zfs-release-1-4.fc21.noarch
zfs-dkms-0.6.3-1.3.fc21.noarch
libzfs2-0.6.3-1.3.fc21.x86_64
spl-dkms-0.6.3-1.3.fc21.noarch
spl-0.6.3-1.3.fc21.x86_64

I run no snapshots currently. If I take one snapshot on any volume, the kernel panics with this error at the 5-minute mark, when it tries to expire the snapshot.

I also get 'too many levels of symbolic links' when listing the .zfs/snapshot directory. cd'ing into the directory relieves this, but pwd shows: (unknown)/SNAPNAME

zpool status and scrubs report no errors. It runs fine with no snapshots; take one snapshot and it panics within 5 minutes (a minimal reproducer is sketched below).
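A minimal sketch of that reproducer (the pool/dataset names are assumptions):

zfs snapshot tank/data@test
ls -l /tank/data/.zfs/snapshot            # the snapshot directory itself lists fine
ls -l /tank/data/.zfs/snapshot/test       # "Too many levels of symbolic links"
cd /tank/data/.zfs/snapshot/test && pwd   # pwd reports (unknown)/test
# ...then wait ~5 minutes for the auto-unmount to expire the snapshot -> panic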

@TioNisla
Author

TioNisla commented Apr 7, 2015

Hmm... while viewing snapshots via vfs_shadow_copy (Samba).

To Amitie10g:
There is no initrd/initramfs. This is a custom build, like LFS; ZoL and everything else built by hand:

cd spl-master
./autogen.sh
./configure --prefix=/usr --sbindir=/sbin --sysconfdir=/etc --localstatedir=/var --libdir=/lib --mandir=/usr/man
make && make install

cd ../zfs-master
./autogen.sh
./configure --prefix=/usr --sbindir=/sbin --sysconfdir=/etc --localstatedir=/var --libdir=/lib --mandir=/usr/man --with-udevdir=/lib/udev
make && make install

@cooper75

cooper75 commented Apr 7, 2015

Per Amitie10g's suggestion above of booting an older kernel, I tried that.

Fedora21
3.18.7-200.fc21.x86_64

After taking a snapshot I get:

ZFS: Unable to automount tank/data@2015_04_07_Tues at /tank/data/.zfs/snapshot/2015_04_07_Tues: 512

If I cd into the snapshot directory, the 2015_04_07_Tues directory is there, but it has no contents.

I do not get the kernel panic, but I don't get a snapshot either; zfs never attempts to expire the snapshot.

This looks like #3030 and #2841, which seem to be related to the 3.18 kernel.

So I am really out of luck right now: kernel 3.18 doesn't automount snapshots, and kernel 3.19 panics after taking one.

@TioNisla
Author

TioNisla commented Apr 8, 2015

root@ftp:/home/ftp/.zfs/snapshot# ls -laR
.:
total 0
dr-xr-xr-x 7 root root 7 апр  8 00:00 .
dr-xr-xr-x 1 root root 0 апр  7 13:18 ..
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.25-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.26-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.27-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.28-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.29-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.30-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.03.31-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.01-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.02-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.03-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.04-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.05-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.06-19.00.01
dr-xr-xr-x 1 root root 0 апр  8 10:15 GMT-2015.04.07-19.00.01

./GMT-2015.03.25-19.00.01:
ls: cannot access ./GMT-2015.03.25-19.00.01/.: Too many levels of symbolic links
ls: cannot access ./GMT-2015.03.25-19.00.01/..: Too many levels of symbolic links
ls: cannot access ./GMT-2015.03.25-19.00.01/pub: Too many levels of symbolic links
ls: cannot access ./GMT-2015.03.25-19.00.01/incoming: Too many levels of symbolic links
total 0
d????????? ? ? ? ?            ? .
d????????? ? ? ? ?            ? ..
d????????? ? ? ? ?            ? incoming
d????????? ? ? ? ?            ? pub

./GMT-2015.03.25-19.00.01/incoming:
ls: cannot access ./GMT-2015.03.25-19.00.01/incoming/.: Too many levels of symbolic links
ls: cannot access ./GMT-2015.03.25-19.00.01/incoming/..: Too many levels of symbolic links

TioNisla changed the title from "kernel panic under load" to "kernel panic while brousing snapshots" Apr 8, 2015
TioNisla changed the title from "kernel panic while brousing snapshots" to "kernel panic while browsing snapshots" Apr 8, 2015
@TioNisla
Author

TioNisla commented Apr 9, 2015

Downgraded:
root@ftp:/home/ftp/.zfs/snapshot# uname -a
Linux ftp 3.13.9 #2 SMP Thu Apr 9 09:57:02 YEKT 2015 x86_64 Intel(R) Xeon(R) CPU X5660 @ 2.80GHz GenuineIntel GNU/Linux

All OK

behlendorf added this to the 0.6.5 milestone Apr 10, 2015
@cooper75

Upgraded to the 0.6.4-1 pkgs from zol.org and upgraded the pool, but got the same results.

Fedora 21
3.19.3-200.fc21.x86_64
zfs-dkms-0.6.4-1.fc21.noarch
zfs-release-1-4.fc21.noarch
spl-0.6.4-1.fc21.x86_64
libzfs2-0.6.4-1.fc21.x86_64
zfs-0.6.4-1.fc21.x86_64
spl-dkms-0.6.4-1.fc21.noarch

results here:
gist link

Five minutes later: kernel panic when zfs attempts to auto-unmount the snapshot.

@Bronek

Bronek commented Apr 12, 2015

I have the same: a kernel panic in a guest Arch Linux running under KVM.
kernel 3.19.3
zfs 0.6.3-1.3

@drescherjm

I had that a couple of weeks ago when browsing snapshots via Samba.

datastore4 ~ # cat /sys/module/{spl,zfs}/version
0.6.3-76_g6ab0866
0.6.3-240_g40749aa
datastore4 ~ # uname -a
Linux datastore4 3.19.2-gentoo-datastore4-zfs-20150319 #1 SMP Thu Mar 19 17:12:53 EDT 2015 x86_64 Intel(R) Xeon(R) CPU E31230 @ 3.20GHz GenuineIntel GNU/Linux

Here is the kernel panic:
[18333.744680] Kernel panic - not syncing: avl_find() succeeded inside avl_add()
[18333.744715] CPU: 1 PID: 3226 Comm: z_unmount/0 Tainted: P W O 3.19.2-gentoo-datastore4-zfs-20150319 #1
[18333.744740] Hardware name: To be filled by O.E.M. To be filled by O.E.M./P8B-X series, BIOS 2107 05/04/2012
[18333.744765] ffff8801f995f9c0 ffff8802232c3cb8 ffffffff8166a131 0000000000000001
[18333.744801] ffffffffa02ceba8 ffff8802232c3d38 ffffffff81666104 0000000000000002
[18333.744950] 0000000000000008 ffff8802232c3d48 ffff8802232c3ce8 ffff8802232c3d38
[18333.745082] Call Trace:
[18333.745150] [] dump_stack+0x45/0x57
[18333.745228] [] panic+0xb6/0x1da
[18333.745300] [] avl_add+0x45/0x50 [zavl]
[18333.745371] [] ? mutex_lock+0x11/0x32
[18333.745465] [] zfsctl_unmount_snapshot+0x184/0x1d0 [zfs]
[18333.745558] [] zfsctl_snapdir_remove+0x1c8/0x220 [zfs]
[18333.745635] [] taskq_cancel_id+0x2c8/0x470 [spl]
[18333.746729] [] ? wake_up_process+0x40/0x40
[18333.746799] [] ? taskq_cancel_id+0x120/0x470 [spl]
[18333.746868] [] kthread+0xc4/0xe0
[18333.746935] [] ? kthread_create_on_node+0x170/0x170
[18333.747004] [] ret_from_fork+0x58/0x90
[18333.747091] [] ? kthread_create_on_node+0x170/0x170
[18333.747194] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)
[18333.747595] ---[ end Kernel panic - not syncing: avl_find() succeeded inside avl_add()

@Bronek

Bronek commented Apr 13, 2015

In case this is useful: prior to the kernel panic I am able to use the system for a little while and even start browsing snapshots; however, something is obviously amiss. See the example below:

root@tmp5 /data/.zfs/snapshot # ls
1/  2/  3/

root@tmp5 /data/.zfs/snapshot # zfs list -t all
NAME       USED  AVAIL  REFER  MOUNTPOINT
ztest2    14.9M  28.6M  12.6M  /data
ztest2@1    88K      -   136K  -
ztest2@2   816K      -   872K  -
ztest2@3    96K      -  12.6M  -

root@tmp5 /data/.zfs/snapshot # cd 3
cd:6: unable to chdir(/data/.zfs/snapshot/3): too many levels of symbolic links

root@tmp5 /data/.zfs/snapshot/3 # ls -al
total 26
drwxr-xr-x  3 root root   3 Apr 12 18:01 ./
drwxr-xr-x  3 root root   3 Apr 12 18:01 ../
drwxr-xr-x 43 root root 110 Apr 12 18:01 etc/

root@tmp5 /data/.zfs/snapshot/3 # cd ..

root@tmp5 / #

Note: an error is reported when changing directory to snapshot/3, yet the cd actually does succeed (despite the error!); also note that we return to / rather than /data/.zfs/snapshot/ after cd ... The kernel panic follows shortly after the cd ..

@Bronek

Bronek commented Apr 13, 2015

FWIW, I captured the following on the serial console:

[  602.117686] Kernel panic - not syncing: avl_find() succeeded inside avl_add()
[  602.118927] CPU: 0 PID: 201 Comm: z_unmount/0 Tainted: P           O   3.19.3-3-ARCH #1
[  602.120174] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140617_173313-var-lib-archbuild-testing-x86_64-tobias 04/04
[  602.120998]  0000000000000000 00000000a9475ef9 ffff8800bb373c98 ffffffff8155d19f
[  602.120998]  0000000000000000 ffffffffa0079180 ffff8800bb373d18 ffffffff8155c22b
[  602.120998]  0000000000000008 ffff8800bb373d28 ffff8800bb373cc8 00000000a9475ef9
[  602.120998] Call Trace:
[  602.120998]  [<ffffffff8155d19f>] dump_stack+0x4c/0x6e
[  602.120998]  [<ffffffff8155c22b>] panic+0xd0/0x204
[  602.120998]  [<ffffffffa0078778>] avl_add+0x68/0x70 [zavl]
[  602.120998]  [<ffffffffa0367f6b>] zfsctl_unmount_snapshot+0x10b/0x120 [zfs]
[  602.120998]  [<ffffffff8155ee9e>] ? preempt_schedule+0x3e/0x60
[  602.120998]  [<ffffffffa036814d>] zfsctl_expire_snapshot+0x2d/0x80 [zfs]
[  602.120998]  [<ffffffffa014c557>] taskq_thread+0x247/0x4f0 [spl]
[  602.120998]  [<ffffffff8109f640>] ? wake_up_process+0x50/0x50
[  602.120998]  [<ffffffffa014c310>] ? taskq_cancel_id+0x200/0x200 [spl]
[  602.120998]  [<ffffffff81091828>] kthread+0xd8/0xf0
[  602.120998]  [<ffffffff81091750>] ? kthread_create_on_node+0x1c0/0x1c0
[  602.120998]  [<ffffffff81562918>] ret_from_fork+0x58/0x90
[  602.120998]  [<ffffffff81091750>] ? kthread_create_on_node+0x1c0/0x1c0
[  602.120998] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)
[  602.120998] drm_kms_helper: panic occurred, switching back to text console

Please note, the crash happens only some time after I've used ZFS snapshots.

@Bronek

Bronek commented Apr 13, 2015

This does not happen on kernel version 3.18.11. Instead, on that kernel I am unable to mount the snapshots (but there is no crash); this is the same issue as #3030. I will try bisecting the kernel to find which commit changed the kernel's behaviour from

ZFS: Unable to automount ztest2@2 at /data/.zfs/snapshot/2: 512

to

Kernel panic - not syncing: avl_find() succeeded inside avl_add()

@cooper75, this is the same as what you are reporting above.

@cooper75

@Bronek, yes, the exact same issue/problem.

behlendorf added a commit to behlendorf/zfs that referenced this issue Apr 13, 2015
It is possible for an automounted snapshot to expire, trigger the
auto-unmount, successfully unmount, but still return EBUSY.  This can
occur if the unmount command returns an unexpected error code.  If,
concurrent with this, another process triggers the snapshot auto-mount,
the snapshot will be remounted.  This is correct and desirable
behavior, but it breaks the assumption that it's always safe to
re-add the snapshot to the AVL tree.  Therefore, we must check the
AVL snapshot tree before assuming it's safe to add the snapshot.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#3257
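A shell-level sketch of the race this commit describes: keep the snapshot path busy so an auto-mount can fire concurrently with the expiry-driven auto-unmount (the pool/dataset names are assumptions):

zfs snapshot tank/fs@race
stat /tank/fs/.zfs/snapshot/race >/dev/null   # triggers the auto-mount
# keep touching the snapshot so a re-mount can race the auto-unmount
# that fires when the expiry timer (default 300 s) elapses
while sleep 1; do
    ls /tank/fs/.zfs/snapshot/race >/dev/null 2>&1
done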
@behlendorf
Contributor

Could someone who's able to trigger this issue please verify that #3287 resolves it? I haven't been able to reproduce the issue reliably.

@cooper75

@behlendorf I'd love to help, but I am out of my depth with git. How do I go about pulling this commit into a local tree?
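For anyone else wondering: one common way to test a pending pull request is to fetch GitHub's pull/<N>/head ref and build from it (the branch name pr-3287 is arbitrary; build steps as used elsewhere in this thread):

git clone https://github.com/zfsonlinux/zfs.git
cd zfs
git fetch origin pull/3287/head:pr-3287   # GitHub publishes every PR under refs/pull/
git checkout pr-3287
./autogen.sh && ./configure && make       # then install and reload the modules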

@Bronek

Bronek commented Apr 14, 2015

@behlendorf I applied the patch on top of 0.6.3-1.3 and ran it with kernel 3.19.3. It still does not quite work, i.e. commands which trigger the automount report errors, and then some commands will work while others will not; however, the crash seems to be fixed.

I spoke too soon; some time after browsing a few snapshots I got this:

[  964.784163] general protection fault: 0000 [#1] PREEMPT SMP
[  964.785307] Modules linked in: mousedev cfg80211 rfkill ext4 crc16 mbcache jbd2 iosf_mbi crct10dif_pclmul qxl crc32_pclmul ghash_clmulni_intel ttm drm_kms_o
[  964.796167] CPU: 7 PID: 202 Comm: z_unmount/0 Tainted: P           O   3.19.3-3-ARCH #1
[  964.796167] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140617_173313-var-lib-archbuild-testing-x86_64-tobias 04/01/2014
[  964.796167] task: ffff8800ba763ba0 ti: ffff8800bae1c000 task.ti: ffff8800bae1c000
[  964.800611] RIP: 0010:[<ffffffffa037f14d>]  [<ffffffffa037f14d>] zfsctl_expire_snapshot+0x1d/0x80 [zfs]
[  964.801086] RSP: 0018:ffff8800bae1fdd8  EFLAGS: 00010296
[  964.801086] RAX: 2e48000d55550b06 RBX: ffff8800b65b5e00 RCX: 0000000000000000
[  964.804130] RDX: 0000000000000004 RSI: 0007530000000003 RDI: ffff8800b65b5e00
[  964.804336] RBP: ffff8800bae1fde8 R08: 0000000000000000 R09: ffffea0002e5a600
[  964.806419] R10: ffffffffa028e6fb R11: 0000000000000498 R12: ffff8800ba5dbae0
[  964.806419] R13: ffff8800ba763ba0 R14: ffff8800b6881eb0 R15: ffff8800ba5dbaf0
[  964.808773] FS:  0000000000000000(0000) GS:ffff88013fdc0000(0000) knlGS:0000000000000000
[  964.809937] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  964.809937] CR2: 00007f307d026148 CR3: 0000000001811000 CR4: 00000000001406e0
[  964.812126] Stack:
[  964.812126]  ffff8800b969adf0 ffff88013999c700 ffff8800bae1feb8 ffffffffa0295557
[  964.812126]  ffff8800ba2c3ac0 ffff8800ba2c3aa8 0000000000000000 0000000000000003
[  964.812126]  ffff88013999c7c0 ffff8800ba763ba0 ffff8800b6881e40 ffff88013999c778
[  964.816615] Call Trace:
[  964.816615]  [<ffffffffa0295557>] taskq_thread+0x247/0x4f0 [spl]
[  964.816615]  [<ffffffff8109f640>] ? wake_up_process+0x50/0x50
[  964.819991]  [<ffffffffa0295310>] ? taskq_cancel_id+0x200/0x200 [spl]
[  964.820136]  [<ffffffff81091828>] kthread+0xd8/0xf0
[  964.820136]  [<ffffffff81091750>] ? kthread_create_on_node+0x1c0/0x1c0
[  964.823682]  [<ffffffff81562918>] ret_from_fork+0x58/0x90
[  964.823682]  [<ffffffff81091750>] ? kthread_create_on_node+0x1c0/0x1c0
[  964.823682] Code: 5f 0f 85 46 ff ff ff e9 2d ff ff ff 66 90 66 66 66 66 90 55 ba 04 00 00 00 48 89 e5 53 48 89 fb 48 83 ec 08 48 8b 47 10 48 8b 33 <48> 8b
[  964.829562] RIP  [<ffffffffa037f14d>] zfsctl_expire_snapshot+0x1d/0x80 [zfs]
[  964.829562]  RSP <ffff8800bae1fdd8>
[  964.844048] ---[ end trace 95e5bbd9eb8056bd ]---

However, the machine is still responsive; it's not a kernel panic this time. Also, when trying to unload the zfs module (after I successfully executed zfs umount -a), modprobe -r zfs just hangs.

I just got the same crash (but not a kernel panic) with the patched version 0.6.4 running under 3.19.3.

When testing the patched versions 0.6.3-1.3 and 0.6.4 against 3.18.11, I got the same results as already documented in #3030.

@cooper75

@behlendorf I downloaded your tree, and my results were similar to @Bronek's. It no longer panics after the 5-minute auto-unmount expires, but it eventually locks up the machine; I was unable to do anything but hard-reset.

Apr 14 07:04:36 localhost kernel: general protection fault: 0000 [#1] SMP 
Apr 14 07:04:36 localhost kernel: Modules linked in: arc4 md4 nls_utf8 cifs dns_resolver fscache hwmon_vid dm_crypt pl2303 powernow_k8 kvm_amd kvm snd_hda_codec_realtek snd_hda_codec_generic snd_hda_codec_hdmi snd_hda_intel ppdev snd_hda_controller snd_hda_codec usblp serio_raw k8temp edac_core snd_hwdep edac_mce_amd snd_seq snd_seq_device snd_pcm snd_timer sp5100_tco i2c_piix4 snd soundcore shpchp wmi parport_pc parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc ata_generic pata_acpi radeon firewire_ohci i2c_algo_bit drm_kms_helper firewire_core crc_itu_t ttm pata_atiixp drm r8169 mii zfs(POE) zunicode(POE) zavl(POE) zcommon(POE) znvpair(POE) spl(OE)

$ uname -r
3.19.3-200.fc21.x86_64

$ modinfo zfs|head
filename: /lib/modules/3.19.3-200.fc21.x86_64/extra/zfs.ko
version: 0.6.4-1
license: CDDL
author: OpenZFS on Linux
description: ZFS
srcversion: A314EDFF478CA6243E16ED5
depends: spl,znvpair,zcommon,zunicode,zavl
vermagic: 3.19.3-200.fc21.x86_64 SMP mod_unload

@behlendorf
Contributor

Thanks for the quick test, guys. It looks like I'll need to reproduce this locally.

@Bronek

Bronek commented Apr 15, 2015

@behlendorf FWIW, I'm using an almost-vanilla kernel from Arch Linux (the only patch is a change of log severity), running under kvm/qemu.

@TioNisla
Author

I wrote a simple wrapper for /bin/mount:

#!/bin/sh
# Log each mount invocation and its exit code, then call the real binary.

me=`/bin/basename $0`
echo -n "`date '+%F.%H-%M-%S'` | $0 \"$*\" | " >> /root/$me.log
"$0.orig" "$@"   # "$@" preserves arguments; the real mount is renamed to mount.orig
err=$?
echo $err >> /root/$me.log
exit $err
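(The install step wasn't shown; presumably the real binary was renamed and the wrapper dropped in its place, roughly as follows. The file name wrapper.sh is hypothetical.)

mv /bin/mount /bin/mount.orig         # keep the real mount as mount.orig
install -m 755 wrapper.sh /bin/mount  # wrapper.sh is the script above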

With kernel 3.17.8, "mount" is called once; all OK.
With kernel 3.19.5, "mount" is called multiple times:

2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 0
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-15 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-16 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-16 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-16 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-16 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128
2015-04-23.17-39-16 | /bin/mount "-t zfs -n storage/home@GMT-2015.04.23-08.41.52 /home/.zfs/snapshot/GMT-2015.04.23-08.41.52" | 128

Now, on 3.19.x:

root@amnesiac:~# cd /home/.zfs/snapshot/GMT-2015.04.23-08.41.52
root@amnesiac:/home/.zfs/snapshot/GMT-2015.04.23-08.41.52# ls -la
total 9
drwxr-xr-x 3 root root  3 апр 23 13:41 .
dr-xr-xr-x 3 root root  3 апр 23 17:27 ..
drwxr-x--- 9 root root 15 апр 23 13:40 root

All OK if we are INSIDE (after cd).
If we ls -l from OUTSIDE, we get garbage and die within 5 minutes:

root@amnesiac:~# ls -l /home/.zfs/snapshot/GMT-2015.04.23-08.41.52
ls: cannot access /home/.zfs/snapshot/GMT-2015.04.23-08.41.52/root: Too many levels of symbolic links
total 0
d????????? ? ? ? ?            ? root

The mount just disappears when exiting the mounted snapshot. And again, within 5 minutes we get a kernel panic:

root@amnesiac:/home/.zfs/snapshot/GMT-2015.04.23-08.41.52# mount | grep snap
storage/home@GMT-2015.04.23-08.41.52 on /home/.zfs/snapshot/GMT-2015.04.23-08.41.52 type zfs (ro,relatime,xattr,posixacl)
root@amnesiac:/home/.zfs/snapshot/GMT-2015.04.23-08.41.52# cd && mount | grep snap
root@amnesiac:~# 

Digging through the kernel sources, I got stuck somewhere in fs/namei.c :(

@yvess

yvess commented May 6, 2015

I also had a similar kernel panic with Ubuntu vivid, Linux 3.19.0-15-generic, and zfs 0.6.4.1-1:

Kernel panic - not syncing: avl_find() succeeded inside avl_add()

The machine always crashed after around 36 hours, without doing anything. The strange thing was that I have two new, identically set-up machines; one crashed, the other did not.

After digging around I found the difference:

zfs get snapdir -r backup
NAME            PROPERTY  VALUE    SOURCE
backup          snapdir   hidden   default
backup/test     snapdir   visible  local

One zfs filesystem had the snapdir property set to visible, a leftover from some testing. The other machine didn't have that property set on any zfs filesystem.

Still not sure whether removing the property will fix the problem, but I hope so. Time will tell.
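Finding and clearing the leftover property should be something like this (dataset names taken from the listing above):

zfs get -r -o name,value snapdir backup | grep visible   # find the offenders
zfs inherit snapdir backup/test                          # revert to the inherited default (hidden)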

@TioNisla
Author

"36 hours" cron job? Some updatedb(8) ? Traversing fs tree, visible is traversed.

@pvaneynd

I had the same crash as [https://github.com//issues/3257#issuecomment-92294234] on Debian unstable running 4.0.0-1, and snapdir was also set to 'visible' for me.

Now trying whether changing this to hidden helps.

@yvess

yvess commented May 22, 2015

Setting this to hidden didn't help for me. When you go into any .zfs directory, the server crashes; as long as you don't do that, it doesn't crash. But without going into any .zfs directory, zfs is not really usable :-(.

So the combination of Ubuntu vivid, Linux 3.19.0 and zfs 0.6.4.1-1 doesn't really work. I went back to Ubuntu trusty, Linux 3.16.0 and zfs 0.6.4.1-1. Perhaps it's too early to switch to 3.19 and systemd.

@Bronek

Bronek commented May 24, 2015

I found what seems to be a related bug in Linux mainline, fixed in torvalds/linux@8f502d5 and merged into 4.0.2. Can anyone verify whether this crash still happens in that version?

@Bronek

Bronek commented May 25, 2015

I tried patch #3344 with kernel 4.0.4, and the kernel panic seems to be fixed. There is a small remaining problem: any command which tries to access a non-existent snapshot will die, also triggering a kernel "Oops" (but not a panic), which is also the case with the older kernel 3.18.14. I provided more details in #3030.

@behlendorf
Contributor

Just a quick update: I'm still working on resolving this cleanly, but I haven't had a ton of time to devote to it, as I'm helping to get some other large changes reviewed, finalized, and merged.

@Bronek

Bronek commented May 27, 2015

@behlendorf no worries, your work (also on these other large changes) is hugely appreciated.

@dylanpiergies

I am seeing an issue similar to @Bronek's, and I have a core dump from one of the crashes. Details are in #3243 (comment).

@TioNisla
Author

root@amnesiac:/# modinfo zfs
filename: /lib/modules/4.0.5/extra/zfs/zfs.ko
version: 0.6.4-1
license: CDDL
author: OpenZFS on Linux
description: ZFS
srcversion: 29BA21B62706579B75D5974
depends: spl,znvpair,zcommon,zunicode,zavl
vermagic: 4.0.5 SMP mod_unload

root@amnesiac:/# uname -a
Linux amnesiac 4.0.5 #2 SMP Wed Jun 17 08:45:46 YEKT 2015 x86_64 Intel(R) Xeon(R) CPU X5660 @ 2.80GHz GenuineIntel GNU/Linux

root@amnesiac:/home/ftp/.zfs/snapshot# ls -laR
.:
total 0
dr-xr-xr-x 6 root root 6 июн 17 12:41 .
dr-xr-xr-x 1 root root 0 июн 17 12:40 ..
dr-xr-xr-x 1 root root 0 июн 17 12:42 GMT-2015.06.17-07.41.05
dr-xr-xr-x 1 root root 0 июн 17 12:42 GMT-2015.06.17-07.41.16
dr-xr-xr-x 1 root root 0 июн 17 12:42 GMT-2015.06.17-07.41.26
dr-xr-xr-x 1 root root 0 июн 17 12:42 GMT-2015.06.17-07.41.36
dr-xr-xr-x 1 root root 0 июн 17 12:42 GMT-2015.06.17-07.41.46
dr-xr-xr-x 1 root root 0 июн 17 12:42 GMT-2015.06.17-07.41.57

./GMT-2015.06.17-07.41.05:
ls: cannot access ./GMT-2015.06.17-07.41.05/.: Too many levels of symbolic links
ls: cannot access ./GMT-2015.06.17-07.41.05/..: Too many levels of symbolic links
total 0
d????????? ? ? ? ? ? .
d????????? ? ? ? ? ? ..

:(((

P.S.
And then a kernel panic.

@kernelOfTruth
Contributor

@Bronek

Bronek commented Jul 25, 2015

I was bitten by this today when I upgraded to kernel 4.1.3 and forgot to include patch #3344 in my build of ZoL. It would be nice to have this fixed already, without the need for extra patches...

@Shark

Shark commented Jul 26, 2015

+1, ran into the issue with a stock Arch Linux 4.1.2 kernel…

@behlendorf
Contributor

Agreed. It's one of the few remaining blockers for 0.6.5, so expect it to be fixed fairly soon.

@ioquatix

+1 had this issue.

@TioNisla
Author

spl-8ac6ffecaf
zfs-9d4f86e825

root@amnesiac:~# uname -a
Linux amnesiac 4.1.6 #4 SMP Wed Aug 19 12:33:24 YEKT 2015 x86_64 Intel(R) Xeon(R) CPU X5660 @ 2.80GHz GenuineIntel GNU/Linux
root@amnesiac:~# 
root@amnesiac:~# modinfo spl | head
filename:       /lib/modules/4.1.6/extra/spl/spl.ko
version:        0.6.4-1
license:        GPL
author:         OpenZFS on Linux
description:    Solaris Porting Layer
srcversion:     7842DD3744A9E7A4F9DF103
depends:        zlib_inflate,zlib_deflate
vermagic:       4.1.6 SMP mod_unload 
parm:           spl_hostid:The system hostid. (ulong)
parm:           spl_hostid_path:The system hostid file (/etc/hostid) (charp)
root@amnesiac:~# 
root@amnesiac:~# modinfo zfs | head
filename:       /lib/modules/4.1.6/extra/zfs/zfs.ko
version:        0.6.4-1
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     5E61E672765C60BA3BB8D7A
depends:        spl,znvpair,zcommon,zunicode,zavl
vermagic:       4.1.6 SMP mod_unload 
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)
parm:           zvol_major:Major number for zvol device (uint)
root@amnesiac:~# 
root@amnesiac:~# ls -l /home/.zfs/snapshot/GMT-2015.08.19-12.03.32/
ls: /home/.zfs/snapshot/GMT-2015.08.19-12.03.32/: Too many levels of symbolic links
total 0
root@amnesiac:~# 

and a panic after a few minutes.

behlendorf added a commit to behlendorf/zfs that referenced this issue Aug 30, 2015
Re-factor the .zfs/snapshot auto-mounting code to take into account
changes made to the upstream kernels, and to lay the groundwork for
enabling access to .zfs snapshots via NFS clients.  This patch makes
the following core improvements.

* All actively auto-mounted snapshots are now tracked in two global
trees which are indexed by snapshot name and objset id respectively.
This allows for fast lookups of any auto-mounted snapshot without
needing access to the parent dataset.

* Snapshot entries are added to the tree in zfsctl_snapshot_mount().
However, they are now removed from the tree in the context of the
unmount process.  This eliminates the need for complicated error logic
in zfsctl_snapshot_unmount() to handle unmount failures.

* References are now taken on the snapshot entries in the tree to
ensure they always remain valid while a task is outstanding.

* The MNT_SHRINKABLE flag is set on the snapshot vfsmount_t right
after the auto-mount succeeds.  This allows the kernel to unmount
idle auto-mounted snapshots if needed, removing the need for the
zfsctl_unmount_snapshots() function.

* Snapshots in active use will not be automatically unmounted.  As
long as at least one dentry is revalidated every zfs_expire_snapshot/2
seconds, the auto-unmount expiration timer will be extended.

* Commit torvalds/linux@bafc9b7 caused snapshots auto-mounted by ZFS
to be immediately unmounted when the dentry was revalidated.  This
was a consequence of ZFS invalidating all snapdir dentries to ensure that
negative dentries didn't mask new snapshots.  This patch modifies the
behavior such that only negative dentries are invalidated.  This solves
the issue and may result in a performance improvement.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#3589
Issue openzfs#3344
Issue openzfs#3295
Issue openzfs#3257
Issue openzfs#3243
Issue openzfs#3030
Issue openzfs#2841
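For reference, the zfs_expire_snapshot interval mentioned above is exposed as a runtime module parameter, which is handy for making expiry-related bugs reproduce faster when testing:

cat /sys/module/zfs/parameters/zfs_expire_snapshot        # default: 300 seconds
echo 30 > /sys/module/zfs/parameters/zfs_expire_snapshot  # shorten for testing only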
@Bronek

Bronek commented Aug 31, 2015

This seems to be fixed by #3718, thanks @behlendorf!

behlendorf added a commit to behlendorf/zfs that referenced this issue Aug 31, 2015
Re-factor the .zfs/snapshot auto-mounting code
behlendorf added a commit to behlendorf/zfs that referenced this issue Aug 31, 2015
Re-factor the .zfs/snapshot auto-mounting code
tomgarcia pushed a commit to tomgarcia/zfs that referenced this issue Sep 11, 2015
Re-factor the .zfs/snapshot auto-mounting code
JKDingwall pushed a commit to JKDingwall/zfs that referenced this issue Aug 11, 2016
Re-factor the .zfs/snapshot auto-mounting code

Conflicts:
	config/kernel.m4
	module/zfs/zfs_ctldir.c