Description
This is the second time I've seen this problem. The first time was a week ago, a day after I updated to 0.6.5.3. Previously, 0.6.4.x had been flawless for me.
I searched the bug reports and saw a couple of other people with similar errors, but they were all blaming zfs send or a similar command. I was not running any zfs/pool-related commands, though there may have been normal user-level reads and writes going on at the time. This is a single-device pool. After the panic, zpool status reports no errors and I am still able to access the filesystem normally. However, zpool history or zpool scrub just hangs.
It may also be worth noting that I have another pool on the same system on which zpool history does not hang, so I assume the panic only affected the one pool.
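
For reference, these are roughly the commands I used to check the pool state afterwards (the pool name tank is just a placeholder for the affected pool):

    zpool status -v tank    # completes and reports no errors
    zpool history tank      # hangs on the affected pool only
    zpool scrub tank        # also hangs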
[316806.972922] VERIFY3(sa.sa_magic == 0x2F505A) failed (1446016985 == 3100762)
[316806.972943] PANIC at zfs_vfsops.c:426:zfs_space_delta_cb()
[316806.972949] Showing stack for process 2305
[316806.972952] CPU: 1 PID: 2305 Comm: txg_sync Tainted: P O 4.2.3-1.el7.elrepo.x86_64 #1
[316806.972954] Hardware name: Apple Inc. Macmini4,1/Mac-F2208EC8, BIOS MM41.88Z.0042.B00.1004221740 04/22/10
[316806.972957] 0000000000000000 00000000ccdad7d7 ffff88012fbfb7a8 ffffffff816c73b9
[316806.972960] 0000000000000000 ffffffffa01a89bb ffff88012fbfb7b8 ffffffffa000e5b4
[316806.972963] ffff88012fbfb948 ffffffffa000e67b ffff88012fbfb7d8 ffffffff810aab9c
[316806.972966] Call Trace:
[316806.972975] [] dump_stack+0x45/0x57
[316806.972988] [] spl_dumpstack+0x44/0x50 [spl]
[316806.972994] [] spl_panic+0xbb/0xf0 [spl]
[316806.972998] [] ? __enqueue_entity+0x6c/0x70
[316806.973001] [] ? __slab_free+0x11f/0x217
[316806.973063] [] ? dbuf_rele_and_unlock+0x2dc/0x3e0 [zfs]
[316806.973069] [] ? __schedule+0x2af/0x880
[316806.973077] [] ? spl_kmem_free+0x2a/0x40 [spl]
[316806.973120] [] zfs_space_delta_cb+0xc2/0x190 [zfs]
[316806.973151] [] dmu_objset_userquota_get_ids+0xdc/0x450 [zfs]
[316806.973183] [] dnode_sync+0xed/0x9a0 [zfs]
[316806.973191] [] ? spl_kmem_free+0x2a/0x40 [spl]
[316806.973220] [] ? dnode_add_ref+0x4c/0x100 [zfs]
[316806.973247] [] dmu_objset_sync_dnodes+0x9b/0xc0 [zfs]
[316806.973275] [] dmu_objset_sync+0x1e2/0x2f0 [zfs]
[316806.973302] [] ? recordsize_changed_cb+0x20/0x20 [zfs]
[316806.973329] [] ? dmu_objset_sync+0x2f0/0x2f0 [zfs]
[316806.973359] [] dsl_dataset_sync+0x52/0xa0 [zfs]
[316806.973390] [] dsl_pool_sync+0x9d/0x3f0 [zfs]
[316806.973423] [] spa_sync+0x379/0xb40 [zfs]
[316806.973458] [] txg_sync_thread+0x3bf/0x620 [zfs]
[316806.973492] [] ? txg_fini+0x290/0x290 [zfs]
[316806.973498] [] thread_generic_wrapper+0x71/0x80 [spl]
[316806.973504] [] ? __thread_exit+0x20/0x20 [spl]
[316806.973508] [] kthread+0xd8/0xf0
[316806.973510] [] ? kthread_create_on_node+0x1b0/0x1b0
[316806.973513] [] ret_from_fork+0x3f/0x70
[316806.973516] [] ? kthread_create_on_node+0x1b0/0x1b0