Description
System information
| Type | Version/Name |
|---|---|
| Distribution Name | FreeBSD |
| Distribution Version | 14.1 and 14.0 |
| Kernel Version | 14.1 and 14.0 |
| Architecture | amd64 |
| OpenZFS Version | zfs-kmod-2.2.4-FreeBSD_g256659204 and zfs-kmod-2.2.0-FreeBSD_g95785196f |
Describe the problem you're observing
Our servers occasionally crash due to what looks like in-memory corruption of a block pointer. The crashes are not repeatable: the same server never crashes in exactly the same way twice, so we don't think on-disk corruption is to blame, but they happen often enough that we can't dismiss them as one-off events. We see these crashes approximately once for every 10 PiB of ZFS writes. ZFS encryption is NOT in use on any of the affected servers, and they all have ECC RAM.
We would be very grateful to anybody with any insights into this problem.
Describe how to reproduce the problem
Unfortunately, we don't have any reproduction case. There is no known trigger. The crashing servers have a complicated workload that involves writes coming from userspace processes, from zfs recv, and a little bit from ctld.
Include any warning/errors/backtraces from the system logs
A selection of panic messages:
Solaris(panic): <POOL> blkptr at 0xfffffe15d9de2c00x has invalid TYPE 127
Solaris(panic): blkptr at 0xfffffe0f58099280 has invalid BLK_BIRTH 0
Solaris(panic): <POOL>: blkptr at 0xfffffe144f5f1960x has invalid TYPE 209
Solaris(panic): <POOL>: blkptr at 0xfffffe159796f080x has invalid COMPRESS 72
Solaris(panic): <POOL>: blkptr at 0xfffffe1d1048a480x has invalid TYPE 211
Solaris(panic): <POOL>: blkptr at 0xfffffe1463003700x has invalid COMPRESS 115
Solaris(panic): <POOL>: blkptr at 0xfffffe16a19d6980x has invalid TYPE 66
Solaris(panic): <POOL>: blkptr at 0xfffffe1895d19480x has invalid TYPE 212
Solaris(panic): <POOL>: blkptr at 0xfffffe21b502f4a0x DVA 0 has invalid VDEV 8323199
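For context, each of these panics comes from a range check on one field of the 128-byte block pointer: an object TYPE, a COMPRESS id, a birth txg, or a DVA vdev index outside its legal range means the in-memory structure has been scribbled on. The sketch below is a simplified, self-contained illustration of that style of check, not the OpenZFS source; `struct fake_blkptr` and the `FAKE_*` limits are hypothetical stand-ins for the real `blkptr_t`, `DMU_OT_NUMTYPES`, `ZIO_COMPRESS_FUNCTIONS`, and the pool's vdev count.

```c
/*
 * Illustrative sketch only -- not the OpenZFS source.  It mimics the style
 * of range checks that zfs_blkptr_verify() applies before trusting an
 * in-memory block pointer.  struct fake_blkptr and the FAKE_* limits are
 * hypothetical stand-ins for the real 128-byte blkptr_t, DMU_OT_NUMTYPES,
 * ZIO_COMPRESS_FUNCTIONS, and the pool's vdev count.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define	FAKE_NUM_TYPES		54u	/* stand-in for DMU_OT_NUMTYPES */
#define	FAKE_NUM_COMPRESS	16u	/* stand-in for ZIO_COMPRESS_FUNCTIONS */
#define	FAKE_VDEV_COUNT		8u	/* stand-in for the pool's vdev count */

struct fake_blkptr {
	uint8_t		type;		/* object type (cf. BP_GET_TYPE) */
	uint8_t		compress;	/* compression id (cf. BP_GET_COMPRESS) */
	uint64_t	birth;		/* birth txg (cf. blk_birth) */
	uint64_t	dva0_vdev;	/* vdev index of DVA 0 (cf. DVA_GET_VDEV) */
};

/* Each failed check corresponds to one of the panic strings above. */
static void
fake_blkptr_verify(const struct fake_blkptr *bp)
{
	if (bp->type >= FAKE_NUM_TYPES) {
		fprintf(stderr, "blkptr at %p has invalid TYPE %u\n",
		    (const void *)bp, (unsigned)bp->type);
		abort();
	}
	if (bp->compress >= FAKE_NUM_COMPRESS) {
		fprintf(stderr, "blkptr at %p has invalid COMPRESS %u\n",
		    (const void *)bp, (unsigned)bp->compress);
		abort();
	}
	if (bp->birth == 0) {
		fprintf(stderr, "blkptr at %p has invalid BLK_BIRTH 0\n",
		    (const void *)bp);
		abort();
	}
	if (bp->dva0_vdev >= FAKE_VDEV_COUNT) {
		fprintf(stderr, "blkptr at %p DVA 0 has invalid VDEV %ju\n",
		    (const void *)bp, (uintmax_t)bp->dva0_vdev);
		abort();
	}
}

int
main(void)
{
	/* A blkptr whose TYPE byte was scribbled on, as in the panics above. */
	struct fake_blkptr bp = {
		.type = 127, .compress = 2, .birth = 12345, .dva0_vdev = 0
	};
	fake_blkptr_verify(&bp);
	return (0);
}
```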
And a representative stack trace:
[5269025] panic: Solaris(panic): <POOL>: blkptr at 0xfffffe21b502f4a0x DVA 0 has invalid VDEV 8323199
[5269025] cpuid = 11
[5269025] time = 1727012534
[5269025] KDB: stack backtrace:
[5269025] db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0ddfed7780
[5269025] vpanic() at vpanic+0x132/frame 0xfffffe0ddfed78b0
[5269025] panic() at panic+0x43/frame 0xfffffe0ddfed7910
[5269025] vcmn_err() at vcmn_err+0xeb/frame 0xfffffe0ddfed7a40
[5269025] zfs_panic_recover() at zfs_panic_recover+0x59/frame 0xfffffe0ddfed7aa0
[5269025] zfs_blkptr_verify_log() at zfs_blkptr_verify_log+0x130/frame 0xfffffe0ddfed7c30
[5269025] zfs_blkptr_verify() at zfs_blkptr_verify+0x261/frame 0xfffffe0ddfed7c90
[5269025] zio_free() at zio_free+0x25/frame 0xfffffe0ddfed7cc0
[5269025] dsl_dataset_block_kill() at dsl_dataset_block_kill+0x19d/frame 0xfffffe0ddfed7d20
[5269025] dbuf_write_done() at dbuf_write_done+0x4f/frame 0xfffffe0ddfed7d60
[5269025] arc_write_done() at arc_write_done+0x378/frame 0xfffffe0ddfed7da0
[5269025] zio_done() at zio_done+0xc6e/frame 0xfffffe0ddfed7e10
[5269025] zio_execute() at zio_execute+0x3c/frame 0xfffffe0ddfed7e40
[5269025] taskqueue_run_locked() at taskqueue_run_locked+0x182/frame 0xfffffe0ddfed7ec0
[5269025] taskqueue_thread_loop() at taskqueue_thread_loop+0xc2/frame 0xfffffe0ddfed7ef0
[5269025] fork_exit() at fork_exit+0x7f/frame 0xfffffe0ddfed7f30
[5269025] fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0ddfed7f30
[5269025] --- trap 0xc, rip = 0x826b7fd1a, rsp = 0x857ce3a88, rbp = 0x857ce3aa0 ---
[5269025] KDB: enter: panic
Activity
snajpa commented on Nov 28, 2024
I saw a very similar stack on Linux with 6.10 when I tried a docker pull of n× more than what could fit in RAM with the zfs docker driver. It went away after upgrading to 6.11, but it was exactly arc_write_done, the same path. It looked like some kind of race with dbuf_evict - vpsfreecz#1 (comment)
snajpa commented on Nov 28, 2024
looks like we'll need to hunt this one down :D it might be responsible for some of the unexplained crashes on nodes where it's impractical to take a full memory dump
asomers commented on Mar 4, 2025
Progress update: we're still seeing these crashes. If anything, they've grown more frequent over time. A new theory is that the cause may be code that accesses db_data without taking db_rwlock. There are many functions that lock db_mtx instead, and dbuf_assign_arcbuf doesn't take either lock. But I have hundreds more locations to check.
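To make that theory concrete, here is a toy model of the pattern under suspicion; it is not OpenZFS code. `struct fake_dbuf` and the function names are hypothetical, and POSIX pthread locks stand in for the kernel's db_mtx (kmutex_t) and db_rwlock (krwlock_t). The point is that a reader which dereferences db_data while holding only db_mtx, or no lock at all, does not exclude a writer that swaps the buffer under db_rwlock, and can read through a stale or half-published pointer.

```c
/*
 * Toy model only -- not OpenZFS code.  fake_dbuf stands in for
 * dmu_buf_impl_t, with pthread locks playing the role of db_mtx and
 * db_rwlock.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct fake_dbuf {
	pthread_mutex_t  db_mtx;     /* protects dbuf state, not the payload */
	pthread_rwlock_t db_rwlock;  /* protects the db_data pointer/contents */
	void            *db_data;    /* the buffer under suspicion */
	size_t           db_size;
};

/* Safe pattern: touch db_data only while holding db_rwlock as reader. */
static uint64_t
read_first_word_safe(struct fake_dbuf *db)
{
	uint64_t w;

	pthread_rwlock_rdlock(&db->db_rwlock);
	memcpy(&w, db->db_data, sizeof (w));
	pthread_rwlock_unlock(&db->db_rwlock);
	return (w);
}

/* Writer: swap in a new buffer while holding db_rwlock as writer. */
static void
replace_buffer(struct fake_dbuf *db, void *newbuf, size_t newsize)
{
	void *old;

	pthread_rwlock_wrlock(&db->db_rwlock);
	old = db->db_data;
	db->db_data = newbuf;
	db->db_size = newsize;
	pthread_rwlock_unlock(&db->db_rwlock);
	free(old);
}

/*
 * Suspect pattern: holding only db_mtx does not exclude replace_buffer(),
 * so the load below can race the swap and observe freed or inconsistent
 * memory -- the sort of transient corruption that could later surface as
 * a bogus block pointer.
 */
static uint64_t
read_first_word_suspect(struct fake_dbuf *db)
{
	uint64_t w;

	pthread_mutex_lock(&db->db_mtx);
	memcpy(&w, db->db_data, sizeof (w));   /* no db_rwlock held */
	pthread_mutex_unlock(&db->db_mtx);
	return (w);
}

int
main(void)
{
	struct fake_dbuf db;

	pthread_mutex_init(&db.db_mtx, NULL);
	pthread_rwlock_init(&db.db_rwlock, NULL);
	db.db_size = 512;
	db.db_data = calloc(1, db.db_size);

	(void) read_first_word_safe(&db);
	(void) read_first_word_suspect(&db);
	replace_buffer(&db, calloc(1, 1024), 1024);

	free(db.db_data);
	return (0);
}
```

In the real code the audit described above amounts to checking that every db_data access either holds db_rwlock as reader or is otherwise serialized against buffer replacement.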
asomers commented on Mar 18, 2025
See discussion of one proposed solution in #17118.
cperciva commented on Sep 27, 2025
I have no opinion on how to fix this issue, but as FreeBSD release engineer I certainly hope it can be resolved before 15.0-RELEASE. ;-)