
kernel panic on write #14463

Closed
tongfw opened this issue Feb 6, 2023 · 5 comments
Labels
Type: Defect Incorrect behavior (e.g. crash, hang)

Comments


tongfw commented Feb 6, 2023

System information

Type Version/Name
Distribution Name CentOS Linux
Distribution Version 8.4.2105
Kernel Version 4.18.0-305.3.1.el8
Architecture x86_64
OpenZFS Version 2.1.5

Describe the problem you're observing

The kernel crashed during an Intel QAT 8960 compression feature test.

Describe how to reproduce the problem

Database backup data is written to ZFS at high throughput.
With zfs_qat_compress_disable=1 and zfs_qat_checksum_disable=1, throughput is low and everything is fine.
With zfs_qat_compress_disable=0 and zfs_qat_checksum_disable=0, throughput increases and the kernel panicked twice with the same coredump backtrace.

Include any warning/errors/backtraces from the system logs

vmcore-dmesg.txt
[1715883.239105] cpaCySymRemoveSession() - : There are 1 requests pending
[1715883.244156] cpaCySymRemoveSession() - : There are 1 requests pending
[1715884.737480] cpaCySymRemoveSession() - : There are 1 requests pending
[1715884.947721] cpaCySymRemoveSession() - : There are 1 requests pending
[1715885.278642] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[1715885.279475] PGD 3b0f49d067 P4D 3b0f49d067 PUD 3e1f3cd067 PMD 0
[1715885.280207] Oops: 0000 [#1] SMP NOPTI
[1715885.280920] CPU: 23 PID: 19884 Comm: ios-io Kdump: loaded Tainted: P IOE --------- - - 4.18.0-305.3.1.el8.x86_64 #1
[1715885.282371] Hardware name: Dell Inc. PowerEdge R940/0GCTJ1, BIOS 2.16.1 08/17/2022
[1715885.283174] RIP: 0010:arc_release+0x1a/0x6e0 [zfs]
[1715885.283888] Code: ac ab 5e e8 66 66 2e 0f 1f 84 00 00 00 00 00 90 0f 1f 44 00 00 41 57 41 56 41 55 41 54 4c 8d 67 10 55 53 48 89 fb 48 83 ec 48 <4c> 8b 37 4c 89 e7 e8 cb 45 e5 e8 48 8d 43 30 65 4c 8b 04
25 40 5c
[1715885.285427] RSP: 0018:ffffb0472cd7faa8 EFLAGS: 00010292
[1715885.286259] RAX: dead000000000200 RBX: 0000000000000000 RCX: 0000000000000001
[1715885.286996] RDX: dead000000000100 RSI: ffff9c8a6ea1dc80 RDI: 0000000000000000
[1715885.287643] RBP: ffff9ca375697c00 R08: ffffb0472cd7faa8 R09: ffff9ca375697c00
[1715885.288366] R10: ffff9c82aeaec100 R11: 0000000000000001 R12: 0000000000000010
[1715885.288941] R13: 0000000000000000 R14: ffff9c8a6ea1dc80 R15: ffff9c2c8156e800
[1715885.289595] FS: 00007f47f5351700(0000) GS:ffff9cab7f540000(0000) knlGS:0000000000000000
[1715885.290289] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1715885.291058] CR2: 0000000000000000 CR3: 0000003e63b14006 CR4: 00000000007726e0
[1715885.291699] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[1715885.292368] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[1715885.293210] PKRU: 55555554
[1715885.293718] Call Trace:
[1715885.294309] dbuf_dirty+0x7dc/0x950 [zfs]
[1715885.295007] dmu_write_uio_dnode+0x6b/0x140 [zfs]
[1715885.295650] dmu_write_uio_dbuf+0x47/0x60 [zfs]
[1715885.296508] zfs_write+0x493/0xc90 [zfs]
[1715885.297193] ? terminate_walk+0xcc/0xe0
[1715885.297850] zpl_iter_write+0x100/0x160 [zfs]
[1715885.298646] new_sync_write+0x112/0x160
[1715885.299223] vfs_write+0xa5/0x1a0
[1715885.299747] ksys_pwrite64+0x61/0xa0
[1715885.300387] do_syscall_64+0x5b/0x1a0
[1715885.301091] entry_SYSCALL_64_after_hwframe+0x65/0xca
[1715885.301842] RIP: 0033:0x7f5177d2d2b7
[1715885.302347] Code: 41 54 49 89 d4 55 48 89 f5 53 89 fb 48 83 ec 18 e8 ae f2 ff ff 4d 89 ea 4c 89 e2 48 89 ee 41 89 c0 89 df b8 12 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 35 44 89 c7 48 89 44 24 08 e8 e4 f2
ff ff 48
[1715885.303465] RSP: 002b:00007f47f534a690 EFLAGS: 00000293 ORIG_RAX: 0000000000000012
[1715885.304166] RAX: ffffffffffffffda RBX: 000000000000015c RCX: 00007f5177d2d2b7
[1715885.304844] RDX: 0000000000008000 RSI: 00007f3eb5f1d000 RDI: 000000000000015c
[1715885.305427] RBP: 00007f3eb5f1d000 R08: 0000000000000000 R09: 00007f47f534a4cb
[1715885.305910] R10: 0000000000040000 R11: 0000000000000293 R12: 0000000000008000
[1715885.306486] R13: 0000000000040000 R14: 000000000000015c R15: 000000000000015c

crash> bt
PID: 19884 TASK: ffff9be85c1adc40 CPU: 23 COMMAND: "ios-io"
#0 [ffffb0472cd7f7c8] machine_kexec at ffffffffaa26156e
#1 [ffffb0472cd7f820] __crash_kexec at ffffffffaa38f99d
#2 [ffffb0472cd7f8e8] crash_kexec at ffffffffaa39088d
#3 [ffffb0472cd7f900] oops_end at ffffffffaa22434d
#4 [ffffb0472cd7f920] no_context at ffffffffaa27262f
#5 [ffffb0472cd7f978] __bad_area_nosemaphore at ffffffffaa27298c
#6 [ffffb0472cd7f9c0] do_page_fault at ffffffffaa273267
#7 [ffffb0472cd7f9f0] page_fault at ffffffffaac010fe
[exception RIP: arc_release+26]
RIP: ffffffffc1cf5e2a RSP: ffffb0472cd7faa8 RFLAGS: 00010292
RAX: dead000000000200 RBX: 0000000000000000 RCX: 0000000000000001
RDX: dead000000000100 RSI: ffff9c8a6ea1dc80 RDI: 0000000000000000
RBP: ffff9ca375697c00 R8: ffffb0472cd7faa8 R9: ffff9ca375697c00
R10: ffff9c82aeaec100 R11: 0000000000000001 R12: 0000000000000010
R13: 0000000000000000 R14: ffff9c8a6ea1dc80 R15: ffff9c2c8156e800
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#8 [ffffb0472cd7fb20] dbuf_dirty at ffffffffc1d08c7c [zfs]
#9 [ffffb0472cd7fba8] dmu_write_uio_dnode at ffffffffc1d1157b [zfs]
#10 [ffffb0472cd7fc08] dmu_write_uio_dbuf at ffffffffc1d11697 [zfs]
#11 [ffffb0472cd7fc30] zfs_write at ffffffffc1de9383 [zfs]
#12 [ffffb0472cd7fdc8] zpl_iter_write at ffffffffc1e25720 [zfs]
#13 [ffffb0472cd7fe48] new_sync_write at ffffffffaa516452
#14 [ffffb0472cd7fed0] vfs_write at ffffffffaa519a45
#15 [ffffb0472cd7ff00] ksys_pwrite64 at ffffffffaa519ea1
#16 [ffffb0472cd7ff38] do_syscall_64 at ffffffffaa20420b
#17 [ffffb0472cd7ff50] entry_SYSCALL_64_after_hwframe at ffffffffaac000ad
RIP: 00007f5177d2d2b7 RSP: 00007f47f534a690 RFLAGS: 00000293
RAX: ffffffffffffffda RBX: 000000000000015c RCX: 00007f5177d2d2b7
RDX: 0000000000008000 RSI: 00007f3eb5f1d000 RDI: 000000000000015c

crash> struct dmu_buf_impl_t ffff9c8a6ea1dc80

struct dmu_buf_impl_t {
db = {
db_object = 23103756,
db_offset = 262144,
db_size = 131072,
db_data = 0x0
},
db_objset = 0xffff9c2c8156e800,
db_dnode_handle = 0xffff9c9628ce0780,
db_parent = 0xffff9bd484a16400,
db_hash_next = 0x0,
db_link = {
avl_child = {0x0, 0x0},
avl_pcb = 18446633935793382465
},
db_blkid = 2,
db_blkptr = 0xffffb0483bbfb100,
db_level = 0 '\000',
db_rwlock = {
rw_rwlock = {
count = {
counter = 0
},
wait_list = {
next = 0xffff9c8a6ea1dcf8,
prev = 0xffff9c8a6ea1dcf8
},
wait_lock = {
raw_lock = {
{
val = {
counter = 0
},
{
locked = 0 '\000',
pending = 0 '\000'
},
{
locked_pending = 0,
tail = 0
}
}
}
},
osq = {
tail = {
counter = 0
}
},
{
owner = {
counter = 0
},
rh_kabi_hidden_39 = {
owner = 0x0
},
{}
}
},
rw_owner = 0x0
},
db_buf = 0x0,
db_mtx = {
m_mutex = {
owner = {
counter = -110052696728512
},
wait_lock = {
{
rlock = {
raw_lock = {
{
val = {
counter = 0
},
{
locked = 0 '\000',
pending = 0 '\000'
},
{
locked_pending = 0,
tail = 0
}
}
}
}
}
},
osq = {
tail = {
counter = 0
}
},
wait_list = {
next = 0xffff9c8a6ea1dd38,
prev = 0xffff9c8a6ea1dd38
}
},
m_lock = {
{
rlock = {
raw_lock = {
{
val = {
counter = 0
},
{
locked = 0 '\000',
pending = 0 '\000'
},
{
locked_pending = 0,
tail = 0
}
}
}
}
}
},
m_owner = 0xffff9be85c1adc40
},
db_state = DB_UNCACHED,
db_holds = {
rc_count = 1
},
db_changed = {
cv_magic = 879052276,
cv_event = {
lock = {
{
rlock = {
raw_lock = {
{
val = {
counter = 0
},
{
locked = 0 '\000',
pending = 0 '\000'
},
{
locked_pending = 0,
tail = 0
}
}
}
}
}
},
head = {
next = 0xffff9c8a6ea1dd78,
prev = 0xffff9c8a6ea1dd78
}
},
cv_destroy = {
lock = {
{
rlock = {
raw_lock = {
{
val = {
counter = 0
},
{
locked = 0 '\000',
pending = 0 '\000'
},
{
locked_pending = 0,
tail = 0
}
}
}
}
}
},
head = {
next = 0xffff9c8a6ea1dd90,
prev = 0xffff9c8a6ea1dd90
}
},
cv_refs = {
counter = 1
},
cv_waiters = {
counter = 0
},
cv_mutex = 0x0
},
db_data_pending = 0x0,
db_dirty_records = {
list_size = 400,
list_offset = 40,
list_head = {
next = 0xffff9c8a6ea1ddc8,
prev = 0xffff9c8a6ea1ddc8
}
},
db_cache_link = {
next = 0xdead000000000100,
prev = 0xdead000000000200
},
db_caching_status = DB_NO_CACHE,
db_user = 0x0,
db_user_immediate_evict = 0 '\000',
db_freed_in_flight = 0 '\000',
db_pending_evict = 0 '\000',
db_dirtycnt = 0 '\000'
}

The current build is not compiled with debug enabled; a test with a debug-enabled build is on the way.
It looks like an internal cache logic bug, and it's hard for me to fix.
I hope you can take a look at it, thank you.

@tongfw tongfw added the Type: Defect Incorrect behavior (e.g. crash, hang) label Feb 6, 2023
@tongfw tongfw changed the title from kernel panic on to kernel panic on write Feb 6, 2023

naivekun commented Mar 10, 2023

Same issue here. I'm using a Xeon D-2177NT with a c6xx QAT device, and write/read operations sometimes produce the error message kernel: [ 1236.242677] cpaCySymRemoveSession() - : There are 1 requests pending. When using fio for a heavy load test, it panics on write.

The return value of the call at cpaCySymRemoveSession(cy_inst_handle, cy_session_ctx); may not be handled correctly. According to the Intel document qat-api-reference-cryptographic.pdf, the return value can be CPA_STATUS_RETRY, which produces the message kernel: [ 1236.242677] cpaCySymRemoveSession() - : There are 1 requests pending.

Full panic log:


Mar 10 23:11:55 debian-celmec kernel: [ 1237.554106] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:11:55 debian-celmec kernel: [ 1237.605675] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:11:55 debian-celmec kernel: [ 1237.608246] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:14:31 debian-celmec kernel: [ 1392.958575] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:14:31 debian-celmec kernel: [ 1392.980387] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:14:31 debian-celmec kernel: [ 1392.984388] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:14:31 debian-celmec kernel: [ 1393.022167] cpaCySymRemoveSession() - : There are 1 requests pending
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747214] PGD 0 P4D 0 
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747226] Oops: 0000 [#1] SMP PTI
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747240] CPU: 10 PID: 490947 Comm: fio Tainted: P           OE     5.10.0-21-amd64 #1 Debian 5.10.162-1
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747271] Hardware name: CELESTICA-CSS MEC Server/MEC Server MB, BIOS MEC3.00.00 03/23/2020
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747345] RIP: 0010:arc_release+0x23/0x7c0 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747363] Code: 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 57 41 56 41 55 4c 8d 6f 10 41 54 49 89 fc 55 53 65 48 8b 1c 25 c0 fb 01 00 48 83 ec 48 <4c> 8b 3f 4c 89 ef e8 32 de f6 da 49 8d 44 24 30 49 89 5c 24 38 48
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747421] RSP: 0018:ffffc22bc0edf988 EFLAGS: 00010296
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747440] RAX: dead000000000122 RBX: ffff9c60e9fc5c40 RCX: 0000000000000001
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747463] RDX: dead000000000100 RSI: ffff9c626cf08600 RDI: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747486] RBP: ffff9c631d81eba0 R08: 0000000000000200 R09: ffff9c6456cf6200
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747510] R10: ffff9c63dee04920 R11: 0000000000000001 R12: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747533] R13: 0000000000000010 R14: ffff9c631d81ecc8 R15: ffff9c626cf08600
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747557] FS:  00007f088d641700(0000) GS:ffff9c681fc80000(0000) knlGS:0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747583] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747603] CR2: 0000000000000000 CR3: 00000003910e4001 CR4: 00000000007706e0
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747626] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747650] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747672] PKRU: 55555554
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747683] Call Trace:
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747744]  dbuf_dirty+0x81d/0x9d0 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747794]  dmu_write_uio_dnode+0x71/0x1a0 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747845]  dmu_write_uio_dbuf+0x4a/0x70 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747915]  zfs_write+0x496/0xc80 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.747984]  zpl_iter_write+0x103/0x170 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748004]  aio_write+0xf4/0x1f0
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748019]  ? __check_object_size+0x5a/0x160
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748036]  ? io_submit_one+0x6d/0x870
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748052]  ? kmem_cache_alloc+0xed/0x1f0
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748067]  io_submit_one+0x47f/0x870
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748084]  __x64_sys_io_submit+0x82/0x180
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748100]  ? __x64_sys_io_getevents+0x59/0xc0
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748118]  do_syscall_64+0x30/0x80
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748133]  entry_SYSCALL_64_after_hwframe+0x61/0xc6
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748151] RIP: 0033:0x7f08970452e9
Mar 10 23:14:42 debian-celmec kernel: [ 1403.748166] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 77 8b 0d 00 f7 d8 64 89 01 48
Mar 10 23:14:42 debian-celmec kernel: [ 1403.749523] RSP: 002b:00007ffedc61eea8 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
Mar 10 23:14:42 debian-celmec kernel: [ 1403.750216] RAX: ffffffffffffffda RBX: 00007f088d63f890 RCX: 00007f08970452e9
Mar 10 23:14:42 debian-celmec kernel: [ 1403.750905] RDX: 0000562b1815cb48 RSI: 0000000000000001 RDI: 00007f088d61c000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.751584] RBP: 00007f088d61c000 R08: 00007f0884c91000 R09: 00000000000005c8
Mar 10 23:14:42 debian-celmec kernel: [ 1403.752268] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
Mar 10 23:14:42 debian-celmec kernel: [ 1403.752951] R13: 0000000000000000 R14: 0000562b1815cb48 R15: 0000562b1814a3a0
Mar 10 23:14:42 debian-celmec kernel: [ 1403.753635] Modules linked in: zfs(POE) qat_api(OE) usdm_drv(OE) qat_c62x(OE) intel_qat(OE) uio authenc ipmi_ssif intel_rapl_msr intel_rapl_common zunicode(POE) zzstd(OE) sha512_ssse3 skx_edac sha512_generic nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp zlua(OE) coretemp zavl(POE) icp(POE) kvm_intel kvm zcommon(POE) irqbypass ast znvpair(POE) nls_ascii nls_cp437 drm_vram_helper spl(OE) rapl vfat drm_ttm_helper fat intel_cstate ttm drm_kms_helper iTCO_wdt mei_me intel_pmc_bxt iTCO_vendor_support intel_uncore efi_pstore pcspkr joydev cec watchdog sg mei ioatdma acpi_ipmi evdev ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter acpi_pad button drm fuse configfs efivarfs ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 crc32c_generic hid_generic usbhid hid sd_mod xhci_pci crc32_pclmul crc32c_intel xhci_hcd ahci libahci ghash_clmulni_intel aesni_intel i40e usbcore libaes libata crypto_simd cryptd glue_helper nvme hinic igb nvme_core scsi_mod t10_pi i2c_algo_bit crc_t10dif dca
Mar 10 23:14:42 debian-celmec kernel: [ 1403.753708]  crct10dif_generic i2c_i801 crct10dif_pclmul ptp crct10dif_common i2c_smbus pps_core usb_common wmi [last unloaded: uio]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.761519] CR2: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.762315] ---[ end trace 92b369c08bc620d9 ]---
Mar 10 23:14:42 debian-celmec kernel: [ 1403.840109] RIP: 0010:arc_release+0x23/0x7c0 [zfs]
Mar 10 23:14:42 debian-celmec kernel: [ 1403.840906] Code: 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 57 41 56 41 55 4c 8d 6f 10 41 54 49 89 fc 55 53 65 48 8b 1c 25 c0 fb 01 00 48 83 ec 48 <4c> 8b 3f 4c 89 ef e8 32 de f6 da 49 8d 44 24 30 49 89 5c 24 38 48
Mar 10 23:14:42 debian-celmec kernel: [ 1403.842557] RSP: 0018:ffffc22bc0edf988 EFLAGS: 00010296
Mar 10 23:14:42 debian-celmec kernel: [ 1403.843394] RAX: dead000000000122 RBX: ffff9c60e9fc5c40 RCX: 0000000000000001
Mar 10 23:14:42 debian-celmec kernel: [ 1403.844235] RDX: dead000000000100 RSI: ffff9c626cf08600 RDI: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.845064] RBP: ffff9c631d81eba0 R08: 0000000000000200 R09: ffff9c6456cf6200
Mar 10 23:14:42 debian-celmec kernel: [ 1403.845890] R10: ffff9c63dee04920 R11: 0000000000000001 R12: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.846711] R13: 0000000000000010 R14: ffff9c631d81ecc8 R15: ffff9c626cf08600
Mar 10 23:14:42 debian-celmec kernel: [ 1403.847543] FS:  00007f088d641700(0000) GS:ffff9c681fc80000(0000) knlGS:0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.848387] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 10 23:14:42 debian-celmec kernel: [ 1403.849225] CR2: 0000000000000000 CR3: 00000003910e4001 CR4: 00000000007706e0
Mar 10 23:14:42 debian-celmec kernel: [ 1403.850079] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 10 23:14:42 debian-celmec kernel: [ 1403.850935] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Mar 10 23:14:42 debian-celmec kernel: [ 1403.851785] PKRU: 55555554


After adding some logging to qat_compress.c, the kernel panic appears to be caused by an invalid gzip checksum. I noticed that when a QAT decompression error is followed by a zlib software decompression error, the kernel panics immediately:

Mar 12 04:12:56 debian-celmec kernel: [  271.020294] ZFS: QAT: write checksum fd7571a8
Mar 12 04:12:56 debian-celmec kernel: [  271.020371] ZFS: QAT: write checksum 975b375d
Mar 12 04:12:56 debian-celmec kernel: [  271.020411] ZFS: QAT: write checksum 7eddec3f
Mar 12 04:12:56 debian-celmec kernel: [  271.020437] ZFS: QAT: write checksum ee56877
Mar 12 04:12:56 debian-celmec kernel: [  271.020439] ZFS: QAT: write checksum b56edfcb
Mar 12 04:12:56 debian-celmec kernel: [  271.020498] ZFS: QAT: write checksum 8672df40
Mar 12 04:12:56 debian-celmec kernel: [  271.020544] ZFS: QAT: write checksum a72687ff
Mar 12 04:12:56 debian-celmec kernel: [  271.020573] ZFS: QAT: write checksum dd80386c
Mar 12 04:12:56 debian-celmec kernel: [  271.020581] ZFS: QAT: write checksum c30d1fbe
Mar 12 04:12:56 debian-celmec kernel: [  271.020601] ZFS: QAT: write checksum fc777b36
Mar 12 04:12:56 debian-celmec kernel: [  271.020604] ZFS: QAT: write checksum 627cb344
Mar 12 04:12:56 debian-celmec kernel: [  271.020628] ZFS: QAT: write checksum ba74d97e
Mar 12 04:12:56 debian-celmec kernel: [  271.020696] ZFS: QAT: write checksum b2643f64
Mar 12 04:12:56 debian-celmec kernel: [  271.020718] ZFS: QAT: write checksum 8c58504c
Mar 12 04:12:56 debian-celmec kernel: [  271.020746] ZFS: QAT: write checksum eb369771
Mar 12 04:12:56 debian-celmec kernel: [  271.020770] ZFS: QAT: write checksum f1fef15a
Mar 12 04:12:56 debian-celmec kernel: [  271.020798] ZFS: QAT: write checksum 59692bfd
Mar 12 04:12:56 debian-celmec kernel: [  271.020856] ZFS: QAT: write checksum 25496a67
Mar 12 04:12:56 debian-celmec kernel: [  271.021651] ZFS: QAT: write checksum 5443a1cb
Mar 12 04:12:57 debian-celmec kernel: [  271.115884] ZFS: QAT: cpaDcDecompressData checksum not success but -1, adler32=d3e94779, dc_results.checksum=cb4c3b11
Mar 12 04:12:57 debian-celmec kernel: [  271.115886] qat decompress failed: -1, do it again with software
Mar 12 04:12:57 debian-celmec kernel: [  271.116321] qat software decompress failed: -1
Mar 12 04:12:57 debian-celmec kernel: [  271.116478] ZFS: QAT: cpaDcDecompressData checksum not success but -1, adler32=d3e94779, dc_results.checksum=cb4c3b11
Mar 12 04:12:57 debian-celmec kernel: [  271.116479] qat decompress failed: -1, do it again with software
Mar 12 04:12:57 debian-celmec kernel: [  271.116633] qat software decompress failed: -1
Mar 12 04:12:57 debian-celmec kernel: [  271.116718] PGD 0 P4D 0
Mar 12 04:12:57 debian-celmec kernel: [  271.116731] Oops: 0000 [#1] SMP PTI
Mar 12 04:12:57 debian-celmec kernel: [  271.116747] CPU: 3 PID: 28265 Comm: fio Tainted: P           OE     5.10.0-21-amd64 #1 Debian 5.10.162-1
Mar 12 04:12:57 debian-celmec kernel: [  271.116780] Hardware name: CELESTICA-CSS MEC Server/MEC Server MB, BIOS MEC3.00.00 03/23/2020
Mar 12 04:12:57 debian-celmec kernel: [  271.116858] RIP: 0010:arc_release+0x23/0x7c0 [zfs]
Mar 12 04:12:57 debian-celmec kernel: [  271.116878] Code: 1f 84 00 00 00 00 00 0f 1f 44 00 00 41 57 41 56 41 55 4c 8d 6f 10 41 54 49 89 fc 55 53 65 48 8b 1c 25 c0 fb 01 00 48 83 ec 48 <4c> 8b 3f 4c 89 ef e8 32 ce 11 e6 49 8d 44 24 30 49 89 5c 24 38 48
Mar 12 04:12:57 debian-celmec kernel: [  271.116942] RSP: 0018:ffffa867c514f988 EFLAGS: 00010296
Mar 12 04:12:57 debian-celmec kernel: [  271.116963] RAX: dead000000000122 RBX: ffff9d08b671dc40 RCX: 0000000000000001
Mar 12 04:12:57 debian-celmec kernel: [  271.116988] RDX: dead000000000100 RSI: ffff9d08b8298f00 RDI: 0000000000000000
Mar 12 04:12:57 debian-celmec kernel: [  271.117014] RBP: ffff9d085ebcd0b8 R08: 0000000000000200 R09: ffff9d09e4236600
Mar 12 04:12:57 debian-celmec kernel: [  271.117040] R10: ffff9d0919d09380 R11: 0000000000000001 R12: 0000000000000000
Mar 12 04:12:57 debian-celmec kernel: [  271.117065] R13: 0000000000000010 R14: ffff9d085ebcd1e0 R15: ffff9d08b8298f00
Mar 12 04:12:57 debian-celmec kernel: [  271.117092] FS:  00007f651f9ed700(0000) GS:ffff9d0f1fac0000(0000) knlGS:0000000000000000
Mar 12 04:12:57 debian-celmec kernel: [  271.117121] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 12 04:12:57 debian-celmec kernel: [  271.117142] CR2: 0000000000000000 CR3: 0000000225888001 CR4: 00000000007706e0
Mar 12 04:12:57 debian-celmec kernel: [  271.117168] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Mar 12 04:12:57 debian-celmec kernel: [  271.117194] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Mar 12 04:12:57 debian-celmec kernel: [  271.117219] PKRU: 55555554
Mar 12 04:12:57 debian-celmec kernel: [  271.117230] Call Trace:
Mar 12 04:12:57 debian-celmec kernel: [  271.117298]  dbuf_dirty+0x81d/0x9d0 [zfs]
Mar 12 04:12:57 debian-celmec kernel: [  271.117353]  dmu_write_uio_dnode+0x71/0x1a0 [zfs]
Mar 12 04:12:57 debian-celmec kernel: [  271.117409]  dmu_write_uio_dbuf+0x4a/0x70 [zfs]
Mar 12 04:12:57 debian-celmec kernel: [  271.117485]  zfs_write+0x496/0xc80 [zfs]
Mar 12 04:12:57 debian-celmec kernel: [  271.117506]  ? enqueue_entity+0x163/0x760
Mar 12 04:12:57 debian-celmec kernel: [  271.117579]  zpl_iter_write+0x103/0x170 [zfs]
Mar 12 04:12:57 debian-celmec kernel: [  271.117600]  aio_write+0xf4/0x1f0
Mar 12 04:12:57 debian-celmec kernel: [  271.117616]  ? __check_object_size+0x5a/0x160
Mar 12 04:12:57 debian-celmec kernel: [  271.117635]  ? io_submit_one+0x6d/0x870
Mar 12 04:12:57 debian-celmec kernel: [  271.117651]  ? kmem_cache_alloc+0xed/0x1f0
Mar 12 04:12:57 debian-celmec kernel: [  271.117669]  io_submit_one+0x47f/0x870
Mar 12 04:12:57 debian-celmec kernel: [  271.117687]  ? nvme_process_cq+0x14d/0x220 [nvme]
Mar 12 04:12:57 debian-celmec kernel: [  271.117707]  __x64_sys_io_submit+0x82/0x180
Mar 12 04:12:57 debian-celmec kernel: [  271.117725]  ? __x64_sys_io_getevents+0x59/0xc0

I traced the invalid checksum value d3e94779 back through the earlier dmesg output (cpaDcDecompressData checksum not success but -1, adler32=d3e94779, dc_results.checksum=cb4c3b11). As shown below, the invalid checksum always starts with a QAT compression CPA_DC_OVERFLOW (-11) error. I think we have to reset the checksum seed, otherwise QAT continues the checksum over src[0:dc_results.consumed] and eventually writes an invalid checksum to disk.

Mar 12 04:12:56 debian-celmec kernel: [  270.511056] ZFS: QAT: write checksum 46debbb0
Mar 12 04:12:56 debian-celmec kernel: [  270.511089] ZFS: QAT: write checksum 55637393
Mar 12 04:12:56 debian-celmec kernel: [  270.511093] ZFS: QAT: write checksum da84ade
Mar 12 04:12:56 debian-celmec kernel: [  270.511102] ZFS: QAT: write checksum 1ed41e01
Mar 12 04:12:56 debian-celmec kernel: [  270.511111] ZFS: QAT: write checksum 197b413f
Mar 12 04:12:56 debian-celmec kernel: [  270.511160] ZFS: QAT: write checksum 6470ecfd
Mar 12 04:12:56 debian-celmec kernel: [  270.511177] ZFS: QAT: write checksum 3d70dcf5
Mar 12 04:12:56 debian-celmec kernel: [  270.511232] ZFS: QAT: write checksum 4cd364dc
Mar 12 04:12:56 debian-celmec kernel: [  270.511237] ZFS: QAT: write checksum bc4cfb42
Mar 12 04:12:56 debian-celmec kernel: [  270.511334] ZFS: QAT: cpaDcCompressData dc_results.status not success but -11
Mar 12 04:12:56 debian-celmec kernel: [  270.511336] ZFS: QAT: dc_results.status dc_results.produced=65535, dc_results.consumed=65530
Mar 12 04:12:56 debian-celmec kernel: [  270.511338] ZFS: QAT: dc_results.status src=e5d63000, dst=8f5d000
Mar 12 04:12:56 debian-celmec kernel: [  270.511340] ZFS: QAT: dc_results.status src[0]=0, dst[0]=120
Mar 12 04:12:56 debian-celmec kernel: [  270.511347] ZFS: QAT: write checksum 845a272a
Mar 12 04:12:56 debian-celmec kernel: [  270.511408] ZFS: QAT: write checksum 1460df6e
Mar 12 04:12:56 debian-celmec kernel: [  270.511518] ZFS: QAT: write checksum 8274369e
Mar 12 04:12:56 debian-celmec kernel: [  270.511582] ZFS: QAT: write checksum bebd7440
Mar 12 04:12:56 debian-celmec kernel: [  270.511629] ZFS: QAT: write checksum f4736805
Mar 12 04:12:56 debian-celmec kernel: [  270.511659] qat compress failed and changed dst data!!!
Mar 12 04:12:56 debian-celmec kernel: [  270.511661] qat compress failed with return code -127
Mar 12 04:12:56 debian-celmec kernel: [  270.511663] qat compress incompressible, d_start=8f5d000, d_len=114688, dstlen=114688, s_start=e5d63000, s_len=131072, src[0]=0, dst[0]=78
Mar 12 04:12:56 debian-celmec kernel: [  270.511671] ZFS: QAT: write checksum 126a8040
Mar 12 04:12:56 debian-celmec kernel: [  270.511683] ZFS: QAT: write checksum 4b75c517
Mar 12 04:12:56 debian-celmec kernel: [  270.511705] ZFS: QAT: write checksum 419598fa
Mar 12 04:12:56 debian-celmec kernel: [  270.511728] ZFS: QAT: write checksum a4c89ec6
Mar 12 04:12:56 debian-celmec kernel: [  270.511760] ZFS: QAT: write checksum e73d9643
Mar 12 04:12:56 debian-celmec kernel: [  270.511817] ZFS: QAT: write checksum a6770267
Mar 12 04:12:56 debian-celmec kernel: [  270.511822] ZFS: QAT: write checksum d3e94779
Mar 12 04:12:56 debian-celmec kernel: [  270.511874] ZFS: QAT: write checksum 697b2789
Mar 12 04:12:56 debian-celmec kernel: [  270.511894] ZFS: QAT: write checksum a9624552
Mar 12 04:12:56 debian-celmec kernel: [  270.511898] ZFS: QAT: write checksum c9752d44
Mar 12 04:12:56 debian-celmec kernel: [  270.511942] ZFS: QAT: write checksum d7ed15c
Mar 12 04:12:56 debian-celmec kernel: [  270.511962] ZFS: QAT: write checksum 106e5761

I checked the Intel QAT document 743912-qat-programmers-guide-v2.0.pdf, page 75:

It is necessary for an application to seed the checksum whether it wishes to continue the series of requests or to start a new one; only in the latter case, the seed is 0 for CRC32 and 1 for Adler32.

The no session API will work with all versions of QAT hardware but does not support stateful operation, as without a session no state can be maintained between requests. It is still possible to seed checksums for CRC32 and Adler32 by setting the checksum field of the CpaDcRqResults to the seed checksum value before submission. This will allow an overall checksum to be maintained across multiple submissions. For LZ4, checksum seeding is not supported. If checksums need to be maintained between LZ4 requests then the session based API must be used. The no session API supports data integrity checksums, but as stateful operation is not supported the integrity checksums will always be for the current request only.

The no session API does support stateless overflow, in the compression direction only, like the session based API. In that case the consumed, produced and checksum fields within the CpaDcRqResults structure will be valid when a status of CPA_DC_OVERFLOW is returned. It is the application's responsibility to arrange data buffers for the next submission, ensure the checksum is seeded and maintain an overall count of the bytes consumed if footer generation is required. For performance reasons it is recommended that the compressBound API is used to size the destination buffer correctly to avoid overflow. It is necessary for an application to seed the checksum whether it wishes to continue the series of requests or to start a new one; only in the latter case, the seed is 0 for CRC32 and 1 for Adler32.


naivekun@97737a7

This has been working for about 30 minutes now. Without this patch it cannot run at all and panics within 10-60 seconds. I will continue IO stress testing with fio.


naivekun added a commit to naivekun/zfs that referenced this issue Mar 11, 2023
CpaDcRqResults has to be initialized with checksum=1 for adler32. Otherwise, when a CPA_DC_OVERFLOW error occurs, the next compress operation will continue from the previously part-compressed data and write invalid checksum data. When zfs decompresses that data, an invalid checksum occurs, leading to openzfs#14463

Signed-off-by: naivekun <naivekun0817@gmail.com>
Closes: openzfs#14463
naivekun added a commit to naivekun/zfs that referenced this issue Mar 11, 2023
CpaDcRqResults has to be initialized with checksum=1 for adler32.
Otherwise, when a CPA_DC_OVERFLOW error occurs, the next compress
operation will continue from the previously part-compressed data and
write invalid checksum data. When zfs decompresses that data, an
invalid checksum occurs, leading to openzfs#14463

Signed-off-by: naivekun <naivekun0817@gmail.com>
Closes: openzfs#14463

It has been working for 3 days under heavy IO load.

behlendorf pushed a commit to behlendorf/zfs that referenced this issue Mar 16, 2023
CpaDcRqResults has to be initialized with checksum=1 for adler32.
Otherwise, when a CPA_DC_OVERFLOW error occurs, the next compress
operation will continue from the previously part-compressed data and
write invalid checksum data. When zfs decompresses that data, an
invalid checksum occurs, leading to openzfs#14463

Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Reviewed-by: Weigang Li <weigang.li@intel.com>
Reviewed-by: Chengfei Zhu <chengfeix.zhu@intel.com>
Signed-off-by: naivekun <naivekun0817@gmail.com>
Closes openzfs#14632
Closes openzfs#14463
behlendorf pushed a commit that referenced this issue Mar 17, 2023
CpaDcRqResults has to be initialized with checksum=1 for adler32.
Otherwise, when a CPA_DC_OVERFLOW error occurs, the next compress
operation will continue from the previously part-compressed data and
write invalid checksum data. When zfs decompresses that data, an
invalid checksum occurs, leading to #14463

Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Reviewed-by: Weigang Li <weigang.li@intel.com>
Reviewed-by: Chengfei Zhu <chengfeix.zhu@intel.com>
Signed-off-by: naivekun <naivekun0817@gmail.com>
Closes #14632
Closes #14463
andrewc12 pushed a commit to andrewc12/openzfs that referenced this issue Mar 17, 2023
CpaDcRqResults has to be initialized with checksum=1 for adler32.
Otherwise, when a CPA_DC_OVERFLOW error occurs, the next compress
operation will continue from the previously part-compressed data and
write invalid checksum data. When zfs decompresses that data, an
invalid checksum occurs, leading to openzfs#14463

Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Reviewed-by: Weigang Li <weigang.li@intel.com>
Reviewed-by: Chengfei Zhu <chengfeix.zhu@intel.com>
Signed-off-by: naivekun <naivekun0817@gmail.com>
Closes openzfs#14632
Closes openzfs#14463
pcd1193182 pushed a commit to pcd1193182/zfs that referenced this issue Sep 26, 2023
CpaDcRqResults has to be initialized with checksum=1 for adler32.
Otherwise, when a CPA_DC_OVERFLOW error occurs, the next compress
operation will continue from the previously part-compressed data and
write invalid checksum data. When zfs decompresses that data, an
invalid checksum occurs, leading to openzfs#14463

Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Reviewed-by: Weigang Li <weigang.li@intel.com>
Reviewed-by: Chengfei Zhu <chengfeix.zhu@intel.com>
Signed-off-by: naivekun <naivekun0817@gmail.com>
Closes openzfs#14632
Closes openzfs#14463