Crash during fxbench #1

Open
stevenjswanson opened this issue Jun 28, 2017 · 5 comments

@stevenjswanson
Member

From @stevenjswanson on June 13, 2017 23:2

I'm getting a consistent crash while running fxbench on the test machine.

commit f5ccaacfda12289ace1f53109ad3104bbc43e3ca

dmesg:

[ 938.940146] nova: Current epoch id: 0
[ 938.940205] nova: nova_save_inode_list_to_log: 18 inode nodes, pi head 0x333835000, tail 0x333835120
[ 938.940269] nova: nova_save_blocknode_mappings_to_log: 279 blocknodes, 2 log pages, pi head 0x333836000, tail 0x333837190
[ 938.949153] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 938.949155] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 938.949259] nova: Start NOVA snapshot cleaner thread.
[ 938.949267] nova: Running snapshot cleaner thread
[ 938.949294] nova: nova_init_inode_list_from_inode: 18 inode nodes
[ 938.949296] nova: Recovered 0 snapshots, latest epoch ID 0
[ 938.949296] nova: NOVA: Normal shutdown
[ 938.951513] nova: Current epoch id: 0
[ 938.968640] run fstests generic/311 at 2017-06-13 21:24:57
[ 939.127874] run fstests generic/312 at 2017-06-13 21:24:57
[ 939.286753] run fstests generic/313 at 2017-06-13 21:24:58
[ 943.482926] nova: Current epoch id: 0
[ 943.482978] nova: nova_save_inode_list_to_log: 18 inode nodes, pi head 0x333835000, tail 0x333835120
[ 943.483038] nova: nova_save_blocknode_mappings_to_log: 279 blocknodes, 2 log pages, pi head 0x333836000, tail 0x333837190
[ 943.491356] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 943.491359] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 943.491449] nova: Start NOVA snapshot cleaner thread.
[ 943.491455] nova: Running snapshot cleaner thread
[ 943.491483] nova: nova_init_inode_list_from_inode: 18 inode nodes
[ 943.491486] nova: Recovered 0 snapshots, latest epoch ID 0
[ 943.491486] nova: NOVA: Normal shutdown
[ 943.493635] nova: Current epoch id: 0
[ 943.510833] run fstests generic/314 at 2017-06-13 21:25:02
[ 943.706188] run fstests generic/315 at 2017-06-13 21:25:02
[ 943.887646] run fstests generic/316 at 2017-06-13 21:25:02
[ 944.069018] run fstests generic/317 at 2017-06-13 21:25:02
[ 944.246876] nova: Current epoch id: 0
[ 944.246960] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x2de65f000, tail 0x2de65f080
[ 944.246987] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x2de660000, tail 0x2de660390
[ 944.312210] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 944.312212] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 944.312326] nova: Start NOVA snapshot cleaner thread.
[ 944.312333] nova: Running snapshot cleaner thread
[ 944.312348] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 944.312350] nova: Recovered 0 snapshots, latest epoch ID 0
[ 944.312350] nova: NOVA: Normal shutdown
[ 944.315444] nova: Current epoch id: 0
[ 944.338880] nova: Current epoch id: 0
[ 944.338930] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xe65e000, tail 0xe65e080
[ 944.338945] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0xe65f000, tail 0xe65f390
[ 944.344853] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 944.344855] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 944.344945] nova: Start NOVA snapshot cleaner thread.
[ 944.344959] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 944.344960] nova: Recovered 0 snapshots, latest epoch ID 0
[ 944.344961] nova: NOVA: Normal shutdown
[ 944.344980] nova: Running snapshot cleaner thread
[ 944.348049] nova: Current epoch id: 0
[ 944.386798] nova: Current epoch id: 0
[ 944.386843] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 944.386858] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 944.392985] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 944.392987] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 944.393121] nova: Start NOVA snapshot cleaner thread.
[ 944.393128] nova: Running snapshot cleaner thread
[ 944.393135] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 944.393137] nova: Recovered 0 snapshots, latest epoch ID 0
[ 944.393137] nova: NOVA: Normal shutdown
[ 944.396217] nova: Current epoch id: 0
[ 944.418799] nova: Current epoch id: 0
[ 944.418857] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 944.418872] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658390
[ 944.459280] run fstests generic/318 at 2017-06-13 21:25:03
[ 944.680721] run fstests generic/319 at 2017-06-13 21:25:03
[ 944.877109] run fstests generic/320 at 2017-06-13 21:25:03
[ 945.077249] run fstests generic/321 at 2017-06-13 21:25:03
[ 945.245400] run fstests generic/322 at 2017-06-13 21:25:04
[ 945.406189] run fstests generic/323 at 2017-06-13 21:25:04
[ 1065.919146] nova: Current epoch id: 0
[ 1065.919204] nova: nova_save_inode_list_to_log: 17 inode nodes, pi head 0xb37c1000, tail 0xb37c1110
[ 1065.919262] nova: nova_save_blocknode_mappings_to_log: 266 blocknodes, 2 log pages, pi head 0xb37c3000, tail 0xb37c40c0
[ 1065.928506] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 1065.928508] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1065.928591] nova: Start NOVA snapshot cleaner thread.
[ 1065.928599] nova: Running snapshot cleaner thread
[ 1065.928625] nova: nova_init_inode_list_from_inode: 17 inode nodes
[ 1065.928628] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1065.928628] nova: NOVA: Normal shutdown
[ 1065.930833] nova: Current epoch id: 0
[ 1065.948750] run fstests generic/324 at 2017-06-13 21:27:04
[ 1066.136179] run fstests generic/325 at 2017-06-13 21:27:04
[ 1066.321253] run fstests generic/326 at 2017-06-13 21:27:05
[ 1066.511037] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1066.511039] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1066.511108] nova: Start NOVA snapshot cleaner thread.
[ 1066.511114] nova: Running snapshot cleaner thread
[ 1066.511122] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1066.511125] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1066.511125] nova: NOVA: Normal shutdown
[ 1066.514297] nova: Current epoch id: 0
[ 1066.546957] nova: Current epoch id: 0
[ 1066.547003] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1066.547019] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1066.552817] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1066.552818] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1066.552951] nova: Start NOVA snapshot cleaner thread.
[ 1066.552960] nova: Running snapshot cleaner thread
[ 1066.552964] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1066.552966] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1066.552967] nova: NOVA: Normal shutdown
[ 1066.556085] nova: Current epoch id: 0
[ 1066.582945] nova: Current epoch id: 0
[ 1066.583019] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x356669000, tail 0x356669080
[ 1066.583040] nova: nova_save_blocknode_mappings_to_log: 55 blocknodes, 1 log pages, pi head 0x35666a000, tail 0x35666a370
[ 1066.605550] run fstests generic/327 at 2017-06-13 21:27:05
[ 1066.785492] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1066.785495] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1066.785617] nova: Start NOVA snapshot cleaner thread.
[ 1066.785628] nova: Running snapshot cleaner thread
[ 1066.785639] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1066.785642] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1066.785643] nova: NOVA: Normal shutdown
[ 1066.789768] nova: Current epoch id: 0
[ 1066.814976] nova: Current epoch id: 0
[ 1066.815046] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x356656000, tail 0x356656080
[ 1066.815068] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x356657000, tail 0x356657390
[ 1066.821008] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1066.821010] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1066.821110] nova: Start NOVA snapshot cleaner thread.
[ 1066.821118] nova: Running snapshot cleaner thread
[ 1066.821129] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1066.821132] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1066.821133] nova: NOVA: Normal shutdown
[ 1066.824722] nova: Current epoch id: 0
[ 1066.850850] nova: Current epoch id: 0
[ 1066.850892] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xe65e000, tail 0xe65e080
[ 1066.850907] nova: nova_save_blocknode_mappings_to_log: 55 blocknodes, 1 log pages, pi head 0xe65f000, tail 0xe65f370
[ 1066.873287] run fstests generic/328 at 2017-06-13 21:27:05
[ 1067.056916] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1067.056920] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1067.057021] nova: Start NOVA snapshot cleaner thread.
[ 1067.057030] nova: Running snapshot cleaner thread
[ 1067.057050] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1067.057053] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1067.057054] nova: NOVA: Normal shutdown
[ 1067.060511] nova: Current epoch id: 0
[ 1067.087064] nova: Current epoch id: 0
[ 1067.087124] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x356656000, tail 0x356656080
[ 1067.087150] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x356657000, tail 0x356657390
[ 1067.092794] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1067.092797] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1067.092945] nova: Start NOVA snapshot cleaner thread.
[ 1067.092960] nova: Running snapshot cleaner thread
[ 1067.092961] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1067.092963] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1067.092964] nova: NOVA: Normal shutdown
[ 1067.096110] nova: Current epoch id: 0
[ 1067.126872] nova: Current epoch id: 0
[ 1067.126938] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1067.126952] nova: nova_save_blocknode_mappings_to_log: 51 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658330
[ 1067.148881] run fstests generic/329 at 2017-06-13 21:27:05
[ 1067.397610] run fstests generic/330 at 2017-06-13 21:27:06
[ 1067.576008] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1067.576010] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1067.576136] nova: Start NOVA snapshot cleaner thread.
[ 1067.576144] nova: Running snapshot cleaner thread
[ 1067.576155] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1067.576157] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1067.576157] nova: NOVA: Normal shutdown
[ 1067.579252] nova: Current epoch id: 0
[ 1067.614897] nova: Current epoch id: 0
[ 1067.614951] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x2de65f000, tail 0x2de65f080
[ 1067.614966] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x2de660000, tail 0x2de660390
[ 1067.620483] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1067.620485] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1067.620646] nova: Start NOVA snapshot cleaner thread.
[ 1067.620651] nova: Running snapshot cleaner thread
[ 1067.620661] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1067.620663] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1067.620664] nova: NOVA: Normal shutdown
[ 1067.624021] nova: Current epoch id: 0
[ 1067.642863] nova: Current epoch id: 0
[ 1067.642910] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xfe656000, tail 0xfe656080
[ 1067.642925] nova: nova_save_blocknode_mappings_to_log: 54 blocknodes, 1 log pages, pi head 0xfe658000, tail 0xfe658360
[ 1067.665110] run fstests generic/331 at 2017-06-13 21:27:06
[ 1067.843563] run fstests generic/332 at 2017-06-13 21:27:06
[ 1068.039758] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1068.039761] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1068.039859] nova: Start NOVA snapshot cleaner thread.
[ 1068.039867] nova: Running snapshot cleaner thread
[ 1068.039872] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1068.039874] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1068.039874] nova: NOVA: Normal shutdown
[ 1068.042975] nova: Current epoch id: 0
[ 1068.071052] nova: Current epoch id: 0
[ 1068.071125] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xfe656000, tail 0xfe656080
[ 1068.071147] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0xfe658000, tail 0xfe658380
[ 1068.077578] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1068.077582] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1068.077667] nova: Start NOVA snapshot cleaner thread.
[ 1068.077675] nova: Running snapshot cleaner thread
[ 1068.077688] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1068.077692] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1068.077692] nova: NOVA: Normal shutdown
[ 1068.081128] nova: Current epoch id: 0
[ 1068.102863] nova: Current epoch id: 0
[ 1068.102910] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1068.102930] nova: nova_save_blocknode_mappings_to_log: 52 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658340
[ 1068.125787] run fstests generic/333 at 2017-06-13 21:27:06
[ 1068.319498] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1068.319500] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1068.319616] nova: Start NOVA snapshot cleaner thread.
[ 1068.319624] nova: Running snapshot cleaner thread
[ 1068.319629] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1068.319632] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1068.319632] nova: NOVA: Normal shutdown
[ 1068.322663] nova: Current epoch id: 0
[ 1068.346894] nova: Current epoch id: 0
[ 1068.346952] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xe65e000, tail 0xe65e080
[ 1068.346966] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0xe65f000, tail 0xe65f390
[ 1068.352817] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1068.352819] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1068.352903] nova: Start NOVA snapshot cleaner thread.
[ 1068.352912] nova: Running snapshot cleaner thread
[ 1068.352916] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1068.352918] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1068.352919] nova: NOVA: Normal shutdown
[ 1068.356263] nova: Current epoch id: 0
[ 1068.378913] nova: Current epoch id: 0
[ 1068.378954] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x86656000, tail 0x86656080
[ 1068.378969] nova: nova_save_blocknode_mappings_to_log: 55 blocknodes, 1 log pages, pi head 0x86658000, tail 0x86658370
[ 1068.400231] run fstests generic/334 at 2017-06-13 21:27:07
[ 1068.592265] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1068.592267] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1068.592361] nova: Start NOVA snapshot cleaner thread.
[ 1068.592369] nova: Running snapshot cleaner thread
[ 1068.592378] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1068.592380] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1068.592380] nova: NOVA: Normal shutdown
[ 1068.595511] nova: Current epoch id: 0
[ 1068.614923] nova: Current epoch id: 0
[ 1068.614948] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x2de65f000, tail 0x2de65f080
[ 1068.614964] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x2de660000, tail 0x2de660390
[ 1068.620148] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1068.620172] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1068.620248] nova: Start NOVA snapshot cleaner thread.
[ 1068.620256] nova: Running snapshot cleaner thread
[ 1068.620261] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1068.620263] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1068.620263] nova: NOVA: Normal shutdown
[ 1068.623323] nova: Current epoch id: 0
[ 1068.650869] nova: Current epoch id: 0
[ 1068.650921] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xe65e000, tail 0xe65e080
[ 1068.650936] nova: nova_save_blocknode_mappings_to_log: 55 blocknodes, 1 log pages, pi head 0xe65f000, tail 0xe65f370
[ 1068.671959] run fstests generic/335 at 2017-06-13 21:27:07
[ 1068.869682] run fstests generic/336 at 2017-06-13 21:27:07
[ 1069.045560] run fstests generic/337 at 2017-06-13 21:27:07
[ 1069.240938] run fstests generic/338 at 2017-06-13 21:27:08
[ 1069.416588] run fstests generic/339 at 2017-06-13 21:27:08
[ 1069.599025] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1069.599027] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1069.599103] nova: Start NOVA snapshot cleaner thread.
[ 1069.599111] nova: Running snapshot cleaner thread
[ 1069.599117] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1069.599119] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1069.599119] nova: NOVA: Normal shutdown
[ 1069.602230] nova: Current epoch id: 0
[ 1069.622894] nova: Current epoch id: 0
[ 1069.622945] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1069.622961] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1069.629093] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1069.629095] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1069.629175] nova: Start NOVA snapshot cleaner thread.
[ 1069.629184] nova: Running snapshot cleaner thread
[ 1069.629199] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1069.629202] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1069.629202] nova: NOVA: Normal shutdown
[ 1069.632320] nova: Current epoch id: 0
[ 1070.151657] nova: Current epoch id: 0
[ 1070.151782] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x86656000, tail 0x86656080
[ 1070.151809] nova: nova_save_blocknode_mappings_to_log: 50 blocknodes, 1 log pages, pi head 0x86658000, tail 0x86658320
[ 1070.222316] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1070.222319] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1070.222456] nova: Start NOVA snapshot cleaner thread.
[ 1070.222465] nova: Running snapshot cleaner thread
[ 1070.222474] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1070.222477] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1070.222478] nova: NOVA: Normal shutdown
[ 1070.225938] nova: Current epoch id: 0
[ 1070.538885] nova: Current epoch id: 0
[ 1070.538935] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1070.538950] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1070.547711] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1070.547713] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1070.547783] nova: Start NOVA snapshot cleaner thread.
[ 1070.547789] nova: Running snapshot cleaner thread
[ 1070.547803] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1070.547805] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1070.547806] nova: NOVA: Normal shutdown
[ 1070.551066] nova: Current epoch id: 0
[ 1070.572797] run fstests generic/340 at 2017-06-13 21:27:09
[ 1070.774863] nova: Current epoch id: 0
[ 1070.774917] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1070.774932] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1070.784350] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1070.784352] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1070.784477] nova: Start NOVA snapshot cleaner thread.
[ 1070.784486] nova: Running snapshot cleaner thread
[ 1070.784492] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1070.784494] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1070.784495] nova: NOVA: Normal shutdown
[ 1070.787629] nova: Current epoch id: 0
[ 1070.818912] nova: Current epoch id: 0
[ 1070.818955] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1070.818971] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658390
[ 1070.824718] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1070.824720] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1070.824846] nova: Start NOVA snapshot cleaner thread.
[ 1070.824849] nova: Running snapshot cleaner thread
[ 1070.824866] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1070.824869] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1070.824869] nova: NOVA: Normal shutdown
[ 1070.828069] nova: Current epoch id: 0
[ 1072.966874] nova: Current epoch id: 0
[ 1072.966974] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xfe656000, tail 0xfe656080
[ 1072.966990] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0xfe658000, tail 0xfe658380
[ 1072.978028] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1072.978030] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1072.978187] nova: Start NOVA snapshot cleaner thread.
[ 1072.978196] nova: Running snapshot cleaner thread
[ 1072.978219] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1072.978223] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1072.978223] nova: NOVA: Normal shutdown
[ 1072.982117] nova: Current epoch id: 0
[ 1073.004191] run fstests generic/341 at 2017-06-13 21:27:11
[ 1073.193290] run fstests generic/342 at 2017-06-13 21:27:12
[ 1073.366135] run fstests generic/343 at 2017-06-13 21:27:12
[ 1073.548467] run fstests generic/344 at 2017-06-13 21:27:12
[ 1073.734861] nova: Current epoch id: 0
[ 1073.734911] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xe65e000, tail 0xe65e080
[ 1073.734927] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0xe65f000, tail 0xe65f390
[ 1073.769472] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1073.769475] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1073.769600] nova: Start NOVA snapshot cleaner thread.
[ 1073.769607] nova: Running snapshot cleaner thread
[ 1073.769614] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1073.769616] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1073.769617] nova: NOVA: Normal shutdown
[ 1073.772768] nova: Current epoch id: 0
[ 1073.806892] nova: Current epoch id: 0
[ 1073.806942] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x86656000, tail 0x86656080
[ 1073.806957] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0x86658000, tail 0x86658380
[ 1073.813063] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1073.813065] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1073.813164] nova: Start NOVA snapshot cleaner thread.
[ 1073.813170] nova: Running snapshot cleaner thread
[ 1073.813177] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1073.813179] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1073.813180] nova: NOVA: Normal shutdown
[ 1073.816341] nova: Current epoch id: 0
[ 1079.166886] nova: Current epoch id: 0
[ 1079.166948] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1079.166972] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658390
[ 1079.178443] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1079.178447] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1079.178532] nova: Start NOVA snapshot cleaner thread.
[ 1079.178541] nova: Running snapshot cleaner thread
[ 1079.178557] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1079.178560] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1079.178560] nova: NOVA: Normal shutdown
[ 1079.182573] nova: Current epoch id: 0
[ 1079.205618] run fstests generic/345 at 2017-06-13 21:27:18
[ 1079.394860] nova: Current epoch id: 0
[ 1079.394918] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xe65e000, tail 0xe65e080
[ 1079.394934] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0xe65f000, tail 0xe65f390
[ 1079.497841] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1079.497843] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1079.497946] nova: Start NOVA snapshot cleaner thread.
[ 1079.497953] nova: Running snapshot cleaner thread
[ 1079.497970] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1079.497972] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1079.497973] nova: NOVA: Normal shutdown
[ 1079.501139] nova: Current epoch id: 0
[ 1079.546880] nova: Current epoch id: 0
[ 1079.546924] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x356656000, tail 0x356656080
[ 1079.546939] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x356657000, tail 0x356657390
[ 1079.552858] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1079.552860] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1079.552967] nova: Start NOVA snapshot cleaner thread.
[ 1079.552976] nova: Running snapshot cleaner thread
[ 1079.552982] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1079.552984] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1079.552984] nova: NOVA: Normal shutdown
[ 1079.556118] nova: Current epoch id: 0
[ 1085.162846] nova: Current epoch id: 0
[ 1085.162895] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1085.162912] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658390
[ 1085.171620] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1085.171622] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1085.171716] nova: Start NOVA snapshot cleaner thread.
[ 1085.171726] nova: Running snapshot cleaner thread
[ 1085.171729] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1085.171732] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1085.171732] nova: NOVA: Normal shutdown
[ 1085.174782] nova: Current epoch id: 0
[ 1085.194676] run fstests generic/346 at 2017-06-13 21:27:24
[ 1085.366871] nova: Current epoch id: 0
[ 1085.366927] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1085.366943] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658390
[ 1085.374375] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1085.374377] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1085.374452] nova: Start NOVA snapshot cleaner thread.
[ 1085.374459] nova: Running snapshot cleaner thread
[ 1085.374466] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1085.374468] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1085.374469] nova: NOVA: Normal shutdown
[ 1085.377579] nova: Current epoch id: 0
[ 1085.410901] nova: Current epoch id: 0
[ 1085.410952] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1085.410981] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1085.416437] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1085.416439] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1085.416522] nova: Start NOVA snapshot cleaner thread.
[ 1085.416530] nova: Running snapshot cleaner thread
[ 1085.416536] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1085.416539] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1085.416539] nova: NOVA: Normal shutdown
[ 1085.419722] nova: Current epoch id: 0
[ 1087.614854] nova: Current epoch id: 0
[ 1087.614890] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1087.614905] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658390
[ 1087.623534] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1087.623536] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1087.623607] nova: Start NOVA snapshot cleaner thread.
[ 1087.623614] nova: Running snapshot cleaner thread
[ 1087.623620] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1087.623622] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1087.623623] nova: NOVA: Normal shutdown
[ 1087.626739] nova: Current epoch id: 0
[ 1087.697705] run fstests generic/347 at 2017-06-13 21:27:26
[ 1087.902833] nova: Current epoch id: 0
[ 1087.902885] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xfe656000, tail 0xfe656080
[ 1087.902901] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0xfe658000, tail 0xfe658380
[ 1087.925655] run fstests generic/348 at 2017-06-13 21:27:26
[ 1088.082516] run fstests generic/349 at 2017-06-13 21:27:26
[ 1088.236982] run fstests generic/350 at 2017-06-13 21:27:27
[ 1088.390177] run fstests generic/351 at 2017-06-13 21:27:27
[ 1088.545075] run fstests generic/352 at 2017-06-13 21:27:27
[ 1088.708163] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1088.708165] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1088.708263] nova: Start NOVA snapshot cleaner thread.
[ 1088.708271] nova: Running snapshot cleaner thread
[ 1088.708277] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1088.708280] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1088.708280] nova: NOVA: Normal shutdown
[ 1088.711496] nova: Current epoch id: 0
[ 1088.738906] nova: Current epoch id: 0
[ 1088.738959] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x86656000, tail 0x86656080
[ 1088.738974] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0x86658000, tail 0x86658380
[ 1088.744574] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1088.744576] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1088.744651] nova: Start NOVA snapshot cleaner thread.
[ 1088.744659] nova: Running snapshot cleaner thread
[ 1088.744664] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1088.744667] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1088.744667] nova: NOVA: Normal shutdown
[ 1088.747792] nova: Current epoch id: 0
[ 1088.770909] nova: Current epoch id: 0
[ 1088.770953] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1088.770967] nova: nova_save_blocknode_mappings_to_log: 53 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658350
[ 1088.790859] run fstests generic/353 at 2017-06-13 21:27:27
[ 1088.957662] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1088.957664] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1088.957774] nova: Start NOVA snapshot cleaner thread.
[ 1088.957782] nova: Running snapshot cleaner thread
[ 1088.957787] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1088.957789] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1088.957789] nova: NOVA: Normal shutdown
[ 1088.960987] nova: Current epoch id: 0
[ 1088.998904] nova: Current epoch id: 0
[ 1088.998959] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1088.998979] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1089.004556] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1089.004558] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1089.004644] nova: Start NOVA snapshot cleaner thread.
[ 1089.004658] nova: Running snapshot cleaner thread
[ 1089.004663] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1089.004666] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1089.004666] nova: NOVA: Normal shutdown
[ 1089.007765] nova: Current epoch id: 0
[ 1089.034844] nova: Current epoch id: 0
[ 1089.034882] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1ee657000, tail 0x1ee657080
[ 1089.034897] nova: nova_save_blocknode_mappings_to_log: 55 blocknodes, 1 log pages, pi head 0x1ee658000, tail 0x1ee658370
[ 1089.055372] run fstests generic/354 at 2017-06-13 21:27:27
[ 1089.207157] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1089.207159] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1089.207268] nova: Start NOVA snapshot cleaner thread.
[ 1089.207275] nova: Running snapshot cleaner thread
[ 1089.207282] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1089.207284] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1089.207284] nova: NOVA: Normal shutdown
[ 1089.210490] nova: Current epoch id: 0
[ 1089.234889] nova: Current epoch id: 0
[ 1089.234992] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x266657000, tail 0x266657080
[ 1089.235007] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x266658000, tail 0x266658390
[ 1089.240227] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1089.240229] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1089.240326] nova: Start NOVA snapshot cleaner thread.
[ 1089.240332] nova: Running snapshot cleaner thread
[ 1089.240340] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1089.240342] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1089.240342] nova: NOVA: Normal shutdown
[ 1089.243491] nova: Current epoch id: 0
[ 1092.574846] nova: Current epoch id: 0
[ 1092.574902] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x86656000, tail 0x86656080
[ 1092.574918] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0x86658000, tail 0x86658380
[ 1092.583834] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1092.583836] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1092.583933] nova: Start NOVA snapshot cleaner thread.
[ 1092.583941] nova: Running snapshot cleaner thread
[ 1092.583947] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1092.583950] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1092.583950] nova: NOVA: Normal shutdown
[ 1092.587081] nova: Current epoch id: 0
[ 1092.607768] run fstests generic/355 at 2017-06-13 21:27:31
[ 1092.887050] nova: Current epoch id: 0
[ 1092.887110] nova: nova_save_inode_list_to_log: 17 inode nodes, pi head 0x1379c000, tail 0x1379c110
[ 1092.887182] nova: nova_save_blocknode_mappings_to_log: 269 blocknodes, 2 log pages, pi head 0x1379e000, tail 0x1379f0f0
[ 1092.897892] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 1092.897895] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1092.898029] nova: Start NOVA snapshot cleaner thread.
[ 1092.898037] nova: Running snapshot cleaner thread
[ 1092.898094] nova: nova_init_inode_list_from_inode: 17 inode nodes
[ 1092.898097] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1092.898098] nova: NOVA: Normal shutdown
[ 1092.901557] nova: Current epoch id: 0
[ 1092.924629] run fstests generic/356 at 2017-06-13 21:27:31
[ 1093.122838] nova: Current epoch id: 0
[ 1093.122909] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1093.122925] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1093.135354] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.135356] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.135554] nova: Start NOVA snapshot cleaner thread.
[ 1093.135560] nova: Running snapshot cleaner thread
[ 1093.135575] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.135577] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.135577] nova: NOVA: Normal shutdown
[ 1093.138839] nova: Current epoch id: 0
[ 1093.158985] nova: Current epoch id: 0
[ 1093.159036] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x266657000, tail 0x266657080
[ 1093.159052] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x266658000, tail 0x266658390
[ 1093.164656] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.164658] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.164773] nova: Start NOVA snapshot cleaner thread.
[ 1093.164782] nova: Running snapshot cleaner thread
[ 1093.164787] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.164789] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.164789] nova: NOVA: Normal shutdown
[ 1093.167915] nova: Current epoch id: 0
[ 1093.198861] nova: Current epoch id: 0
[ 1093.198900] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x266657000, tail 0x266657080
[ 1093.198918] nova: nova_save_blocknode_mappings_to_log: 52 blocknodes, 1 log pages, pi head 0x266658000, tail 0x266658340
[ 1093.223050] run fstests generic/357 at 2017-06-13 21:27:32
[ 1093.401695] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.401697] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.401788] nova: Start NOVA snapshot cleaner thread.
[ 1093.401804] nova: Running snapshot cleaner thread
[ 1093.401805] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.401807] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.401807] nova: NOVA: Normal shutdown
[ 1093.404932] nova: Current epoch id: 0
[ 1093.430893] nova: Current epoch id: 0
[ 1093.430954] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1093.430970] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1093.436267] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.436269] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.436345] nova: Start NOVA snapshot cleaner thread.
[ 1093.436354] nova: Running snapshot cleaner thread
[ 1093.436359] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.436361] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.436362] nova: NOVA: Normal shutdown
[ 1093.439512] nova: Current epoch id: 0
[ 1093.458845] nova: Current epoch id: 0
[ 1093.458888] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x86656000, tail 0x86656080
[ 1093.458902] nova: nova_save_blocknode_mappings_to_log: 50 blocknodes, 1 log pages, pi head 0x86658000, tail 0x86658320
[ 1093.480025] run fstests generic/358 at 2017-06-13 21:27:32
[ 1093.643898] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.643900] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.643988] nova: Start NOVA snapshot cleaner thread.
[ 1093.643997] nova: Running snapshot cleaner thread
[ 1093.644002] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.644004] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.644004] nova: NOVA: Normal shutdown
[ 1093.647140] nova: Current epoch id: 0
[ 1093.674890] nova: Current epoch id: 0
[ 1093.674944] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x2de65f000, tail 0x2de65f080
[ 1093.674960] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x2de660000, tail 0x2de660390
[ 1093.680164] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.680166] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.680243] nova: Start NOVA snapshot cleaner thread.
[ 1093.680249] nova: Running snapshot cleaner thread
[ 1093.680262] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.680264] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.680264] nova: NOVA: Normal shutdown
[ 1093.683428] nova: Current epoch id: 0
[ 1093.702870] nova: Current epoch id: 0
[ 1093.702917] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x266657000, tail 0x266657080
[ 1093.702931] nova: nova_save_blocknode_mappings_to_log: 52 blocknodes, 1 log pages, pi head 0x266658000, tail 0x266658340
[ 1093.722662] run fstests generic/359 at 2017-06-13 21:27:32
[ 1093.886219] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.886221] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.886334] nova: Start NOVA snapshot cleaner thread.
[ 1093.886343] nova: Running snapshot cleaner thread
[ 1093.886348] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.886350] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.886350] nova: NOVA: Normal shutdown
[ 1093.889553] nova: Current epoch id: 0
[ 1093.914913] nova: Current epoch id: 0
[ 1093.914961] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x176656000, tail 0x176656080
[ 1093.914975] nova: nova_save_blocknode_mappings_to_log: 57 blocknodes, 1 log pages, pi head 0x176657000, tail 0x176657380
[ 1093.920158] nova: nova_get_nvmm_info: dev pmem1, phys_addr 0x780000000, virt_addr ffff960100000000, size 16106127360
[ 1093.920160] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1093.920227] nova: Start NOVA snapshot cleaner thread.
[ 1093.920235] nova: Running snapshot cleaner thread
[ 1093.920239] nova: nova_init_inode_list_from_inode: 8 inode nodes
[ 1093.920242] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1093.920242] nova: NOVA: Normal shutdown
[ 1093.923354] nova: Current epoch id: 0
[ 1093.950847] nova: Current epoch id: 0
[ 1093.950894] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x266657000, tail 0x266657080
[ 1093.950910] nova: nova_save_blocknode_mappings_to_log: 56 blocknodes, 1 log pages, pi head 0x266658000, tail 0x266658380
[ 1093.970844] run fstests generic/360 at 2017-06-13 21:27:32
[ 1094.186987] nova: Current epoch id: 0
[ 1094.187038] nova: nova_save_inode_list_to_log: 17 inode nodes, pi head 0x1379c000, tail 0x1379c110
[ 1094.187097] nova: nova_save_blocknode_mappings_to_log: 269 blocknodes, 2 log pages, pi head 0x1379e000, tail 0x1379f0f0
[ 1094.195687] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 1094.195689] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 1094.195793] nova: Start NOVA snapshot cleaner thread.
[ 1094.195799] nova: Running snapshot cleaner thread
[ 1094.195845] nova: nova_init_inode_list_from_inode: 17 inode nodes
[ 1094.195847] nova: Recovered 0 snapshots, latest epoch ID 0
[ 1094.195848] nova: NOVA: Normal shutdown
[ 1094.198051] nova: Current epoch id: 0
[ 1094.217723] run fstests generic/361 at 2017-06-13 21:27:33
[ 1094.390332] run fstests generic/362 at 2017-06-13 21:27:33
[ 1094.555244] run fstests generic/363 at 2017-06-13 21:27:33
[ 1094.732231] run fstests generic/364 at 2017-06-13 21:27:33
[ 1094.918696] run fstests generic/365 at 2017-06-13 21:27:33
[ 1095.112954] run fstests generic/366 at 2017-06-13 21:27:33
[ 1095.313351] run fstests generic/367 at 2017-06-13 21:27:34
[ 1095.525719] run fstests generic/368 at 2017-06-13 21:27:34
[ 1095.765772] run fstests generic/369 at 2017-06-13 21:27:34
[ 1095.979956] run fstests generic/370 at 2017-06-13 21:27:34
[ 1096.203190] run fstests shared/001 at 2017-06-13 21:27:35
[ 1096.411449] run fstests shared/002 at 2017-06-13 21:27:35
[ 1096.599096] run fstests shared/003 at 2017-06-13 21:27:35
[ 1096.785760] run fstests shared/004 at 2017-06-13 21:27:35
[ 1096.968559] run fstests shared/006 at 2017-06-13 21:27:35
[ 1097.149328] run fstests shared/032 at 2017-06-13 21:27:35
[ 1097.349243] run fstests shared/051 at 2017-06-13 21:27:36
[ 1097.535033] run fstests shared/272 at 2017-06-13 21:27:36
[ 1097.705330] run fstests shared/289 at 2017-06-13 21:27:36
[ 1097.885625] run fstests shared/298 at 2017-06-13 21:27:36
[ 1098.114981] nova: Current epoch id: 0
[ 1098.115039] nova: nova_save_inode_list_to_log: 17 inode nodes, pi head 0x333837000, tail 0x333837110
[ 1098.115097] nova: nova_save_blocknode_mappings_to_log: 270 blocknodes, 2 log pages, pi head 0x333838000, tail 0x333839100
[ 2233.291562] EXT4-fs (loop1): mounting ext3 file system using the ext4 subsystem
[ 2233.294797] EXT4-fs (loop1): mounted filesystem with ordered data mode. Opts: usrquota,grpquota
[ 2233.329583] EXT4-fs (loop1): re-mounted. Opts: (null)
[ 2233.333116] EXT4-fs (loop1): re-mounted. Opts: (null)
[ 2240.923460] drop-caches (25336): drop_caches: 3
[ 2242.464826] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2242.464829] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2242.464981] nova: Start NOVA snapshot cleaner thread.
[ 2242.464983] nova: creating an empty nova of size 21474836480
[ 2242.464989] nova: Running snapshot cleaner thread
[ 2242.467095] nova: NOVA initialization finish
[ 2242.467106] nova: Current epoch id: 0
[ 2242.509084] drop-caches (25456): drop_caches: 3
[ 2251.797477] nova: nova_cow_file_write alloc blocks failed -28
[ 2251.797479] nova: nova_cow_file_write alloc blocks failed -28
[ 2251.797481] nova: nova_cow_file_write alloc blocks failed -28
[ 2251.797482] nova: nova_cow_file_write alloc blocks failed -28
[ 2252.206307] drop-caches (25489): drop_caches: 3
[ 2253.331276] nova: Current epoch id: 0
[ 2253.331314] nova error:
[ 2253.331315] ERROR: no inode log page available: 1 -28
[ 2253.331316] nova: Error saving inode list: -28
[ 2253.331316] nova error:
[ 2253.331317] ERROR: no inode log page available: 0 -22
[ 2253.331317] nova: Error saving blocknode mappings: -22
[ 2253.411165] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2253.411167] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2253.411278] nova: Start NOVA snapshot cleaner thread.
[ 2253.411281] nova: creating an empty nova of size 21474836480
[ 2253.411286] nova: Running snapshot cleaner thread
[ 2253.413416] nova: NOVA initialization finish
[ 2253.413429] nova: Current epoch id: 0
[ 2253.465026] drop-caches (25511): drop_caches: 3
[ 2262.734509] nova: nova_cow_file_write alloc blocks failed -28
[ 2262.734513] nova: nova_cow_file_write alloc blocks failed -28
[ 2263.136655] drop-caches (25540): drop_caches: 3
[ 2264.539293] nova: Current epoch id: 0
[ 2264.539374] nova error:
[ 2264.539375] ERROR: no inode log page available: 1 -28
[ 2264.539376] nova: Error saving inode list: -28
[ 2264.539376] nova error:
[ 2264.539377] ERROR: no inode log page available: 0 -22
[ 2264.539377] nova: Error saving blocknode mappings: -22
[ 2264.600054] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2264.600057] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2264.600179] nova: Start NOVA snapshot cleaner thread.
[ 2264.600181] nova: creating an empty nova of size 21474836480
[ 2264.600187] nova: Running snapshot cleaner thread
[ 2264.602296] nova: NOVA initialization finish
[ 2264.602304] nova: Current epoch id: 0
[ 2264.679735] drop-caches (25562): drop_caches: 3
[ 2281.201397] drop-caches (25587): drop_caches: 3
[ 2282.451272] nova: Current epoch id: 0
[ 2282.451353] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1cdd56000, tail 0x1cdd56080
[ 2282.451360] nova: nova_save_blocknode_mappings_to_log: 7 blocknodes, 1 log pages, pi head 0x26dd56000, tail 0x26dd56070
[ 2282.505200] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2282.505203] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2282.505315] nova: Start NOVA snapshot cleaner thread.
[ 2282.505317] nova: creating an empty nova of size 21474836480
[ 2282.505323] nova: Running snapshot cleaner thread
[ 2282.507450] nova: NOVA initialization finish
[ 2282.507459] nova: Current epoch id: 0
[ 2282.551944] drop-caches (25609): drop_caches: 3
[ 2298.598691] drop-caches (25642): drop_caches: 3
[ 2299.791289] nova: Current epoch id: 0
[ 2299.791343] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327c000, tail 0x1327c080
[ 2299.791358] nova: nova_save_blocknode_mappings_to_log: 48 blocknodes, 1 log pages, pi head 0x1327f000, tail 0x1327f300
[ 2299.877244] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2299.877247] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2299.877327] nova: Start NOVA snapshot cleaner thread.
[ 2299.877329] nova: creating an empty nova of size 21474836480
[ 2299.877334] nova: Running snapshot cleaner thread
[ 2299.879495] nova: NOVA initialization finish
[ 2299.879503] nova: Current epoch id: 0
[ 2299.942202] drop-caches (25664): drop_caches: 3
[ 2316.245515] drop-caches (25691): drop_caches: 3
[ 2317.627292] nova: Current epoch id: 0
[ 2317.627366] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327c000, tail 0x1327c080
[ 2317.627377] nova: nova_save_blocknode_mappings_to_log: 28 blocknodes, 1 log pages, pi head 0x1327d000, tail 0x1327d1c0
[ 2317.698630] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2317.698632] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2317.698780] nova: Start NOVA snapshot cleaner thread.
[ 2317.698782] nova: creating an empty nova of size 21474836480
[ 2317.698790] nova: Running snapshot cleaner thread
[ 2317.701162] nova: NOVA initialization finish
[ 2317.701173] nova: Current epoch id: 0
[ 2317.747317] drop-caches (25713): drop_caches: 3
[ 2333.756977] drop-caches (25737): drop_caches: 3
[ 2335.039283] nova: Current epoch id: 0
[ 2335.039346] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13284000, tail 0x13284080
[ 2335.039354] nova: nova_save_blocknode_mappings_to_log: 17 blocknodes, 1 log pages, pi head 0x13288000, tail 0x13288110
[ 2335.095789] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2335.095791] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2335.095910] nova: Start NOVA snapshot cleaner thread.
[ 2335.095913] nova: creating an empty nova of size 21474836480
[ 2335.095918] nova: Running snapshot cleaner thread
[ 2335.098018] nova: NOVA initialization finish
[ 2335.098026] nova: Current epoch id: 0
[ 2335.161150] drop-caches (25759): drop_caches: 3
[ 2351.378847] drop-caches (25786): drop_caches: 3
[ 2352.651816] nova: Current epoch id: 0
[ 2352.651893] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13283000, tail 0x13283080
[ 2352.651901] nova: nova_save_blocknode_mappings_to_log: 16 blocknodes, 1 log pages, pi head 0x142d4000, tail 0x142d40f0
[ 2352.717980] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2352.717984] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2352.718111] nova: Start NOVA snapshot cleaner thread.
[ 2352.718114] nova: creating an empty nova of size 21474836480
[ 2352.718122] nova: Running snapshot cleaner thread
[ 2352.720716] nova: NOVA initialization finish
[ 2352.720733] nova: Current epoch id: 0
[ 2352.768312] drop-caches (25808): drop_caches: 3
[ 2368.852799] drop-caches (25833): drop_caches: 3
[ 2370.035534] nova: Current epoch id: 0
[ 2370.035594] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327a000, tail 0x1327a080
[ 2370.035601] nova: nova_save_blocknode_mappings_to_log: 13 blocknodes, 1 log pages, pi head 0x13963000, tail 0x139630d0
[ 2370.106559] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2370.106561] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2370.106714] nova: Start NOVA snapshot cleaner thread.
[ 2370.106717] nova: creating an empty nova of size 21474836480
[ 2370.106720] nova: Running snapshot cleaner thread
[ 2370.109079] nova: NOVA initialization finish
[ 2370.109089] nova: Current epoch id: 0
[ 2370.158135] drop-caches (25855): drop_caches: 3
[ 2386.414767] drop-caches (25879): drop_caches: 3
[ 2387.739418] nova: Current epoch id: 0
[ 2387.739511] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1394a000, tail 0x1394a080
[ 2387.739518] nova: nova_save_blocknode_mappings_to_log: 14 blocknodes, 1 log pages, pi head 0x1394b000, tail 0x1394b0e0
[ 2387.803004] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2387.803006] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2387.803197] nova: Start NOVA snapshot cleaner thread.
[ 2387.803199] nova: creating an empty nova of size 21474836480
[ 2387.803211] nova: Running snapshot cleaner thread
[ 2387.805574] nova: NOVA initialization finish
[ 2387.805585] nova: Current epoch id: 0
[ 2387.846192] drop-caches (25901): drop_caches: 3
[ 2403.932562] drop-caches (25934): drop_caches: 3
[ 2404.971327] nova: Current epoch id: 0
[ 2404.971383] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13284000, tail 0x13284080
[ 2404.971399] nova: nova_save_blocknode_mappings_to_log: 51 blocknodes, 1 log pages, pi head 0x13286000, tail 0x13286330
[ 2405.024727] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2405.024729] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2405.024807] nova: Start NOVA snapshot cleaner thread.
[ 2405.024810] nova: creating an empty nova of size 21474836480
[ 2405.024815] nova: Running snapshot cleaner thread
[ 2405.026934] nova: NOVA initialization finish
[ 2405.026943] nova: Current epoch id: 0
[ 2405.075656] drop-caches (25956): drop_caches: 3
[ 2421.259372] drop-caches (25983): drop_caches: 3
[ 2422.399302] nova: Current epoch id: 0
[ 2422.399351] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327c000, tail 0x1327c080
[ 2422.399362] nova: nova_save_blocknode_mappings_to_log: 29 blocknodes, 1 log pages, pi head 0x1327d000, tail 0x1327d1d0
[ 2422.471888] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2422.471890] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2422.472104] nova: Start NOVA snapshot cleaner thread.
[ 2422.472106] nova: creating an empty nova of size 21474836480
[ 2422.472114] nova: Running snapshot cleaner thread
[ 2422.474232] nova: NOVA initialization finish
[ 2422.474242] nova: Current epoch id: 0
[ 2422.537519] drop-caches (26005): drop_caches: 3
[ 2438.662492] drop-caches (26029): drop_caches: 3
[ 2439.847322] nova: Current epoch id: 0
[ 2439.847385] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13284000, tail 0x13284080
[ 2439.847393] nova: nova_save_blocknode_mappings_to_log: 18 blocknodes, 1 log pages, pi head 0x13286000, tail 0x13286120
[ 2439.926939] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2439.926941] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2439.927064] nova: Start NOVA snapshot cleaner thread.
[ 2439.927066] nova: creating an empty nova of size 21474836480
[ 2439.927075] nova: Running snapshot cleaner thread
[ 2439.929428] nova: NOVA initialization finish
[ 2439.929439] nova: Current epoch id: 0
[ 2439.982758] drop-caches (26051): drop_caches: 3
[ 2456.360535] drop-caches (26084): drop_caches: 3
[ 2457.707338] nova: Current epoch id: 0
[ 2457.707413] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13285000, tail 0x13285080
[ 2457.707426] nova: nova_save_blocknode_mappings_to_log: 43 blocknodes, 1 log pages, pi head 0x1328c000, tail 0x1328c2b0
[ 2457.815631] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2457.815632] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2457.815723] nova: Start NOVA snapshot cleaner thread.
[ 2457.815725] nova: creating an empty nova of size 21474836480
[ 2457.815727] nova: Running snapshot cleaner thread
[ 2457.817848] nova: NOVA initialization finish
[ 2457.817856] nova: Current epoch id: 0
[ 2457.862048] drop-caches (26106): drop_caches: 3
[ 2474.025741] drop-caches (26133): drop_caches: 3
[ 2475.219331] nova: Current epoch id: 0
[ 2475.219382] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13286000, tail 0x13286080
[ 2475.219391] nova: nova_save_blocknode_mappings_to_log: 25 blocknodes, 1 log pages, pi head 0x13287000, tail 0x13287190
[ 2475.279049] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2475.279052] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2475.279143] nova: Start NOVA snapshot cleaner thread.
[ 2475.279146] nova: creating an empty nova of size 21474836480
[ 2475.279153] nova: Running snapshot cleaner thread
[ 2475.281287] nova: NOVA initialization finish
[ 2475.281296] nova: Current epoch id: 0
[ 2475.336790] drop-caches (26155): drop_caches: 3
[ 2491.716136] drop-caches (26179): drop_caches: 3
[ 2493.143340] nova: Current epoch id: 0
[ 2493.143392] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327c000, tail 0x1327c080
[ 2493.143399] nova: nova_save_blocknode_mappings_to_log: 16 blocknodes, 1 log pages, pi head 0x13283000, tail 0x13283100
[ 2493.200688] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2493.200690] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2493.200849] nova: Start NOVA snapshot cleaner thread.
[ 2493.200851] nova: creating an empty nova of size 21474836480
[ 2493.200860] nova: Running snapshot cleaner thread
[ 2493.202965] nova: NOVA initialization finish
[ 2493.202974] nova: Current epoch id: 0
[ 2493.266869] drop-caches (26201): drop_caches: 3
[ 2537.582502] drop-caches (26250): drop_caches: 3
[ 2540.965803] nova: Current epoch id: 0
[ 2540.966260] nova: nova_save_inode_list_to_log: 1921 inode nodes, pi head 0x297073000, tail 0x29707a8f0
[ 2540.966266] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x29707b000, tail 0x29707b080
[ 2542.061231] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2542.061232] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2542.061319] nova: Start NOVA snapshot cleaner thread.
[ 2542.061321] nova: creating an empty nova of size 21474836480
[ 2542.061327] nova: Running snapshot cleaner thread
[ 2542.063506] nova: NOVA initialization finish
[ 2542.063515] nova: Current epoch id: 0
[ 2542.114263] drop-caches (26272): drop_caches: 3
[ 2583.718982] drop-caches (26300): drop_caches: 3
[ 2587.712608] nova: Current epoch id: 0
[ 2587.712729] nova: nova_save_inode_list_to_log: 259 inode nodes, pi head 0x296473000, tail 0x296474050
[ 2587.712735] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x296475000, tail 0x296475080
[ 2588.784277] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2588.784280] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2588.784401] nova: Start NOVA snapshot cleaner thread.
[ 2588.784404] nova: creating an empty nova of size 21474836480
[ 2588.784408] nova: Running snapshot cleaner thread
[ 2588.786572] nova: NOVA initialization finish
[ 2588.786585] nova: Current epoch id: 0
[ 2588.840582] drop-caches (26322): drop_caches: 3
[ 2625.442381] drop-caches (26346): drop_caches: 3
[ 2628.275028] nova: Current epoch id: 0
[ 2628.275100] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x2b3327000, tail 0x2b3327080
[ 2628.275105] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x2b3328000, tail 0x2b3328080
[ 2629.334814] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2629.334816] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2629.334929] nova: Start NOVA snapshot cleaner thread.
[ 2629.334931] nova: creating an empty nova of size 21474836480
[ 2629.334938] nova: Running snapshot cleaner thread
[ 2629.337166] nova: NOVA initialization finish
[ 2629.337175] nova: Current epoch id: 0
[ 2629.375510] drop-caches (26368): drop_caches: 3
[ 2642.112246] fxmark invoked oom-killer: gfp_mask=0x16040c0(GFP_KERNEL|__GFP_COMP|__GFP_NOTRACK), nodemask=0, order=0, oom_score_adj=0
[ 2642.112247] fxmark cpuset=/ mems_allowed=0
[ 2642.112251] CPU: 3 PID: 26373 Comm: fxmark Tainted: G OE 4.10.0-rc8-nova #4
[ 2642.112252] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
[ 2642.112253] Call Trace:
[ 2642.112259] dump_stack+0x63/0x81
[ 2642.112262] dump_header+0x7b/0x1fd
[ 2642.112266] ? apparmor_capable+0x2f/0x40
[ 2642.112268] oom_kill_process+0x208/0x3e0
[ 2642.112269] out_of_memory+0x120/0x4c0
[ 2642.112272] __alloc_pages_slowpath+0xb37/0xba0
[ 2642.112274] __alloc_pages_nodemask+0x209/0x260
[ 2642.112276] alloc_pages_current+0x95/0x140
[ 2642.112279] new_slab+0x425/0x6f0
[ 2642.112280] ___slab_alloc+0x3a0/0x4b0
[ 2642.112283] ? get_empty_filp+0x5c/0x1c0
[ 2642.112284] ? get_empty_filp+0x5c/0x1c0
[ 2642.112286] __slab_alloc+0x20/0x40
[ 2642.112287] kmem_cache_alloc+0x15e/0x1a0
[ 2642.112289] get_empty_filp+0x5c/0x1c0
[ 2642.112290] path_openat+0x40/0x14f0
[ 2642.112292] do_filp_open+0x91/0x100
[ 2642.112293] ? list_lru_add+0x5a/0x120
[ 2642.112295] ? __alloc_fd+0x46/0x170
[ 2642.112296] do_sys_open+0x130/0x220
[ 2642.112297] SyS_open+0x1e/0x20
[ 2642.112299] entry_SYSCALL_64_fastpath+0x1e/0xad
[ 2642.112300] RIP: 0033:0x7f8907ab7a70
[ 2642.112301] RSP: 002b:00007ffc1d18b0e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
[ 2642.112302] RAX: ffffffffffffffda RBX: 0000000000116ca5 RCX: 00007f8907ab7a70
[ 2642.112302] RDX: 00000000000001c0 RSI: 0000000000000042 RDI: 00007ffc1d18c100
[ 2642.112303] RBP: 00007ffc1d18c100 R08: 0000000000000001 R09: 0000000000000028
[ 2642.112303] R10: 0000000000000075 R11: 0000000000000246 R12: 00007f8907f9f000
[ 2642.112304] R13: 00007ffc1d18b100 R14: 00007f8907fa81c0 R15: 000056316fe65532
[ 2642.112305] Mem-Info:
[ 2642.112309] active_anon:467465 inactive_anon:2221 isolated_anon:0
active_file:4546 inactive_file:4386 isolated_file:0
unevictable:912 dirty:2 writeback:0 unstable:0
slab_reclaimable:3659301 slab_unreclaimable:16059
mapped:9567 shmem:2227 pagetables:2059 bounce:0
free:34891 free_pcp:0 free_cma:0
[ 2642.112312] Node 0 active_anon:1869860kB inactive_anon:8884kB active_file:18184kB inactive_file:17544kB unevictable:3648kB isolated(anon):0kB isolated(file):0kB mapped:38268kB dirty:8kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 1705984kB anon_thp: 8908kB writeback_tmp:0kB unstable:0kB pages_scanned:0 all_unreclaimable? no
[ 2642.112312] Node 0 DMA free:15908kB min:60kB low:72kB high:84kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 2642.112315] lowmem_reserve[]: 0 2984 17026 17026 17026
[ 2642.112317] Node 0 DMA32 free:67996kB min:11836kB low:14892kB high:17948kB active_anon:2048kB inactive_anon:0kB active_file:4kB inactive_file:4kB unevictable:0kB writepending:0kB present:3129332kB managed:3063764kB mlocked:0kB slab_reclaimable:2993092kB slab_unreclaimable:164kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 2642.112319] lowmem_reserve[]: 0 0 14041 14041 14041
[ 2642.112321] Node 0 Normal free:55660kB min:55684kB low:70060kB high:84436kB active_anon:1867812kB inactive_anon:8884kB active_file:18180kB inactive_file:17540kB unevictable:3648kB writepending:8kB present:14680064kB managed:14382792kB mlocked:3648kB slab_reclaimable:11644112kB slab_unreclaimable:64072kB kernel_stack:3936kB pagetables:8236kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 2642.112323] lowmem_reserve[]: 0 0 0 0 0
[ 2642.112325] Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15908kB
[ 2642.112331] Node 0 DMA32: 14*4kB (UEH) 23*8kB (UEH) 26*16kB (UMEH) 21*32kB (UE) 23*64kB (UMEH) 18*128kB (UMEH) 12*256kB (UMH) 9*512kB (UMEH) 4*1024kB (UMH) 3*2048kB (EH) 11*4096kB (M) = 68080kB
[ 2642.112338] Node 0 Normal: 2856*4kB (UMEH) 3223*8kB (UMEH) 305*16kB (UME) 160*32kB (UME) 141*64kB (ME) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 56232kB
[ 2642.112345] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 2642.112345] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 2642.112346] 11728 total pagecache pages
[ 2642.112347] 0 pages in swap cache
[ 2642.112347] Swap cache stats: add 0, delete 0, find 0/0
[ 2642.112348] Free swap = 0kB
[ 2642.112348] Total swap = 0kB
[ 2642.112349] 4456347 pages RAM
[ 2642.112349] 0 pages HighMem/MovableOnly
[ 2642.112350] 90731 pages reserved
[ 2642.112350] 0 pages cma reserved
[ 2642.112350] 0 pages hwpoisoned
[ 2642.112351] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[ 2642.112355] [ 699] 0 699 11554 1748 25 3 0 0 systemd-journal
[ 2642.112357] [ 748] 0 748 25746 353 18 3 0 0 lvmetad
[ 2642.112359] [ 788] 0 788 11048 990 22 3 0 -1000 systemd-udevd
[ 2642.112360] [ 1404] 0 1404 4029 227 11 3 0 0 dhclient
[ 2642.112361] [ 1564] 0 1564 7036 552 19 3 0 0 atd
[ 2642.112362] [ 1574] 106 1574 11327 875 26 3 0 -900 dbus-daemon
[ 2642.112363] [ 1594] 102 1594 12476 967 29 3 0 0 systemd-resolve
[ 2642.112364] [ 1612] 0 1612 1097 333 8 3 0 0 acpid
[ 2642.112366] [ 1618] 0 1618 7460 686 19 3 0 0 cron
[ 2642.112367] [ 1631] 104 1631 66414 899 32 4 0 0 rsyslogd
[ 2642.112368] [ 1660] 0 1660 11638 841 26 3 0 0 systemd-logind
[ 2642.112369] [ 1667] 0 1667 70987 1377 42 4 0 0 accounts-daemon
[ 2642.112370] [ 1687] 0 1687 87738 2944 30 6 0 0 snapd
[ 2642.112371] [ 1716] 0 1716 40223 318 16 3 0 0 lxcfs
[ 2642.112373] [ 1726] 0 1726 1124 199 8 3 0 0 sshguard-journa
[ 2642.112374] [ 1728] 0 1728 11950 1335 25 3 0 0 journalctl
[ 2642.112375] [ 1729] 0 1729 4046 626 11 3 0 0 sshguard
[ 2642.112376] [ 1754] 0 1754 1124 185 8 4 0 0 sshg-fw
[ 2642.112377] [ 1829] 0 1829 71578 1306 43 3 0 0 polkitd
[ 2642.112378] [ 1963] 0 1963 1304 30 7 3 0 0 iscsid
[ 2642.112379] [ 1965] 0 1965 1429 875 8 3 0 -17 iscsid
[ 2642.112380] [ 2033] 0 2033 3623 489 12 3 0 0 agetty
[ 2642.112382] [ 2076] 0 2076 3679 425 12 3 0 0 agetty
[ 2642.112383] [ 2105] 112 2105 24490 1157 21 3 0 0 ntpd
[ 2642.112384] [ 2108] 0 2108 17103 5361 37 3 0 0 google_ip_forwa
[ 2642.112385] [ 2115] 0 2115 17086 5391 37 3 0 0 google_accounts
[ 2642.112386] [ 2122] 0 2122 17502 1270 38 3 0 -1000 sshd
[ 2642.112388] [ 2124] 0 2124 17099 5329 39 3 0 0 google_clock_sk
[ 2642.112389] [ 2143] 113 2143 16280 1031 35 3 0 0 systemd
[ 2642.112390] [ 2151] 113 2151 21784 487 44 3 0 0 (sd-pam)
[ 2642.112391] [ 2170] 113 2170 4713 45 13 3 0 0 daemon
[ 2642.112392] [ 2184] 113 2184 2556257 449971 1031 12 0 0 java
[ 2642.112394] [ 2197] 0 2197 25359 1369 54 3 0 0 sshd
[ 2642.112395] [ 2215] 1001 2215 16280 1045 35 3 0 0 systemd
[ 2642.112396] [ 2216] 1001 2216 21784 487 44 3 0 0 (sd-pam)
[ 2642.112397] [ 2367] 1001 2367 25359 754 50 3 0 0 sshd
[ 2642.112398] [ 2377] 1001 2377 5401 1012 15 3 0 0 bash
[ 2642.112400] [25320] 113 25320 1124 199 8 3 0 0 sh
[ 2642.112401] [25323] 113 25323 1124 188 7 3 0 0 script.sh
[ 2642.112402] [25324] 113 25324 12034 3341 26 3 0 0 python3
[ 2642.112403] [26369] 113 26369 1124 190 8 3 0 0 sh
[ 2642.112404] [26370] 113 26370 1075 180 7 3 0 0 fxmark
[ 2642.112405] [26371] 113 26371 1075 20 7 3 0 0 fxmark
[ 2642.112406] [26372] 113 26372 1075 20 7 3 0 0 fxmark
[ 2642.112407] [26373] 113 26373 1075 20 7 3 0 0 fxmark
[ 2642.112408] Out of memory: Kill process 2184 (java) score 103 or sacrifice child
[ 2642.129387] Killed process 25320 (sh) total-vm:4496kB, anon-rss:72kB, file-rss:724kB, shmem-rss:0kB
[ 2655.645777] drop-caches (26406): drop_caches: 3
[ 2658.723383] nova: Current epoch id: 0
[ 2658.723428] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1dc6d000, tail 0x1dc6d080
[ 2658.723434] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x1dc6e000, tail 0x1dc6e080
[ 2658.796291] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2658.796293] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2658.796377] nova: Start NOVA snapshot cleaner thread.
[ 2658.796380] nova: creating an empty nova of size 21474836480
[ 2658.796385] nova: Running snapshot cleaner thread
[ 2658.798510] nova: NOVA initialization finish
[ 2658.798520] nova: Current epoch id: 0
[ 2658.844683] drop-caches (26428): drop_caches: 3
[ 2685.079639] drop-caches (26455): drop_caches: 3
[ 2688.339457] nova: Current epoch id: 0
[ 2688.339527] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1db76000, tail 0x1db76080
[ 2688.339533] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x1db77000, tail 0x1db77080
[ 2688.416087] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2688.416090] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2688.416218] nova: Start NOVA snapshot cleaner thread.
[ 2688.416220] nova: creating an empty nova of size 21474836480
[ 2688.416239] nova: Running snapshot cleaner thread
[ 2688.418357] nova: NOVA initialization finish
[ 2688.418365] nova: Current epoch id: 0
[ 2688.457291] drop-caches (26477): drop_caches: 3
[ 2710.324384] drop-caches (26501): drop_caches: 3
[ 2711.963443] nova: Current epoch id: 0
[ 2711.963494] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1db76000, tail 0x1db76080
[ 2711.963500] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x1db77000, tail 0x1db77080
[ 2712.036692] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2712.036694] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2712.036774] nova: Start NOVA snapshot cleaner thread.
[ 2712.036776] nova: creating an empty nova of size 21474836480
[ 2712.036782] nova: Running snapshot cleaner thread
[ 2712.038913] nova: NOVA initialization finish
[ 2712.038922] nova: Current epoch id: 0
[ 2712.093655] drop-caches (26523): drop_caches: 3
[ 2725.322054] nova error:
[ 2725.322057] ERROR: no inode log page available: 1 -28
[ 2725.322057] nova error:
[ 2725.322058] nova_initialize_inode_log ERROR: no inode log page available
[ 2725.322059] nova error:
[ 2725.322060] nova error:
[ 2725.322061] nova_append_file_write_entry failed
[ 2725.322062] nova: nova_cow_file_write: append inode entry failed
[ 2725.322063] ERROR: no inode log page available: 1 -28
[ 2725.322064] nova error:
[ 2725.322065] nova_initialize_inode_log ERROR: no inode log page available
[ 2725.322065] nova error:
[ 2725.322066] nova_append_file_write_entry failed
[ 2725.322066] nova: nova_cow_file_write: append inode entry failed
[ 2725.322068] nova error:
[ 2725.322069] ERROR: no inode log page available: 1 -28
[ 2725.322070] nova error:
[ 2725.322071] nova_initialize_inode_log ERROR: no inode log page available
[ 2725.322071] nova error:
[ 2725.322071] nova_append_file_write_entry failed
[ 2725.322072] nova: nova_cow_file_write: append inode entry failed
[ 2725.322074] nova error:
[ 2725.322075] ERROR: no inode log page available: 1 -28
[ 2725.322076] nova error:
[ 2725.322077] nova_initialize_inode_log ERROR: no inode log page available
[ 2725.322077] nova error:
[ 2725.322078] nova_append_file_write_entry failed
[ 2725.322079] nova: nova_cow_file_write: append inode entry failed
[ 2728.385978] drop-caches (26556): drop_caches: 3
[ 2731.949009] nova: Current epoch id: 0
[ 2731.949066] nova error:
[ 2731.949069] ERROR: no inode log page available: 1 -28
[ 2731.949070] nova: Error saving inode list: -28
[ 2731.949070] nova error:
[ 2731.949071] ERROR: no inode log page available: 1 -28
[ 2731.949071] nova: Error saving blocknode mappings: -28
[ 2733.333761] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2733.333763] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2733.333863] nova: Start NOVA snapshot cleaner thread.
[ 2733.333865] nova: creating an empty nova of size 21474836480
[ 2733.333875] nova: Running snapshot cleaner thread
[ 2733.336135] nova: NOVA initialization finish
[ 2733.336145] nova: Current epoch id: 0
[ 2733.391500] drop-caches (26578): drop_caches: 3
[ 2749.268320] nova error:
[ 2749.268322] ERROR: no inode log page available: 1 -28
[ 2749.268323] nova error:
[ 2749.268324] nova_initialize_inode_log ERROR: no inode log page available
[ 2749.268324] nova error:
[ 2749.268325] nova_append_file_write_entry failed
[ 2749.268326] nova: nova_cow_file_write: append inode entry failed
[ 2749.268331] nova error:
[ 2749.268333] ERROR: no inode log page available: 1 -28
[ 2749.268334] nova error:
[ 2749.268335] nova_initialize_inode_log ERROR: no inode log page available
[ 2749.268335] nova error:
[ 2749.268336] nova_append_file_write_entry failed
[ 2749.268336] nova: nova_cow_file_write: append inode entry failed
[ 2751.805447] drop-caches (26605): drop_caches: 3
[ 2755.311343] nova: Current epoch id: 0
[ 2755.311453] nova error:
[ 2755.311455] ERROR: no inode log page available: 1 -28
[ 2755.311456] nova: Error saving inode list: -28
[ 2755.311456] nova error:
[ 2755.311457] ERROR: no inode log page available: 1 -28
[ 2755.311457] nova: Error saving blocknode mappings: -28
[ 2756.925687] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2756.925689] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2756.925850] nova: Start NOVA snapshot cleaner thread.
[ 2756.925852] nova: creating an empty nova of size 21474836480
[ 2756.925876] nova: Running snapshot cleaner thread
[ 2756.928148] nova: NOVA initialization finish
[ 2756.928158] nova: Current epoch id: 0
[ 2756.975251] drop-caches (26627): drop_caches: 3
[ 2775.394442] drop-caches (26651): drop_caches: 3
[ 2778.264496] nova: Current epoch id: 0
[ 2778.264606] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x2d2ede000, tail 0x2d2ede080
[ 2778.264613] nova: nova_save_blocknode_mappings_to_log: 7 blocknodes, 1 log pages, pi head 0x2d2edf000, tail 0x2d2edf070
[ 2779.230753] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2779.230755] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2779.230830] nova: Start NOVA snapshot cleaner thread.
[ 2779.230832] nova: creating an empty nova of size 21474836480
[ 2779.230835] nova: Running snapshot cleaner thread
[ 2779.233162] nova: NOVA initialization finish
[ 2779.233173] nova: Current epoch id: 0
[ 2779.284815] drop-caches (26673): drop_caches: 3
[ 2783.522614] nova: nova_cow_file_write alloc blocks failed -28
[ 2783.522616] nova: nova_cow_file_write alloc blocks failed -28
[ 2783.522617] nova: nova_cow_file_write alloc blocks failed -28
[ 2783.522619] nova: nova_cow_file_write alloc blocks failed -28
[ 2785.002630] nova error:
[ 2785.002633] ERROR: no inode log page available: 256 -28
[ 2785.002634] nova error:
[ 2785.002634] nova_extend_inode_log ERROR: no inode log page available
[ 2785.002635] nova: curr_p 0x3b8823fc0, 36326 pages
[ 2785.002635] nova error:
[ 2785.002636] nova_append_setattr_entry failed
[ 2785.002636] nova: nova_handle_setattr_operation: append setattr entry failure
[ 2785.330585] drop-caches (26698): drop_caches: 3
[ 2785.684623] nova: Current epoch id: 0
[ 2785.684699] nova error:
[ 2785.684700] ERROR: no inode log page available: 1 -28
[ 2785.684701] nova: Error saving inode list: -28
[ 2785.684702] nova error:
[ 2785.684702] ERROR: no inode log page available: 1 -28
[ 2785.684703] nova: Error saving blocknode mappings: -28
[ 2785.746347] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2785.746349] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2785.746472] nova: Start NOVA snapshot cleaner thread.
[ 2785.746475] nova: creating an empty nova of size 21474836480
[ 2785.746482] nova: Running snapshot cleaner thread
[ 2785.748739] nova: NOVA initialization finish
[ 2785.748750] nova: Current epoch id: 0
[ 2785.774429] drop-caches (26720): drop_caches: 3
[ 2793.894362] nova: nova_cow_file_write alloc blocks failed -28
[ 2793.894363] nova: nova_cow_file_write alloc blocks failed -28
[ 2804.367848] drop-caches (26752): drop_caches: 3
[ 2805.407422] nova: Current epoch id: 0
[ 2805.407475] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13285000, tail 0x13285080
[ 2805.409530] nova: nova_save_blocknode_mappings_to_log: 9569 blocknodes, 38 log pages, pi head 0x13286000, tail 0x132abab0
[ 2805.460200] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2805.460202] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2805.460297] nova: Start NOVA snapshot cleaner thread.
[ 2805.460300] nova: creating an empty nova of size 21474836480
[ 2805.460308] nova: Running snapshot cleaner thread
[ 2805.462428] nova: NOVA initialization finish
[ 2805.462436] nova: Current epoch id: 0
[ 2805.519638] drop-caches (26774): drop_caches: 3
[ 2822.206448] nova: nova_cow_file_write alloc blocks failed -28
[ 2838.835059] drop-caches (26796): drop_caches: 3
[ 2840.055465] nova: Current epoch id: 0
[ 2840.055518] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327c000, tail 0x1327c080
[ 2840.055589] nova: nova_save_blocknode_mappings_to_log: 323 blocknodes, 2 log pages, pi head 0x1327d000, tail 0x1327e450
[ 2840.099738] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2840.099740] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2840.099821] nova: Start NOVA snapshot cleaner thread.
[ 2840.099823] nova: creating an empty nova of size 21474836480
[ 2840.099831] nova: Running snapshot cleaner thread
[ 2840.101927] nova: NOVA initialization finish
[ 2840.101936] nova: Current epoch id: 0
[ 2840.155346] drop-caches (26818): drop_caches: 3
[ 2859.892227] drop-caches (26847): drop_caches: 3
[ 2861.219713] nova: Current epoch id: 0
[ 2861.219772] nova: nova_save_inode_list_to_log: 39 inode nodes, pi head 0x13283000, tail 0x13283270
[ 2861.219854] nova: nova_save_blocknode_mappings_to_log: 383 blocknodes, 2 log pages, pi head 0x13284000, tail 0x13285810
[ 2861.287898] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2861.287900] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2861.288038] nova: Start NOVA snapshot cleaner thread.
[ 2861.288040] nova: creating an empty nova of size 21474836480
[ 2861.288047] nova: Running snapshot cleaner thread
[ 2861.290149] nova: NOVA initialization finish
[ 2861.290158] nova: Current epoch id: 0
[ 2861.335476] drop-caches (26869): drop_caches: 3
[ 2881.517843] drop-caches (26896): drop_caches: 3
[ 2882.939684] nova: Current epoch id: 0
[ 2882.939750] nova: nova_save_inode_list_to_log: 19 inode nodes, pi head 0x1327c000, tail 0x1327c130
[ 2882.939825] nova: nova_save_blocknode_mappings_to_log: 213 blocknodes, 1 log pages, pi head 0x1327d000, tail 0x1327dd50
[ 2883.021245] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2883.021248] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2883.021382] nova: Start NOVA snapshot cleaner thread.
[ 2883.021386] nova: creating an empty nova of size 21474836480
[ 2883.021391] nova: Running snapshot cleaner thread
[ 2883.023992] nova: NOVA initialization finish
[ 2883.024009] nova: Current epoch id: 0
[ 2883.075468] drop-caches (26918): drop_caches: 3
[ 2902.748827] drop-caches (26944): drop_caches: 3
[ 2904.219493] nova: Current epoch id: 0
[ 2904.219579] nova: nova_save_inode_list_to_log: 15 inode nodes, pi head 0x13286000, tail 0x132860f0
[ 2904.219606] nova: nova_save_blocknode_mappings_to_log: 118 blocknodes, 1 log pages, pi head 0x13287000, tail 0x13287760
[ 2904.288181] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2904.288184] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2904.288283] nova: Start NOVA snapshot cleaner thread.
[ 2904.288285] nova: creating an empty nova of size 21474836480
[ 2904.288293] nova: Running snapshot cleaner thread
[ 2904.290437] nova: NOVA initialization finish
[ 2904.290446] nova: Current epoch id: 0
[ 2904.345080] drop-caches (26966): drop_caches: 3
[ 2921.042869] drop-caches (26999): drop_caches: 3
[ 2922.271483] nova: Current epoch id: 0
[ 2922.271529] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327e000, tail 0x1327e080
[ 2922.271535] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x1327f000, tail 0x1327f080
[ 2922.344002] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2922.344004] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2922.344153] nova: Start NOVA snapshot cleaner thread.
[ 2922.344156] nova: creating an empty nova of size 21474836480
[ 2922.344163] nova: Running snapshot cleaner thread
[ 2922.346326] nova: NOVA initialization finish
[ 2922.346335] nova: Current epoch id: 0
[ 2922.391796] drop-caches (27021): drop_caches: 3
[ 2938.642862] drop-caches (27048): drop_caches: 3
[ 2940.031474] nova: Current epoch id: 0
[ 2940.031532] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x1327e000, tail 0x1327e080
[ 2940.031538] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x1327f000, tail 0x1327f080
[ 2940.092196] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2940.092198] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2940.092325] nova: Start NOVA snapshot cleaner thread.
[ 2940.092328] nova: creating an empty nova of size 21474836480
[ 2940.092334] nova: Running snapshot cleaner thread
[ 2940.094879] nova: NOVA initialization finish
[ 2940.094895] nova: Current epoch id: 0
[ 2940.157403] drop-caches (27070): drop_caches: 3
[ 2956.617647] drop-caches (27094): drop_caches: 3
[ 2957.951518] nova: Current epoch id: 0
[ 2957.951560] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x13287000, tail 0x13287080
[ 2957.951565] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x13288000, tail 0x13288080
[ 2958.030480] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2958.030482] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2958.030592] nova: Start NOVA snapshot cleaner thread.
[ 2958.030595] nova: creating an empty nova of size 21474836480
[ 2958.030602] nova: Running snapshot cleaner thread
[ 2958.032885] nova: NOVA initialization finish
[ 2958.032895] nova: Current epoch id: 0
[ 2958.144081] drop-caches (27116): drop_caches: 3
[ 2978.882500] drop-caches (2995): drop_caches: 3
[ 2980.183500] nova: Current epoch id: 0
[ 2980.183551] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x144cb000, tail 0x144cb080
[ 2980.183556] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x144cc000, tail 0x144cc080
[ 2980.235850] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 2980.235851] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 2980.235950] nova: Start NOVA snapshot cleaner thread.
[ 2980.235953] nova: creating an empty nova of size 21474836480
[ 2980.235958] nova: Running snapshot cleaner thread
[ 2980.238145] nova: NOVA initialization finish
[ 2980.238155] nova: Current epoch id: 0
[ 2980.281838] drop-caches (3017): drop_caches: 3
[ 3001.177680] drop-caches (11233): drop_caches: 3
[ 3002.551488] nova: Current epoch id: 0
[ 3002.551556] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x144cb000, tail 0x144cb080
[ 3002.551562] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x144cc000, tail 0x144cc080
[ 3002.615808] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3002.615810] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3002.615912] nova: Start NOVA snapshot cleaner thread.
[ 3002.615915] nova: creating an empty nova of size 21474836480
[ 3002.615918] nova: Running snapshot cleaner thread
[ 3002.618045] nova: NOVA initialization finish
[ 3002.618055] nova: Current epoch id: 0
[ 3002.669871] drop-caches (11255): drop_caches: 3
[ 3023.368878] drop-caches (19469): drop_caches: 3
[ 3024.563492] nova: Current epoch id: 0
[ 3024.563545] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x144cb000, tail 0x144cb080
[ 3024.563551] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x144cc000, tail 0x144cc080
[ 3024.629555] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3024.629557] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3024.629675] nova: Start NOVA snapshot cleaner thread.
[ 3024.629677] nova: creating an empty nova of size 21474836480
[ 3024.629683] nova: Running snapshot cleaner thread
[ 3024.631854] nova: NOVA initialization finish
[ 3024.631865] nova: Current epoch id: 0
[ 3024.694309] drop-caches (19491): drop_caches: 3
[ 3045.510916] drop-caches (27716): drop_caches: 3
[ 3046.895522] nova: Current epoch id: 0
[ 3046.895578] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x293272000, tail 0x293272080
[ 3046.895585] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x293273000, tail 0x293273080
[ 3046.968075] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3046.968077] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3046.968166] nova: Start NOVA snapshot cleaner thread.
[ 3046.968169] nova: creating an empty nova of size 21474836480
[ 3046.968172] nova: Running snapshot cleaner thread
[ 3046.970296] nova: NOVA initialization finish
[ 3046.970307] nova: Current epoch id: 0
[ 3047.011819] drop-caches (27738): drop_caches: 3
[ 3067.572487] drop-caches (3615): drop_caches: 3
[ 3068.911516] nova: Current epoch id: 0
[ 3068.911562] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0xb3272000, tail 0xb3272080
[ 3068.911567] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0xb3273000, tail 0xb3273080
[ 3068.989754] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3068.989756] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3068.989839] nova: Start NOVA snapshot cleaner thread.
[ 3068.989841] nova: creating an empty nova of size 21474836480
[ 3068.989844] nova: Running snapshot cleaner thread
[ 3068.992106] nova: NOVA initialization finish
[ 3068.992116] nova: Current epoch id: 0
[ 3069.059535] drop-caches (3637): drop_caches: 3
[ 3090.137839] drop-caches (11852): drop_caches: 3
[ 3091.323532] nova: Current epoch id: 0
[ 3091.323603] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x144cb000, tail 0x144cb080
[ 3091.323608] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x144cc000, tail 0x144cc080
[ 3091.378641] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3091.378643] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3091.378768] nova: Start NOVA snapshot cleaner thread.
[ 3091.378771] nova: creating an empty nova of size 21474836480
[ 3091.378775] nova: Running snapshot cleaner thread
[ 3091.380990] nova: NOVA initialization finish
[ 3091.381000] nova: Current epoch id: 0
[ 3091.430271] drop-caches (11874): drop_caches: 3
[ 3126.072719] drop-caches (11917): drop_caches: 3
[ 3130.002784] nova: Current epoch id: 0
[ 3130.002858] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x294c72000, tail 0x294c72080
[ 3130.002864] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x294c73000, tail 0x294c73080
[ 3131.706109] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3131.706111] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3131.706241] nova: Start NOVA snapshot cleaner thread.
[ 3131.706243] nova: creating an empty nova of size 21474836480
[ 3131.706249] nova: Running snapshot cleaner thread
[ 3131.708522] nova: NOVA initialization finish
[ 3131.708531] nova: Current epoch id: 0
[ 3131.747794] drop-caches (11939): drop_caches: 3
[ 3167.382034] drop-caches (11966): drop_caches: 3
[ 3171.704286] nova: Current epoch id: 0
[ 3171.704340] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x295072000, tail 0x295072080
[ 3171.704345] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x295073000, tail 0x295073080
[ 3173.573120] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3173.573122] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3173.573221] nova: Start NOVA snapshot cleaner thread.
[ 3173.573223] nova: creating an empty nova of size 21474836480
[ 3173.573231] nova: Running snapshot cleaner thread
[ 3173.575354] nova: NOVA initialization finish
[ 3173.575363] nova: Current epoch id: 0
[ 3173.616347] drop-caches (11988): drop_caches: 3
[ 3208.947122] drop-caches (12012): drop_caches: 3
[ 3212.968613] nova: Current epoch id: 0
[ 3212.968668] nova: nova_save_inode_list_to_log: 8 inode nodes, pi head 0x294c72000, tail 0x294c72080
[ 3212.968674] nova: nova_save_blocknode_mappings_to_log: 8 blocknodes, 1 log pages, pi head 0x294c73000, tail 0x294c73080
[ 3214.659614] nova: nova_get_nvmm_info: dev pmem0, phys_addr 0x280000000, virt_addr ffff95fc00000000, size 21474836480
[ 3214.659616] nova: measure timing 0, replica metadata 1, metadata checksum 1, inplace metadata update 1, inplace update 0, wprotect 0, mmap Cow 1, data checksum 1, data parity 1, DRAM checksum 1
[ 3214.659723] nova: Start NOVA snapshot cleaner thread.
[ 3214.659725] nova: creating an empty nova of size 21474836480
[ 3214.659727] nova: Running snapshot cleaner thread
[ 3214.661861] nova: NOVA initialization finish
[ 3214.661872] nova: Current epoch id: 0
[ 3214.714345] drop-caches (12034): drop_caches: 3
[ 3231.542154] nova: nova_insert_dir_radix_tree ERROR -12: n_dir_rd-0-514432.dat
[ 3231.542156] nova error:
[ 3231.542157] nova_rebuild_handle_dentry ERROR -12
[ 3231.542262] nova: nova_insert_dir_radix_tree ERROR -12: n_dir_rd-0-1084475.dat
[ 3231.542262] nova error:
[ 3231.542263] nova_create return -12

Copied from original issue: NVSL/nova-dev#16

@stevenjswanson
Member Author

From @Andiry on June 14, 2017 0:59

  1. Do you encounter this issue when running fxmark?

  2. Can you specify which exact test causes this issue?

  3. FxMark has a known issue on systems with small DRAM. It creates as many files as possible, filling the whole NVMM space. Since NOVA uses a radix tree for each directory, with the hash of the file name as the key, creating too many files consumes all the DRAM for the radix tree and causes OOM (see the sketch below).
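
For concreteness, here is a minimal sketch of that indexing pattern using the Linux radix-tree API. This is illustrative only, not NOVA's actual code; dir_tree and dir_insert() are placeholder names.

/*
 * One radix tree per directory: key = hash(filename), value = dentry
 * address.  Because the hash spreads keys uniformly over the 64-bit
 * range, neighboring entries share no key prefix, so each insert may
 * allocate a whole chain of interior nodes.  radix_tree_insert()
 * returns -ENOMEM (-12) when those allocations fail, which matches
 * the nova_insert_dir_radix_tree ERROR -12 lines in the dmesg above.
 */
#include <linux/radix-tree.h>

static RADIX_TREE(dir_tree, GFP_ATOMIC);        /* per-directory tree */

static int dir_insert(unsigned long name_hash, void *dentry)
{
        return radix_tree_insert(&dir_tree, name_hash, dentry);
}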

@stevenjswanson
Member Author

Yes, on fxmark.

It's a new issue: the rest works with what's in Nova-linux, but fails for current nova-dev.

-steve

--
Composed on (and maybe dictated to) my phone.


@stevenjswanson
Member Author

Here’s the output from fxmark:

NVMM:NOVA:DRBL:1:directio

INFO: DirectIO Enabled

ncpu secs works works/sec real.sec user.sec nice.sec sys.sec idle.sec iowait.sec irq.sec softirq.sec steal.sec guest.sec user.util nice.util sys.util idle.util iowait.util irq.util softirq.util steal.util guest.util

1 15.000033 10827945.000000 721861.411905 15.3331 0.24 0 14.84 107.56 0.01 0 0 0 0 0.195655 0 12.098 87.6861 0.0081523 0 0 0 0

NUM_TEST_CONF = 54

trying to unmount... /mnt/ramdisk
trying to unmount... /mnt/ramdisk
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code -1
Finished: FAILURE

The “trying to unmount” messages suggest it finished this test:

NVMM:NOVA:DRBL:1:directio

INFO: DirectIO Enabled

so maybe it crashed during unmount or while starting up for the next test.

-steve


@stevenjswanson
Member Author

How big is “small DRAM”?

-steve


@stevenjswanson
Member Author

From @Andiry on June 14, 2017 8:00

To efficiently locate a file in a directory, we use hash(filename) as the key and the corresponding dentry address as the value in a radix tree. hash(filename) is supposed to be evenly distributed over the 64-bit integer range. A radix tree is a trie.

So if the hash values do not share a common prefix, the space efficiency is pretty bad: a single key can take (4KB * depth_of_tree) in the worst case, and DRAM is consumed quickly. Assume 5,000,000 files and a radix tree 3 levels deep: in the worst case it takes 5 * 10^6 * 4KB * 3 ≈ 60GB.
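
To sanity-check that figure, here is a tiny userspace C calculation. The 4KB node size and 3-level depth are the assumptions stated above, not measured values.

#include <stdio.h>

int main(void)
{
        unsigned long long nfiles = 5000000ULL; /* files created by FxMark */
        unsigned long long node   = 4096ULL;    /* assumed bytes per tree node */
        unsigned long long depth  = 3ULL;       /* assumed tree depth */

        /* worst case: every key allocates its own node at every level */
        unsigned long long bytes = nfiles * node * depth;
        printf("%llu GB\n", bytes / 1000000000ULL); /* prints: 61 GB */
        return 0;
}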

stevenjswanson pushed a commit that referenced this issue Jul 5, 2017
We cannot do printk() from tk_debug_account_sleep_time(), because
tk_debug_account_sleep_time() is called under tk_core seq lock.
The reason why printk() is unsafe there is that console_sem may
invoke scheduler (up()->wake_up_process()->activate_task()), which,
in turn, can return back to timekeeping code, for instance, via
get_time()->ktime_get(), deadlocking the system on tk_core seq lock.
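
Per that description, the natural fix is to log from this path with printk_deferred(), which appends to the log buffer and flushes the console later from irq_work context instead of taking console_sem inline. A hedged sketch (the message format here is illustrative, not necessarily the exact patch):

/*
 * tk_debug_account_sleep_time() runs under the tk_core seq lock, so
 * it must not take console_sem via plain printk().  printk_deferred()
 * defers console output, breaking the cycle described above.
 */
static void tk_debug_account_sleep_time(struct timespec64 *t)
{
        printk_deferred(KERN_INFO "Suspended for %lld.%03lu seconds\n",
                        (s64)t->tv_sec, t->tv_nsec / NSEC_PER_MSEC);
}

The lockdep report from the commit message follows.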

[   48.950592] ======================================================
[   48.950622] [ INFO: possible circular locking dependency detected ]
[   48.950622] 4.10.0-rc7-next-20170213+ #101 Not tainted
[   48.950622] -------------------------------------------------------
[   48.950622] kworker/0:0/3 is trying to acquire lock:
[   48.950653]  (tk_core){----..}, at: [<c01cc624>] retrigger_next_event+0x4c/0x90
[   48.950683]
               but task is already holding lock:
[   48.950683]  (hrtimer_bases.lock){-.-...}, at: [<c01cc610>] retrigger_next_event+0x38/0x90
[   48.950714]
               which lock already depends on the new lock.

[   48.950714]
               the existing dependency chain (in reverse order) is:
[   48.950714]
               -> #5 (hrtimer_bases.lock){-.-...}:
[   48.950744]        _raw_spin_lock_irqsave+0x50/0x64
[   48.950775]        lock_hrtimer_base+0x28/0x58
[   48.950775]        hrtimer_start_range_ns+0x20/0x5c8
[   48.950775]        __enqueue_rt_entity+0x320/0x360
[   48.950805]        enqueue_rt_entity+0x2c/0x44
[   48.950805]        enqueue_task_rt+0x24/0x94
[   48.950836]        ttwu_do_activate+0x54/0xc0
[   48.950836]        try_to_wake_up+0x248/0x5c8
[   48.950836]        __setup_irq+0x420/0x5f0
[   48.950836]        request_threaded_irq+0xdc/0x184
[   48.950866]        devm_request_threaded_irq+0x58/0xa4
[   48.950866]        omap_i2c_probe+0x530/0x6a0
[   48.950897]        platform_drv_probe+0x50/0xb0
[   48.950897]        driver_probe_device+0x1f8/0x2cc
[   48.950897]        __driver_attach+0xc0/0xc4
[   48.950927]        bus_for_each_dev+0x6c/0xa0
[   48.950927]        bus_add_driver+0x100/0x210
[   48.950927]        driver_register+0x78/0xf4
[   48.950958]        do_one_initcall+0x3c/0x16c
[   48.950958]        kernel_init_freeable+0x20c/0x2d8
[   48.950958]        kernel_init+0x8/0x110
[   48.950988]        ret_from_fork+0x14/0x24
[   48.950988]
               -> #4 (&rt_b->rt_runtime_lock){-.-...}:
[   48.951019]        _raw_spin_lock+0x40/0x50
[   48.951019]        rq_offline_rt+0x9c/0x2bc
[   48.951019]        set_rq_offline.part.2+0x2c/0x58
[   48.951049]        rq_attach_root+0x134/0x144
[   48.951049]        cpu_attach_domain+0x18c/0x6f4
[   48.951049]        build_sched_domains+0xba4/0xd80
[   48.951080]        sched_init_smp+0x68/0x10c
[   48.951080]        kernel_init_freeable+0x160/0x2d8
[   48.951080]        kernel_init+0x8/0x110
[   48.951080]        ret_from_fork+0x14/0x24
[   48.951110]
               -> #3 (&rq->lock){-.-.-.}:
[   48.951110]        _raw_spin_lock+0x40/0x50
[   48.951141]        task_fork_fair+0x30/0x124
[   48.951141]        sched_fork+0x194/0x2e0
[   48.951141]        copy_process.part.5+0x448/0x1a20
[   48.951171]        _do_fork+0x98/0x7e8
[   48.951171]        kernel_thread+0x2c/0x34
[   48.951171]        rest_init+0x1c/0x18c
[   48.951202]        start_kernel+0x35c/0x3d4
[   48.951202]        0x8000807c
[   48.951202]
               -> #2 (&p->pi_lock){-.-.-.}:
[   48.951232]        _raw_spin_lock_irqsave+0x50/0x64
[   48.951232]        try_to_wake_up+0x30/0x5c8
[   48.951232]        up+0x4c/0x60
[   48.951263]        __up_console_sem+0x2c/0x58
[   48.951263]        console_unlock+0x3b4/0x650
[   48.951263]        vprintk_emit+0x270/0x474
[   48.951293]        vprintk_default+0x20/0x28
[   48.951293]        printk+0x20/0x30
[   48.951324]        kauditd_hold_skb+0x94/0xb8
[   48.951324]        kauditd_thread+0x1a4/0x56c
[   48.951324]        kthread+0x104/0x148
[   48.951354]        ret_from_fork+0x14/0x24
[   48.951354]
               -> #1 ((console_sem).lock){-.....}:
[   48.951385]        _raw_spin_lock_irqsave+0x50/0x64
[   48.951385]        down_trylock+0xc/0x2c
[   48.951385]        __down_trylock_console_sem+0x24/0x80
[   48.951385]        console_trylock+0x10/0x8c
[   48.951416]        vprintk_emit+0x264/0x474
[   48.951416]        vprintk_default+0x20/0x28
[   48.951416]        printk+0x20/0x30
[   48.951446]        tk_debug_account_sleep_time+0x5c/0x70
[   48.951446]        __timekeeping_inject_sleeptime.constprop.3+0x170/0x1a0
[   48.951446]        timekeeping_resume+0x218/0x23c
[   48.951477]        syscore_resume+0x94/0x42c
[   48.951477]        suspend_enter+0x554/0x9b4
[   48.951477]        suspend_devices_and_enter+0xd8/0x4b4
[   48.951507]        enter_state+0x934/0xbd4
[   48.951507]        pm_suspend+0x14/0x70
[   48.951507]        state_store+0x68/0xc8
[   48.951538]        kernfs_fop_write+0xf4/0x1f8
[   48.951538]        __vfs_write+0x1c/0x114
[   48.951538]        vfs_write+0xa0/0x168
[   48.951568]        SyS_write+0x3c/0x90
[   48.951568]        __sys_trace_return+0x0/0x10
[   48.951568]
               -> #0 (tk_core){----..}:
[   48.951599]        lock_acquire+0xe0/0x294
[   48.951599]        ktime_get_update_offsets_now+0x5c/0x1d4
[   48.951629]        retrigger_next_event+0x4c/0x90
[   48.951629]        on_each_cpu+0x40/0x7c
[   48.951629]        clock_was_set_work+0x14/0x20
[   48.951660]        process_one_work+0x2b4/0x808
[   48.951660]        worker_thread+0x3c/0x550
[   48.951660]        kthread+0x104/0x148
[   48.951690]        ret_from_fork+0x14/0x24
[   48.951690]
               other info that might help us debug this:

[   48.951690] Chain exists of:
                 tk_core --> &rt_b->rt_runtime_lock --> hrtimer_bases.lock

[   48.951721]  Possible unsafe locking scenario:

[   48.951721]        CPU0                    CPU1
[   48.951721]        ----                    ----
[   48.951721]   lock(hrtimer_bases.lock);
[   48.951751]                                lock(&rt_b->rt_runtime_lock);
[   48.951751]                                lock(hrtimer_bases.lock);
[   48.951751]   lock(tk_core);
[   48.951782]
                *** DEADLOCK ***

[   48.951782] 3 locks held by kworker/0:0/3:
[   48.951782]  #0:  ("events"){.+.+.+}, at: [<c0156590>] process_one_work+0x1f8/0x808
[   48.951812]  #1:  (hrtimer_work){+.+...}, at: [<c0156590>] process_one_work+0x1f8/0x808
[   48.951843]  #2:  (hrtimer_bases.lock){-.-...}, at: [<c01cc610>] retrigger_next_event+0x38/0x90
[   48.951843]   stack backtrace:
[   48.951873] CPU: 0 PID: 3 Comm: kworker/0:0 Not tainted 4.10.0-rc7-next-20170213+
[   48.951904] Workqueue: events clock_was_set_work
[   48.951904] [<c0110208>] (unwind_backtrace) from [<c010c224>] (show_stack+0x10/0x14)
[   48.951934] [<c010c224>] (show_stack) from [<c04ca6c0>] (dump_stack+0xac/0xe0)
[   48.951934] [<c04ca6c0>] (dump_stack) from [<c019b5cc>] (print_circular_bug+0x1d0/0x308)
[   48.951965] [<c019b5cc>] (print_circular_bug) from [<c019d2a8>] (validate_chain+0xf50/0x1324)
[   48.951965] [<c019d2a8>] (validate_chain) from [<c019ec18>] (__lock_acquire+0x468/0x7e8)
[   48.951995] [<c019ec18>] (__lock_acquire) from [<c019f634>] (lock_acquire+0xe0/0x294)
[   48.951995] [<c019f634>] (lock_acquire) from [<c01d0ea0>] (ktime_get_update_offsets_now+0x5c/0x1d4)
[   48.952026] [<c01d0ea0>] (ktime_get_update_offsets_now) from [<c01cc624>] (retrigger_next_event+0x4c/0x90)
[   48.952026] [<c01cc624>] (retrigger_next_event) from [<c01e4e24>] (on_each_cpu+0x40/0x7c)
[   48.952056] [<c01e4e24>] (on_each_cpu) from [<c01cafc4>] (clock_was_set_work+0x14/0x20)
[   48.952056] [<c01cafc4>] (clock_was_set_work) from [<c015664c>] (process_one_work+0x2b4/0x808)
[   48.952087] [<c015664c>] (process_one_work) from [<c0157774>] (worker_thread+0x3c/0x550)
[   48.952087] [<c0157774>] (worker_thread) from [<c015d644>] (kthread+0x104/0x148)
[   48.952087] [<c015d644>] (kthread) from [<c0107830>] (ret_from_fork+0x14/0x24)

Replace printk() with printk_deferred(), which does not call into
the scheduler.
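
For illustration, a minimal sketch of that change (the helper name `report_sleep_time` is hypothetical; only the printk_deferred() call mirrors the fix described):

```c
#include <linux/printk.h>
#include <linux/time64.h>

/* Sketch of the fix described above: inside the tk_core seq-lock
 * region, log via printk_deferred(), which queues the message
 * without taking console_sem (and so never wakes the scheduler). */
static void report_sleep_time(const struct timespec64 *t)
{
	printk_deferred(KERN_INFO "Suspended for %lld.%03lu seconds\n",
			(s64)t->tv_sec, t->tv_nsec / NSEC_PER_MSEC);
}
```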

Fixes: 0bf43f1 ("timekeeping: Prints the amounts of time spent during suspend")
Reported-and-tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Rafael J . Wysocki" <rjw@rjwysocki.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: John Stultz <john.stultz@linaro.org>
Cc: "[4.9+]" <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/20170215044332.30449-1-sergey.senozhatsky@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
stevenjswanson pushed a commit that referenced this issue Jul 5, 2017
Commit 6664498 ("packet: call fanout_release, while UNREGISTERING a
netdev"), unfortunately, introduced the following issues.

1. fanout_release() calls mutex_lock(&fanout_mutex) from inside an RCU
read-side critical section. rcu_read_lock() disables preemption in most
configurations, which prohibits calling sleeping functions.

[  ] include/linux/rcupdate.h:560 Illegal context switch in RCU read-side critical section!
[  ]
[  ] rcu_scheduler_active = 1, debug_locks = 0
[  ] 4 locks held by ovs-vswitchd/1969:
[  ]  #0:  (cb_lock){++++++}, at: [<ffffffff8158a6c9>] genl_rcv+0x19/0x40
[  ]  #1:  (ovs_mutex){+.+.+.}, at: [<ffffffffa04878ca>] ovs_vport_cmd_del+0x4a/0x100 [openvswitch]
[  ]  #2:  (rtnl_mutex){+.+.+.}, at: [<ffffffff81564157>] rtnl_lock+0x17/0x20
[  ]  #3:  (rcu_read_lock){......}, at: [<ffffffff81614165>] packet_notifier+0x5/0x3f0
[  ]
[  ] Call Trace:
[  ]  [<ffffffff813770c1>] dump_stack+0x85/0xc4
[  ]  [<ffffffff810c9077>] lockdep_rcu_suspicious+0x107/0x110
[  ]  [<ffffffff810a2da7>] ___might_sleep+0x57/0x210
[  ]  [<ffffffff810a2fd0>] __might_sleep+0x70/0x90
[  ]  [<ffffffff8162e80c>] mutex_lock_nested+0x3c/0x3a0
[  ]  [<ffffffff810de93f>] ? vprintk_default+0x1f/0x30
[  ]  [<ffffffff81186e88>] ? printk+0x4d/0x4f
[  ]  [<ffffffff816106dd>] fanout_release+0x1d/0xe0
[  ]  [<ffffffff81614459>] packet_notifier+0x2f9/0x3f0

2. mutex_lock(&fanout_mutex) is called inside spin_lock(&po->bind_lock),
triggering "sleeping function called from invalid context":

[  ] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
[  ] in_atomic(): 1, irqs_disabled(): 0, pid: 1969, name: ovs-vswitchd
[  ] INFO: lockdep is turned off.
[  ] Call Trace:
[  ]  [<ffffffff813770c1>] dump_stack+0x85/0xc4
[  ]  [<ffffffff810a2f52>] ___might_sleep+0x202/0x210
[  ]  [<ffffffff810a2fd0>] __might_sleep+0x70/0x90
[  ]  [<ffffffff8162e80c>] mutex_lock_nested+0x3c/0x3a0
[  ]  [<ffffffff816106dd>] fanout_release+0x1d/0xe0
[  ]  [<ffffffff81614459>] packet_notifier+0x2f9/0x3f0

3. dev_remove_pack(&fanout->prot_hook) is called from inside
spin_lock(&po->bind_lock) or an RCU read-side critical section;
dev_remove_pack() calls synchronize_net(), which might sleep.

[  ] BUG: scheduling while atomic: ovs-vswitchd/1969/0x00000002
[  ] INFO: lockdep is turned off.
[  ] Call Trace:
[  ]  [<ffffffff813770c1>] dump_stack+0x85/0xc4
[  ]  [<ffffffff81186274>] __schedule_bug+0x64/0x73
[  ]  [<ffffffff8162b8cb>] __schedule+0x6b/0xd10
[  ]  [<ffffffff8162c5db>] schedule+0x6b/0x80
[  ]  [<ffffffff81630b1d>] schedule_timeout+0x38d/0x410
[  ]  [<ffffffff810ea3fd>] synchronize_sched_expedited+0x53d/0x810
[  ]  [<ffffffff810ea6de>] synchronize_rcu_expedited+0xe/0x10
[  ]  [<ffffffff8154eab5>] synchronize_net+0x35/0x50
[  ]  [<ffffffff8154eae3>] dev_remove_pack+0x13/0x20
[  ]  [<ffffffff8161077e>] fanout_release+0xbe/0xe0
[  ]  [<ffffffff81614459>] packet_notifier+0x2f9/0x3f0

4. fanout_release() races with concurrent calls from other CPUs.

To fix the above problems, remove the call to fanout_release() under
rcu_read_lock(). Instead, call __dev_remove_pack(&fanout->prot_hook), so
that netdev_run_todo() sees that the &dev->ptype_specific list is empty.
To achieve this, I moved dev_{add,remove}_pack() out of
fanout_{add,release} into __fanout_{link,unlink}, so a call to
{,__}unregister_prot_hook() will make sure fanout->prot_hook is removed as well.
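
For illustration, a minimal sketch of the illegal pattern behind splats 1-3 above (`fanout_mutex_sketch` and `broken_notifier_path` are hypothetical stand-ins):

```c
#include <linux/mutex.h>
#include <linux/rcupdate.h>

static DEFINE_MUTEX(fanout_mutex_sketch);	/* hypothetical stand-in */

/* Taking a sleeping lock while inside an RCU read-side critical
 * section, where preemption is typically disabled, is exactly what
 * lockdep flags in the traces above. */
static void broken_notifier_path(void)
{
	rcu_read_lock();
	mutex_lock(&fanout_mutex_sketch);	/* BUG: may sleep under RCU */
	/* ... teardown work ... */
	mutex_unlock(&fanout_mutex_sketch);
	rcu_read_unlock();
}
```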

Fixes: 6664498 ("packet: call fanout_release, while UNREGISTERING a netdev")
Reported-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Anoob Soman <anoob.soman@citrix.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
stevenjswanson pushed a commit that referenced this issue Jul 5, 2017
Use the rcuidle console tracepoint because, apparently, it may be issued
from an idle CPU:

  hw-breakpoint: Failed to enable monitor mode on CPU 0.
  hw-breakpoint: CPU 0 failed to disable vector catch

  ===============================
  [ ERR: suspicious RCU usage.  ]
  4.10.0-rc8-next-20170215+ #119 Not tainted
  -------------------------------
  ./include/trace/events/printk.h:32 suspicious rcu_dereference_check() usage!

  other info that might help us debug this:

  RCU used illegally from idle CPU!
  rcu_scheduler_active = 2, debug_locks = 0
  RCU used illegally from extended quiescent state!
  2 locks held by swapper/0/0:
   #0:  (cpu_pm_notifier_lock){......}, at: [<c0237e2c>] cpu_pm_exit+0x10/0x54
   #1:  (console_lock){+.+.+.}, at: [<c01ab350>] vprintk_emit+0x264/0x474

  stack backtrace:
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.10.0-rc8-next-20170215+ #119
  Hardware name: Generic OMAP4 (Flattened Device Tree)
    console_unlock
    vprintk_emit
    vprintk_default
    printk
    reset_ctrl_regs
    dbg_cpu_pm_notify
    notifier_call_chain
    cpu_pm_exit
    omap_enter_idle_coupled
    cpuidle_enter_state
    cpuidle_enter_state_coupled
    do_idle
    cpu_startup_entry
    start_kernel

This RCU warning, however, is suppressed by lockdep_off() in printk().
lockdep_off() increments the ->lockdep_recursion counter and thus
disables RCU_LOCKDEP_WARN() and debug_lockdep_rcu_enabled(), which
require lockdep to be enabled (current->lockdep_recursion == 0).
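
For illustration, a minimal sketch of the change described (the wrapper `emit_console_trace` is hypothetical; the real call site is the console tracepoint in the printk path):

```c
#include <trace/events/printk.h>

/* Fire the _rcuidle variant of the console tracepoint, which is safe
 * from an idle CPU (an RCU extended quiescent state). */
static void emit_console_trace(const char *text, size_t len)
{
	trace_console_rcuidle(text, len);  /* was: trace_console(text, len) */
}
```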

Link: http://lkml.kernel.org/r/20170217015932.11898-1-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Tony Lindgren <tony@atomide.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Russell King <rmk@armlinux.org.uk>
Cc: <stable@vger.kernel.org> [3.4+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
"err" needs to be left set to -EFAULT if split_huge_page succeeds.
Otherwise if "err" gets clobbered with zero and write_protect_page
fails, try_to_merge_one_page() will succeed instead of returning -EFAULT
and then try_to_merge_with_ksm_page() will continue thinking kpage is a
PageKsm when in fact it's still an anonymous page.  Eventually it'll
crash in page_add_anon_rmap.

This has been reproduced on a Fedora 25 kernel, but it reproduces with
upstream too.

The bug was introduced by commit f765f54 ("ksm: prepare to new THP
semantics") in v4.5.
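
For illustration, a minimal sketch of the control flow described above (the shape follows the commit message; the function name and `write_protect_page_fails()` are hypothetical stand-ins):

```c
#include <linux/errno.h>
#include <linux/huge_mm.h>
#include <linux/mm.h>

bool write_protect_page_fails(struct page *page);	/* hypothetical */

/* Illustrative shape of the path described above. */
static int try_to_merge_one_page_sketch(struct page *page)
{
	int err = -EFAULT;

	if (PageTransCompound(page)) {
		/* Buggy form was "err = split_huge_page(page);", which
		 * clobbers err with 0 on success. The fix keeps
		 * err == -EFAULT across a successful split: */
		if (split_huge_page(page))
			goto out;
	}

	if (write_protect_page_fails(page))
		goto out;	/* err is still -EFAULT, as required */

	err = 0;		/* report success only on the happy path */
out:
	return err;
}
```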

    page:fffff67546ce1cc0 count:4 mapcount:2 mapping:ffffa094551e36e1 index:0x7f0f46673
    flags: 0x2ffffc0004007c(referenced|uptodate|dirty|lru|active|swapbacked)
    page dumped because: VM_BUG_ON_PAGE(!PageLocked(page))
    page->mem_cgroup:ffffa09674bf0000
    ------------[ cut here ]------------
    kernel BUG at mm/rmap.c:1222!
    CPU: 1 PID: 76 Comm: ksmd Not tainted 4.9.3-200.fc25.x86_64 #1
    RIP: do_page_add_anon_rmap+0x1c4/0x240
    Call Trace:
      page_add_anon_rmap+0x18/0x20
      try_to_merge_with_ksm_page+0x50b/0x780
      ksm_scan_thread+0x1211/0x1410
      ? prepare_to_wait_event+0x100/0x100
      ? try_to_merge_with_ksm_page+0x780/0x780
      kthread+0xd9/0xf0
      ? kthread_park+0x60/0x60
      ret_from_fork+0x25/0x30

Fixes: f765f54 ("ksm: prepare to new THP semantics")
Link: http://lkml.kernel.org/r/20170513131040.21732-1-aarcange@redhat.com
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Reported-by: Federico Simoncelli <fsimonce@redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
… sizing

We have seen an early OOM killer invocation on ppc64 systems with
crashkernel=4096M:

	kthreadd invoked oom-killer: gfp_mask=0x16040c0(GFP_KERNEL|__GFP_COMP|__GFP_NOTRACK), nodemask=7, order=0, oom_score_adj=0
	kthreadd cpuset=/ mems_allowed=7
	CPU: 0 PID: 2 Comm: kthreadd Not tainted 4.4.68-1.gd7fe927-default #1
	Call Trace:
	  dump_stack+0xb0/0xf0 (unreliable)
	  dump_header+0xb0/0x258
	  out_of_memory+0x5f0/0x640
	  __alloc_pages_nodemask+0xa8c/0xc80
	  kmem_getpages+0x84/0x1a0
	  fallback_alloc+0x2a4/0x320
	  kmem_cache_alloc_node+0xc0/0x2e0
	  copy_process.isra.25+0x260/0x1b30
	  _do_fork+0x94/0x470
	  kernel_thread+0x48/0x60
	  kthreadd+0x264/0x330
	  ret_from_kernel_thread+0x5c/0xa4

	Mem-Info:
	active_anon:0 inactive_anon:0 isolated_anon:0
	 active_file:0 inactive_file:0 isolated_file:0
	 unevictable:0 dirty:0 writeback:0 unstable:0
	 slab_reclaimable:5 slab_unreclaimable:73
	 mapped:0 shmem:0 pagetables:0 bounce:0
	 free:0 free_pcp:0 free_cma:0
	Node 7 DMA free:0kB min:0kB low:0kB high:0kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:52428800kB managed:110016kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:320kB slab_unreclaimable:4672kB kernel_stack:1152kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
	lowmem_reserve[]: 0 0 0 0
	Node 7 DMA: 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 0kB
	0 total pagecache pages
	0 pages in swap cache
	Swap cache stats: add 0, delete 0, find 0/0
	Free swap  = 0kB
	Total swap = 0kB
	819200 pages RAM
	0 pages HighMem/MovableOnly
	817481 pages reserved
	0 pages cma reserved
	0 pages hwpoisoned

the reason is that the managed memory is too low (only 110MB) while the
rest of the 50GB is still waiting for the deferred initialization to be
done.  update_defer_init estimates the initial memory to initialize to
be at least 2GB, but it doesn't consider any memory allocated in that
range.  In this particular case we've had

	Reserving 4096MB of memory at 128MB for crashkernel (System RAM: 51200MB)

so the low 2GB is mostly depleted.

Fix this by considering memblock allocations in the initial static
initialization estimation.  Move the max_initialise to
reset_deferred_meminit and implement a simple memblock_reserved_memory
helper which iterates all reserved blocks and sums the size of all that
start below the given address.  The cumulative size is then added on top
of the initial estimation.  This is still not ideal because
reset_deferred_meminit doesn't consider holes, so a reservation might
lie above the initial estimation and be ignored, but let's keep the
logic simple until we really need to handle more complicated cases.
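
For illustration, a minimal sketch of the helper described above (the name follows the commit message; the exact bounds handling in the final kernel code may differ):

```c
#include <linux/memblock.h>

/* Sum the size of all reserved memblock regions that start below
 * @addr, so the deferred-init estimate can account for them. */
static phys_addr_t memblock_reserved_memory(phys_addr_t addr)
{
	struct memblock_region *r;
	phys_addr_t total = 0;

	for_each_memblock(reserved, r) {
		if (r->base < addr)
			total += r->size;
	}
	return total;
}
```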

Fixes: 3a80a7f ("mm: meminit: initialise a subset of struct pages if CONFIG_DEFERRED_STRUCT_PAGE_INIT is set")
Link: http://lkml.kernel.org/r/20170531104010.GI27783@dhcp22.suse.cz
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Tested-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org>	[4.2+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
If a bio has no data, such as the ones from blkdev_issue_flush(),
then we have nothing to protect.

This patch prevents a BUG_ON like the following:

kfree_debugcheck: out of range ptr ac1fa1d106742a5ah
kernel BUG at mm/slab.c:2773!
invalid opcode: 0000 [#1] SMP
Modules linked in: bcache
CPU: 0 PID: 4428 Comm: xfs_io Tainted: G        W       4.11.0-rc4-ext4-00041-g2ef0043-dirty #43
Hardware name: Virtuozzo KVM, BIOS seabios-1.7.5-11.vz7.4 04/01/2014
task: ffff880137786440 task.stack: ffffc90000ba8000
RIP: 0010:kfree_debugcheck+0x25/0x2a
RSP: 0018:ffffc90000babde0 EFLAGS: 00010082
RAX: 0000000000000034 RBX: ac1fa1d106742a5a RCX: 0000000000000007
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88013f3ccb40
RBP: ffffc90000babde8 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000fcb76420 R11: 00000000725172ed R12: 0000000000000282
R13: ffffffff8150e766 R14: ffff88013a145e00 R15: 0000000000000001
FS:  00007fb09384bf40(0000) GS:ffff88013f200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd0172f9e40 CR3: 0000000137fa9000 CR4: 00000000000006f0
Call Trace:
 kfree+0xc8/0x1b3
 bio_integrity_free+0xc3/0x16b
 bio_free+0x25/0x66
 bio_put+0x14/0x26
 blkdev_issue_flush+0x7a/0x85
 blkdev_fsync+0x35/0x42
 vfs_fsync_range+0x8e/0x9f
 vfs_fsync+0x1c/0x1e
 do_fsync+0x31/0x4a
 SyS_fsync+0x10/0x14
 entry_SYSCALL_64_fastpath+0x1f/0xc2
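
For illustration, a minimal sketch of the guard described above (`bio_wants_integrity` is a hypothetical helper; the real check sits in the bio-integrity setup path):

```c
#include <linux/bio.h>

/* A bio that carries no data (e.g. a bare flush from
 * blkdev_issue_flush()) has no integrity payload to allocate or
 * free, so skip integrity handling for it entirely. */
static bool bio_wants_integrity(struct bio *bio)
{
	return bio_has_data(bio);	/* data-less bios: nothing to protect */
}
```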

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
This prevents a deadlock that somehow results from the suspend() ->
forbid() -> resume() callchain.

[  125.266960] [drm] Initialized nouveau 1.3.1 20120801 for 0000:02:00.0 on minor 1
[  370.120872] INFO: task kworker/4:1:77 blocked for more than 120 seconds.
[  370.120920]       Tainted: G           O    4.12.0-rc3 #20
[  370.120947] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  370.120982] kworker/4:1     D13808    77      2 0x00000000
[  370.120998] Workqueue: pm pm_runtime_work
[  370.121004] Call Trace:
[  370.121018]  __schedule+0x2bf/0xb40
[  370.121025]  ? mark_held_locks+0x5f/0x90
[  370.121038]  schedule+0x3d/0x90
[  370.121044]  rpm_resume+0x107/0x870
[  370.121052]  ? finish_wait+0x90/0x90
[  370.121065]  ? pci_pm_runtime_resume+0xa0/0xa0
[  370.121070]  pm_runtime_forbid+0x4c/0x60
[  370.121129]  nouveau_pmops_runtime_suspend+0xaf/0xc0 [nouveau]
[  370.121139]  pci_pm_runtime_suspend+0x5f/0x170
[  370.121147]  ? pci_pm_runtime_resume+0xa0/0xa0
[  370.121152]  __rpm_callback+0xb9/0x1e0
[  370.121159]  ? pci_pm_runtime_resume+0xa0/0xa0
[  370.121166]  rpm_callback+0x24/0x80
[  370.121171]  ? pci_pm_runtime_resume+0xa0/0xa0
[  370.121176]  rpm_suspend+0x138/0x6e0
[  370.121192]  pm_runtime_work+0x7b/0xc0
[  370.121199]  process_one_work+0x253/0x6a0
[  370.121216]  worker_thread+0x4d/0x3b0
[  370.121229]  kthread+0x133/0x150
[  370.121234]  ? process_one_work+0x6a0/0x6a0
[  370.121238]  ? kthread_create_on_node+0x70/0x70
[  370.121246]  ret_from_fork+0x2a/0x40
[  370.121283]
               Showing all locks held in the system:
[  370.121291] 2 locks held by kworker/4:1/77:
[  370.121298]  #0:  ("pm"){.+.+.+}, at: [<ffffffffac0d3530>] process_one_work+0x1d0/0x6a0
[  370.121315]  #1:  ((&dev->power.work)){+.+.+.}, at: [<ffffffffac0d3530>] process_one_work+0x1d0/0x6a0
[  370.121330] 1 lock held by khungtaskd/81:
[  370.121333]  #0:  (tasklist_lock){.+.+..}, at: [<ffffffffac10fc8d>] debug_show_all_locks+0x3d/0x1a0
[  370.121355] 1 lock held by dmesg/1639:
[  370.121358]  #0:  (&user->lock){+.+.+.}, at: [<ffffffffac124b6d>] devkmsg_read+0x4d/0x360

[  370.121377] =============================================

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
During EEH recovery, a call to cxl_remove can result in a double
free_irq of the psl/slice interrupts. This can happen if
perst_reloads_same_image == 1 and the call to cxl_configure_adapter()
fails during the slot_reset callback. In such a case we see a kernel
oops with the following backtrace:

Oops: Kernel access of bad area, sig: 11 [#1]
Call Trace:
  free_irq+0x88/0xd0 (unreliable)
  cxl_unmap_irq+0x20/0x40 [cxl]
  cxl_native_release_psl_irq+0x78/0xd8 [cxl]
  pci_deconfigure_afu+0xac/0x110 [cxl]
  cxl_remove+0x104/0x210 [cxl]
  pci_device_remove+0x6c/0x110
  device_release_driver_internal+0x204/0x2e0
  pci_stop_bus_device+0xa0/0xd0
  pci_stop_and_remove_bus_device+0x28/0x40
  pci_hp_remove_devices+0xb0/0x150
  pci_hp_remove_devices+0x68/0x150
  eeh_handle_normal_event+0x140/0x580
  eeh_handle_event+0x174/0x360
  eeh_event_handler+0x1e8/0x1f0

This patch fixes the issue of double free_irq by checking that
variables that hold the virqs (err_hwirq, serr_hwirq, psl_virq) are
not '0' before un-mapping and resetting these variables to '0' when
they are un-mapped.
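
For illustration, a minimal sketch of the guard described above (`struct afu_irqs_sketch` and `release_serr_irq_once` are hypothetical stand-ins):

```c
#include <misc/cxl.h>

struct afu_irqs_sketch {		/* hypothetical container */
	unsigned int serr_virq;
};

/* Only unmap an interrupt that is actually mapped, then clear the
 * handle so a second trip through the same teardown path (the EEH
 * case) becomes a no-op. */
static void release_serr_irq_once(struct afu_irqs_sketch *afu, void *cookie)
{
	if (!afu->serr_virq)
		return;				/* already released */

	cxl_unmap_irq(afu->serr_virq, cookie);	/* unmap exactly once */
	afu->serr_virq = 0;			/* make re-entry safe */
}
```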

Cc: stable@vger.kernel.org
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
Under memory pressure, we start ageing pages, which amounts to parsing
the page tables. Since we don't want to allocate any extra level, we
pass NULL for our private allocation cache, which means that
stage2_get_pud() is allowed to fail. This results in the following
splat:

[ 1520.409577] Unable to handle kernel NULL pointer dereference at virtual address 00000008
[ 1520.417741] pgd = ffff810f52fef000
[ 1520.421201] [00000008] *pgd=0000010f636c5003, *pud=0000010f56f48003, *pmd=0000000000000000
[ 1520.429546] Internal error: Oops: 96000006 [#1] PREEMPT SMP
[ 1520.435156] Modules linked in:
[ 1520.438246] CPU: 15 PID: 53550 Comm: qemu-system-aar Tainted: G        W       4.12.0-rc4-00027-g1885c397eaec #7205
[ 1520.448705] Hardware name: FOXCONN R2-1221R-A4/C2U4N_MB, BIOS G31FB12A 10/26/2016
[ 1520.463726] task: ffff800ac5fb4e00 task.stack: ffff800ce04e0000
[ 1520.469666] PC is at stage2_get_pmd+0x34/0x110
[ 1520.474119] LR is at kvm_age_hva_handler+0x44/0xf0
[ 1520.478917] pc : [<ffff0000080b137c>] lr : [<ffff0000080b149c>] pstate: 40000145
[ 1520.486325] sp : ffff800ce04e33d0
[ 1520.489644] x29: ffff800ce04e33d0 x28: 0000000ffff40064
[ 1520.494967] x27: 0000ffff27e00000 x26: 0000000000000000
[ 1520.500289] x25: ffff81051ba65008 x24: 0000ffff40065000
[ 1520.505618] x23: 0000ffff40064000 x22: 0000000000000000
[ 1520.510947] x21: ffff810f52b20000 x20: 0000000000000000
[ 1520.516274] x19: 0000000058264000 x18: 0000000000000000
[ 1520.521603] x17: 0000ffffa6fe7438 x16: ffff000008278b70
[ 1520.526940] x15: 000028ccd8000000 x14: 0000000000000008
[ 1520.532264] x13: ffff7e0018298000 x12: 0000000000000002
[ 1520.537582] x11: ffff000009241b93 x10: 0000000000000940
[ 1520.542908] x9 : ffff0000092ef800 x8 : 0000000000000200
[ 1520.548229] x7 : ffff800ce04e36a8 x6 : 0000000000000000
[ 1520.553552] x5 : 0000000000000001 x4 : 0000000000000000
[ 1520.558873] x3 : 0000000000000000 x2 : 0000000000000008
[ 1520.571696] x1 : ffff000008fd5000 x0 : ffff0000080b149c
[ 1520.577039] Process qemu-system-aar (pid: 53550, stack limit = 0xffff800ce04e0000)
[...]
[ 1521.510735] [<ffff0000080b137c>] stage2_get_pmd+0x34/0x110
[ 1521.516221] [<ffff0000080b149c>] kvm_age_hva_handler+0x44/0xf0
[ 1521.522054] [<ffff0000080b0610>] handle_hva_to_gpa+0xb8/0xe8
[ 1521.527716] [<ffff0000080b3434>] kvm_age_hva+0x44/0xf0
[ 1521.532854] [<ffff0000080a58b0>] kvm_mmu_notifier_clear_flush_young+0x70/0xc0
[ 1521.539992] [<ffff000008238378>] __mmu_notifier_clear_flush_young+0x88/0xd0
[ 1521.546958] [<ffff00000821eca0>] page_referenced_one+0xf0/0x188
[ 1521.552881] [<ffff00000821f36c>] rmap_walk_anon+0xec/0x250
[ 1521.558370] [<ffff000008220f78>] rmap_walk+0x78/0xa0
[ 1521.563337] [<ffff000008221104>] page_referenced+0x164/0x180
[ 1521.569002] [<ffff0000081f1af0>] shrink_active_list+0x178/0x3b8
[ 1521.574922] [<ffff0000081f2058>] shrink_node_memcg+0x328/0x600
[ 1521.580758] [<ffff0000081f23f4>] shrink_node+0xc4/0x328
[ 1521.585986] [<ffff0000081f2718>] do_try_to_free_pages+0xc0/0x340
[ 1521.592000] [<ffff0000081f2a64>] try_to_free_pages+0xcc/0x240
[...]

The trivial fix is to handle this NULL pud value early, rather than
dereferencing it blindly.
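
For illustration, a minimal sketch of that early-out (the function shape is illustrative; stage2_get_pud() is declared here only for the sketch):

```c
#include <linux/kvm_host.h>
#include <asm/pgtable.h>

pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
		      phys_addr_t addr);	/* declared for the sketch */

/* With a NULL allocation cache (the page-ageing path),
 * stage2_get_pud() may fail, so bail out before using pud. */
static pmd_t *stage2_get_pmd_sketch(struct kvm *kvm,
				    struct kvm_mmu_memory_cache *cache,
				    phys_addr_t addr)
{
	pud_t *pud = stage2_get_pud(kvm, cache, addr);

	if (!pud)			/* the fix: handle NULL early */
		return NULL;

	/* ... continue the walk down to the pmd level ... */
	return pmd_offset(pud, addr);
}
```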

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
If a queue is stopped, we shouldn't dispatch requests into the driver
and hardware; unfortunately, that check was removed in commit bd166ef
("blk-mq-sched: add framework for MQ capable IO schedulers").

This patch fixes the issue by moving the check back into
__blk_mq_try_issue_directly().

This patch fixes a request use-after-free[1][2] during canceling of
requests of NVMe in nvme_dev_disable(), which can be triggered easily
during NVMe reset & remove testing.
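
For illustration, a minimal sketch of the restored check (`hw_queue_can_issue_directly` is a hypothetical wrapper around the condition described):

```c
#include <linux/blk-mq.h>
#include <linux/blkdev.h>

/* Never issue a request directly to a stopped or quiesced hw queue;
 * the caller should fall back to inserting the request instead. */
static bool hw_queue_can_issue_directly(struct blk_mq_hw_ctx *hctx,
					struct request_queue *q)
{
	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))
		return false;	/* insert, don't issue */
	return true;
}
```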

[1] oops kernel log when CONFIG_BLK_DEV_INTEGRITY is on
[  103.412969] BUG: unable to handle kernel NULL pointer dereference at 000000000000000a
[  103.412980] IP: bio_integrity_advance+0x48/0xf0
[  103.412981] PGD 275a88067
[  103.412981] P4D 275a88067
[  103.412982] PUD 276c43067
[  103.412983] PMD 0
[  103.412984]
[  103.412986] Oops: 0000 [#1] SMP
[  103.412989] Modules linked in: vfat fat intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc aesni_intel crypto_simd cryptd ipmi_ssif iTCO_wdt iTCO_vendor_support mxm_wmi glue_helper dcdbas ipmi_si mei_me pcspkr mei sg ipmi_devintf lpc_ich ipmi_msghandler shpchp acpi_power_meter wmi nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel nvme ahci nvme_core libahci libata tg3 i2c_core megaraid_sas ptp pps_core dm_mirror dm_region_hash dm_log dm_mod
[  103.413035] CPU: 0 PID: 102 Comm: kworker/0:2 Not tainted 4.11.0+ #1
[  103.413036] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.2.5 09/06/2016
[  103.413041] Workqueue: events nvme_remove_dead_ctrl_work [nvme]
[  103.413043] task: ffff9cc8775c8000 task.stack: ffffc033c252c000
[  103.413045] RIP: 0010:bio_integrity_advance+0x48/0xf0
[  103.413046] RSP: 0018:ffffc033c252fc10 EFLAGS: 00010202
[  103.413048] RAX: 0000000000000000 RBX: ffff9cc8720a8cc0 RCX: ffff9cca72958240
[  103.413049] RDX: ffff9cca72958000 RSI: 0000000000000008 RDI: ffff9cc872537f00
[  103.413049] RBP: ffffc033c252fc28 R08: 0000000000000000 R09: ffffffffb963a0d5
[  103.413050] R10: 000000000000063e R11: 0000000000000000 R12: ffff9cc8720a8d18
[  103.413051] R13: 0000000000001000 R14: ffff9cc872682e00 R15: 00000000fffffffb
[  103.413053] FS:  0000000000000000(0000) GS:ffff9cc877c00000(0000) knlGS:0000000000000000
[  103.413054] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  103.413055] CR2: 000000000000000a CR3: 0000000276c41000 CR4: 00000000001406f0
[  103.413056] Call Trace:
[  103.413063]  bio_advance+0x2a/0xe0
[  103.413067]  blk_update_request+0x76/0x330
[  103.413072]  blk_mq_end_request+0x1a/0x70
[  103.413074]  blk_mq_dispatch_rq_list+0x370/0x410
[  103.413076]  ? blk_mq_flush_busy_ctxs+0x94/0xe0
[  103.413080]  blk_mq_sched_dispatch_requests+0x173/0x1a0
[  103.413083]  __blk_mq_run_hw_queue+0x8e/0xa0
[  103.413085]  __blk_mq_delay_run_hw_queue+0x9d/0xa0
[  103.413088]  blk_mq_start_hw_queue+0x17/0x20
[  103.413090]  blk_mq_start_hw_queues+0x32/0x50
[  103.413095]  nvme_kill_queues+0x54/0x80 [nvme_core]
[  103.413097]  nvme_remove_dead_ctrl_work+0x1f/0x40 [nvme]
[  103.413103]  process_one_work+0x149/0x360
[  103.413105]  worker_thread+0x4d/0x3c0
[  103.413109]  kthread+0x109/0x140
[  103.413111]  ? rescuer_thread+0x380/0x380
[  103.413113]  ? kthread_park+0x60/0x60
[  103.413120]  ret_from_fork+0x2c/0x40
[  103.413121] Code: 08 4c 8b 63 50 48 8b 80 80 00 00 00 48 8b 90 d0 03 00 00 31 c0 48 83 ba 40 02 00 00 00 48 8d 8a 40 02 00 00 48 0f 45 c1 c1 ee 09 <0f> b6 48 0a 0f b6 40 09 41 89 f5 83 e9 09 41 d3 ed 44 0f af e8
[  103.413145] RIP: bio_integrity_advance+0x48/0xf0 RSP: ffffc033c252fc10
[  103.413146] CR2: 000000000000000a
[  103.413157] ---[ end trace cd6875d16eb5a11e ]---
[  103.455368] Kernel panic - not syncing: Fatal exception
[  103.459826] Kernel Offset: 0x37600000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[  103.850916] ---[ end Kernel panic - not syncing: Fatal exception
[  103.857637] sched: Unexpected reschedule of offline CPU#1!
[  103.863762] ------------[ cut here ]------------

[2] kernel hang in blk_mq_freeze_queue_wait() when CONFIG_BLK_DEV_INTEGRITY is off
[  247.129825] INFO: task nvme-test:1772 blocked for more than 120 seconds.
[  247.137311]       Not tainted 4.12.0-rc2.upstream+ #4
[  247.142954] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  247.151704] Call Trace:
[  247.154445]  __schedule+0x28a/0x880
[  247.158341]  schedule+0x36/0x80
[  247.161850]  blk_mq_freeze_queue_wait+0x4b/0xb0
[  247.166913]  ? remove_wait_queue+0x60/0x60
[  247.171485]  blk_freeze_queue+0x1a/0x20
[  247.175770]  blk_cleanup_queue+0x7f/0x140
[  247.180252]  nvme_ns_remove+0xa3/0xb0 [nvme_core]
[  247.185503]  nvme_remove_namespaces+0x32/0x50 [nvme_core]
[  247.191532]  nvme_uninit_ctrl+0x2d/0xa0 [nvme_core]
[  247.196977]  nvme_remove+0x70/0x110 [nvme]
[  247.201545]  pci_device_remove+0x39/0xc0
[  247.205927]  device_release_driver_internal+0x141/0x200
[  247.211761]  device_release_driver+0x12/0x20
[  247.216531]  pci_stop_bus_device+0x8c/0xa0
[  247.221104]  pci_stop_and_remove_bus_device_locked+0x1a/0x30
[  247.227420]  remove_store+0x7c/0x90
[  247.231320]  dev_attr_store+0x18/0x30
[  247.235409]  sysfs_kf_write+0x3a/0x50
[  247.239497]  kernfs_fop_write+0xff/0x180
[  247.243867]  __vfs_write+0x37/0x160
[  247.247757]  ? selinux_file_permission+0xe5/0x120
[  247.253011]  ? security_file_permission+0x3b/0xc0
[  247.258260]  vfs_write+0xb2/0x1b0
[  247.261964]  ? syscall_trace_enter+0x1d0/0x2b0
[  247.266924]  SyS_write+0x55/0xc0
[  247.270540]  do_syscall_64+0x67/0x150
[  247.274636]  entry_SYSCALL64_slow_path+0x25/0x25
[  247.279794] RIP: 0033:0x7f5c96740840
[  247.283785] RSP: 002b:00007ffd00e87ee8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[  247.292238] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f5c96740840
[  247.300194] RDX: 0000000000000002 RSI: 00007f5c97060000 RDI: 0000000000000001
[  247.308159] RBP: 00007f5c97060000 R08: 000000000000000a R09: 00007f5c97059740
[  247.316123] R10: 0000000000000001 R11: 0000000000000246 R12: 00007f5c96a14400
[  247.324087] R13: 0000000000000002 R14: 0000000000000001 R15: 0000000000000000
[  370.016340] INFO: task nvme-test:1772 blocked for more than 120 seconds.

Fixes: 12d7095 ("blk-mq: don't fail allocating driver tag for stopped hw queue")
Cc: stable@vger.kernel.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
If the device is asleep (no GT wakeref), we know the GPU is already idle.
If we add an early return, we can avoid touching registers and checking
hw state outside of the assumed GT wakelock. This prevents errors like
the following whilst debugging:

[ 2613.401647] RPM wakelock ref not held during HW access
[ 2613.401684] ------------[ cut here ]------------
[ 2613.401720] WARNING: CPU: 5 PID: 7739 at drivers/gpu/drm/i915/intel_drv.h:1787 gen6_read32+0x21f/0x2b0 [i915]
[ 2613.401731] Modules linked in: snd_hda_intel i915 vgem snd_hda_codec_hdmi x86_pkg_temp_thermal intel_powerclamp snd_hda_codec_realtek coretemp snd_hda_codec_generic crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm r8169 mii mei_me lpc_ich mei prime_numbers [last unloaded: i915]
[ 2613.401823] CPU: 5 PID: 7739 Comm: drv_missed_irq Tainted: G     U          4.12.0-rc2-CI-CI_DRM_421+ #1
[ 2613.401825] Hardware name: MSI MS-7924/Z97M-G43(MS-7924), BIOS V1.12 02/15/2016
[ 2613.401840] task: ffff880409e3a740 task.stack: ffffc900084dc000
[ 2613.401861] RIP: 0010:gen6_read32+0x21f/0x2b0 [i915]
[ 2613.401863] RSP: 0018:ffffc900084dfce8 EFLAGS: 00010292
[ 2613.401869] RAX: 000000000000002a RBX: ffff8804016a8000 RCX: 0000000000000006
[ 2613.401871] RDX: 0000000000000006 RSI: ffffffff81cbf2d9 RDI: ffffffff81c9e3a7
[ 2613.401874] RBP: ffffc900084dfd18 R08: ffff880409e3afc8 R09: 0000000000000000
[ 2613.401877] R10: 000000008a1c483f R11: 0000000000000000 R12: 000000000000209c
[ 2613.401879] R13: 0000000000000001 R14: ffff8804016a8000 R15: ffff8804016ac150
[ 2613.401882] FS:  00007f39ef3dd8c0(0000) GS:ffff88041fb40000(0000) knlGS:0000000000000000
[ 2613.401885] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2613.401887] CR2: 00000000023717c8 CR3: 00000002e7b34000 CR4: 00000000001406e0
[ 2613.401889] Call Trace:
[ 2613.401912]  intel_engine_is_idle+0x76/0x90 [i915]
[ 2613.401931]  i915_gem_wait_for_idle+0xe6/0x1e0 [i915]
[ 2613.401951]  fault_irq_set+0x40/0x90 [i915]
[ 2613.401970]  i915_ring_test_irq_set+0x42/0x50 [i915]
[ 2613.401976]  simple_attr_write+0xc7/0xe0
[ 2613.401981]  full_proxy_write+0x4f/0x70
[ 2613.401987]  __vfs_write+0x23/0x120
[ 2613.401992]  ? rcu_read_lock_sched_held+0x75/0x80
[ 2613.401996]  ? rcu_sync_lockdep_assert+0x2a/0x50
[ 2613.401999]  ? __sb_start_write+0xfa/0x1f0
[ 2613.402004]  vfs_write+0xc5/0x1d0
[ 2613.402008]  ? trace_hardirqs_on_caller+0xe7/0x1c0
[ 2613.402013]  SyS_write+0x44/0xb0
[ 2613.402020]  entry_SYSCALL_64_fastpath+0x1c/0xb1
[ 2613.402022] RIP: 0033:0x7f39eded6670
[ 2613.402025] RSP: 002b:00007fffdcdcb1a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 2613.402030] RAX: ffffffffffffffda RBX: ffffffff81470203 RCX: 00007f39eded6670
[ 2613.402033] RDX: 0000000000000001 RSI: 000000000041bc33 RDI: 0000000000000006
[ 2613.402036] RBP: ffffc900084dff88 R08: 00007f39ef3dd8c0 R09: 0000000000000001
[ 2613.402038] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000041bc33
[ 2613.402041] R13: 0000000000000006 R14: 0000000000000000 R15: 0000000000000000
[ 2613.402046]  ? __this_cpu_preempt_check+0x13/0x20
[ 2613.402052] Code: 01 9b fa e0 0f ff e9 28 fe ff ff 80 3d 6a dd 0e 00 00 0f 85 29 fe ff ff 48 c7 c7 48 19 29 a0 c6 05 56 dd 0e 00 01 e8 da 9a fa e0 <0f> ff e9 0f fe ff ff b9 01 00 00 00 ba 01 00 00 00 44 89 e6 48
[ 2613.402199] ---[ end trace 31f0cfa93ab632bf ]---
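
For illustration, a minimal sketch of the early return described at the top of this commit message (`wait_for_idle_sketch` is a hypothetical wrapper; placement in i915_gem_wait_for_idle() follows the trace above):

```c
#include "i915_drv.h"	/* driver-local header, for the sketch */

/* If the GT wakeref is not held, the GPU is already idle, so skip
 * touching hardware registers entirely. */
static int wait_for_idle_sketch(struct drm_i915_private *dev_priv)
{
	if (!READ_ONCE(dev_priv->gt.awake))
		return 0;	/* device asleep => nothing to wait for */

	/* ... proceed with the full engine-idle checks ... */
	return 0;
}
```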

Fixes: 25112b6 ("drm/i915: Wait for all engines to be idle as part of i915_gem_wait_for_idle()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170530121334.17364-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
(cherry picked from commit 863e9fd)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
Allow intel_engine_is_idle() to be called outside of the GT wakeref by
acquiring the device runtime pm for ourselves. This allows the function
to act as a check after we assume the engine is idle and have released
the GT wakeref held whilst we have requests. At the moment we do not
call it outside of an awake context, but taking the wakeref as required
makes it more convenient to use for quick debugging in the future.
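
For illustration, a minimal sketch of that pattern (`probe_engine_idle` and `engine_is_idle_sketch` are hypothetical stand-ins):

```c
#include "i915_drv.h"	/* driver-local header, for the sketch */

bool probe_engine_idle(struct intel_engine_cs *engine);	/* hypothetical */

/* Take a runtime-PM wakeref for ourselves around the hardware probe,
 * so the check is valid even when called outside the GT wakeref. */
static bool engine_is_idle_sketch(struct drm_i915_private *dev_priv,
				  struct intel_engine_cs *engine)
{
	bool idle;

	intel_runtime_pm_get(dev_priv);		/* wake the device */
	idle = probe_engine_idle(engine);	/* safe to read registers */
	intel_runtime_pm_put(dev_priv);

	return idle;
}
```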

[ 2613.401647] RPM wakelock ref not held during HW access
[ 2613.401684] ------------[ cut here ]------------
[ 2613.401720] WARNING: CPU: 5 PID: 7739 at drivers/gpu/drm/i915/intel_drv.h:1787 gen6_read32+0x21f/0x2b0 [i915]
[ 2613.401731] Modules linked in: snd_hda_intel i915 vgem snd_hda_codec_hdmi x86_pkg_temp_thermal intel_powerclamp snd_hda_codec_realtek coretemp snd_hda_codec_generic crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm r8169 mii mei_me lpc_ich mei prime_numbers [last unloaded: i915]
[ 2613.401823] CPU: 5 PID: 7739 Comm: drv_missed_irq Tainted: G     U          4.12.0-rc2-CI-CI_DRM_421+ #1
[ 2613.401825] Hardware name: MSI MS-7924/Z97M-G43(MS-7924), BIOS V1.12 02/15/2016
[ 2613.401840] task: ffff880409e3a740 task.stack: ffffc900084dc000
[ 2613.401861] RIP: 0010:gen6_read32+0x21f/0x2b0 [i915]
[ 2613.401863] RSP: 0018:ffffc900084dfce8 EFLAGS: 00010292
[ 2613.401869] RAX: 000000000000002a RBX: ffff8804016a8000 RCX: 0000000000000006
[ 2613.401871] RDX: 0000000000000006 RSI: ffffffff81cbf2d9 RDI: ffffffff81c9e3a7
[ 2613.401874] RBP: ffffc900084dfd18 R08: ffff880409e3afc8 R09: 0000000000000000
[ 2613.401877] R10: 000000008a1c483f R11: 0000000000000000 R12: 000000000000209c
[ 2613.401879] R13: 0000000000000001 R14: ffff8804016a8000 R15: ffff8804016ac150
[ 2613.401882] FS:  00007f39ef3dd8c0(0000) GS:ffff88041fb40000(0000) knlGS:0000000000000000
[ 2613.401885] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2613.401887] CR2: 00000000023717c8 CR3: 00000002e7b34000 CR4: 00000000001406e0
[ 2613.401889] Call Trace:
[ 2613.401912]  intel_engine_is_idle+0x76/0x90 [i915]
[ 2613.401931]  i915_gem_wait_for_idle+0xe6/0x1e0 [i915]
[ 2613.401951]  fault_irq_set+0x40/0x90 [i915]
[ 2613.401970]  i915_ring_test_irq_set+0x42/0x50 [i915]
[ 2613.401976]  simple_attr_write+0xc7/0xe0
[ 2613.401981]  full_proxy_write+0x4f/0x70
[ 2613.401987]  __vfs_write+0x23/0x120
[ 2613.401992]  ? rcu_read_lock_sched_held+0x75/0x80
[ 2613.401996]  ? rcu_sync_lockdep_assert+0x2a/0x50
[ 2613.401999]  ? __sb_start_write+0xfa/0x1f0
[ 2613.402004]  vfs_write+0xc5/0x1d0
[ 2613.402008]  ? trace_hardirqs_on_caller+0xe7/0x1c0
[ 2613.402013]  SyS_write+0x44/0xb0
[ 2613.402020]  entry_SYSCALL_64_fastpath+0x1c/0xb1
[ 2613.402022] RIP: 0033:0x7f39eded6670
[ 2613.402025] RSP: 002b:00007fffdcdcb1a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 2613.402030] RAX: ffffffffffffffda RBX: ffffffff81470203 RCX: 00007f39eded6670
[ 2613.402033] RDX: 0000000000000001 RSI: 000000000041bc33 RDI: 0000000000000006
[ 2613.402036] RBP: ffffc900084dff88 R08: 00007f39ef3dd8c0 R09: 0000000000000001
[ 2613.402038] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000041bc33
[ 2613.402041] R13: 0000000000000006 R14: 0000000000000000 R15: 0000000000000000
[ 2613.402046]  ? __this_cpu_preempt_check+0x13/0x20
[ 2613.402052] Code: 01 9b fa e0 0f ff e9 28 fe ff ff 80 3d 6a dd 0e 00 00 0f 85 29 fe ff ff 48 c7 c7 48 19 29 a0 c6 05 56 dd 0e 00 01 e8 da 9a fa e0 <0f> ff e9 0f fe ff ff b9 01 00 00 00 ba 01 00 00 00 44 89 e6 48
[ 2613.402199] ---[ end trace 31f0cfa93ab632bf ]---

Fixes: 5400367 ("drm/i915: Ensure the engine is idle before manually changing HWS")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170530121334.17364-2-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
(cherry picked from commit a091d4e)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
Since

commit bac2a90
Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Date:   Wed Jan 21 02:17:42 2015 +0100

    PCI / PM: Avoid resuming PCI devices during system suspend

PCI devices will default to allowing the system suspend complete
optimization where devices are not woken up during system suspend if
they were already runtime suspended. This however breaks the i915/HDA
drivers for two reasons:

- The i915 driver has system suspend specific steps that it needs to
  run, that bring the device to a different state than its runtime
  suspended state.

- The HDA driver's suspend handler requires power that it will request
  from the i915 driver's power domain handler. This in turn requires the
  i915 driver to runtime resume itself, but this won't be possible if the
  suspend complete optimization is in effect: in this case the i915
  runtime PM is disabled and trying to get an RPM reference returns
  -EACCES.

Solve this by requiring the PCI/PM core to resume the device during
system suspend which in effect disables the suspend complete optimization.
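
For illustration, a minimal sketch of that fix, per the v2 note below (`i915_disable_suspend_complete_optimization` is a hypothetical name):

```c
#include <linux/pci.h>

/* Mark the PCI device so the PM core resumes it during system
 * suspend, keeping the suspend-complete optimization disabled
 * for i915. */
static void i915_disable_suspend_complete_optimization(struct pci_dev *pdev)
{
	pdev->needs_resume = true;
}
```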

Regardless of the above commit the optimization stayed disabled for DRM
devices until

commit d14d2a8
Author: Lukas Wunner <lukas@wunner.de>
Date:   Wed Jun 8 12:49:29 2016 +0200

    drm: Remove dev_pm_ops from drm_class

so this patch is in practice a fix for that commit. Another reason the
bug stayed hidden for so long is that the optimization for a device is
disabled if it's disabled for any of its child devices. i915 may have a
backlight device as its child which doesn't support runtime PM and so
doesn't allow the optimization either. So if this backlight device got
registered, the bug stayed hidden.

Credits to Marta, Tomi and David, who enabled pstore logging that
caught one instance of this issue across a suspend/resume-to-ram, and
to Ville, who remembered that the optimization was enabled for some
devices at one point.

The first WARN triggered by the problem:

[ 6250.746445] WARNING: CPU: 2 PID: 17384 at drivers/gpu/drm/i915/intel_runtime_pm.c:2846 intel_runtime_pm_get+0x6b/0xd0 [i915]
[ 6250.746448] pm_runtime_get_sync() failed: -13
[ 6250.746451] Modules linked in: snd_hda_intel i915 vgem snd_hda_codec_hdmi x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul
snd_hda_codec_realtek snd_hda_codec_generic ghash_clmulni_intel e1000e snd_hda_codec snd_hwdep snd_hda_core ptp mei_me pps_core snd_pcm lpc_ich mei prime_
numbers i2c_hid i2c_designware_platform i2c_designware_core [last unloaded: i915]
[ 6250.746512] CPU: 2 PID: 17384 Comm: kworker/u8:0 Tainted: G     U  W       4.11.0-rc5-CI-CI_DRM_334+ #1
[ 6250.746515] Hardware name:                  /NUC5i5RYB, BIOS RYBDWi35.86A.0362.2017.0118.0940 01/18/2017
[ 6250.746521] Workqueue: events_unbound async_run_entry_fn
[ 6250.746525] Call Trace:
[ 6250.746530]  dump_stack+0x67/0x92
[ 6250.746536]  __warn+0xc6/0xe0
[ 6250.746542]  ? pci_restore_standard_config+0x40/0x40
[ 6250.746546]  warn_slowpath_fmt+0x46/0x50
[ 6250.746553]  ? __pm_runtime_resume+0x56/0x80
[ 6250.746584]  intel_runtime_pm_get+0x6b/0xd0 [i915]
[ 6250.746610]  intel_display_power_get+0x1b/0x40 [i915]
[ 6250.746646]  i915_audio_component_get_power+0x15/0x20 [i915]
[ 6250.746654]  snd_hdac_display_power+0xc8/0x110 [snd_hda_core]
[ 6250.746661]  azx_runtime_resume+0x218/0x280 [snd_hda_intel]
[ 6250.746667]  pci_pm_runtime_resume+0x76/0xa0
[ 6250.746672]  __rpm_callback+0xb4/0x1f0
[ 6250.746677]  ? pci_restore_standard_config+0x40/0x40
[ 6250.746682]  rpm_callback+0x1f/0x80
[ 6250.746686]  ? pci_restore_standard_config+0x40/0x40
[ 6250.746690]  rpm_resume+0x4ba/0x740
[ 6250.746698]  __pm_runtime_resume+0x49/0x80
[ 6250.746703]  pci_pm_suspend+0x57/0x140
[ 6250.746709]  dpm_run_callback+0x6f/0x330
[ 6250.746713]  ? pci_pm_freeze+0xe0/0xe0
[ 6250.746718]  __device_suspend+0xf9/0x370
[ 6250.746724]  ? dpm_watchdog_set+0x60/0x60
[ 6250.746730]  async_suspend+0x1a/0x90
[ 6250.746735]  async_run_entry_fn+0x34/0x160
[ 6250.746741]  process_one_work+0x1f2/0x6d0
[ 6250.746749]  worker_thread+0x49/0x4a0
[ 6250.746755]  kthread+0x107/0x140
[ 6250.746759]  ? process_one_work+0x6d0/0x6d0
[ 6250.746763]  ? kthread_create_on_node+0x40/0x40
[ 6250.746768]  ret_from_fork+0x2e/0x40
[ 6250.746778] ---[ end trace 102a62fd2160f5e6 ]---

v2:
- Use the new pci_dev->needs_resume flag, to avoid any overhead during
  the ->pm_prepare hook. (Rafael)

v3:
- Update commit message to reference the actual regressing commit.
  (Lukas)

v4:
- Rebase on v4 of patch 1/2.

Fixes: d14d2a8 ("drm: Remove dev_pm_ops from drm_class")
References: https://bugs.freedesktop.org/show_bug.cgi?id=100378
References: https://bugs.freedesktop.org/show_bug.cgi?id=100770
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Marta Lofstedt <marta.lofstedt@intel.com>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Lukas Wunner <lukas@wunner.de>
Cc: linux-pci@vger.kernel.org
Cc: <stable@vger.kernel.org> # v4.10.x: 4d071c3 - PCI/PM: Add needs_resume flag
Cc: <stable@vger.kernel.org> # v4.10.x
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reported-and-tested-by: Marta Lofstedt <marta.lofstedt@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1493726649-32094-2-git-send-email-imre.deak@intel.com
(cherry picked from commit adfdf85)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
…_timer

I have encountered a NULL pointer dereference in
throtl_schedule_pending_timer:
  [  413.735396] BUG: unable to handle kernel NULL pointer dereference at 0000000000000038
  [  413.735535] IP: [<ffffffff812ebbbf>] throtl_schedule_pending_timer+0x3f/0x210
  [  413.735643] PGD 22c8cf067 PUD 22cb34067 PMD 0
  [  413.735713] Oops: 0000 [#1] SMP
  ......

This is caused by the following case:
  blk_throtl_bio
    throtl_schedule_next_dispatch  <= sq is top level one without parent
      throtl_schedule_pending_timer
        sq_to_tg(sq)->td->throtl_slice  <= sq_to_tg(sq) returns NULL

Fix it by using sq_to_td instead of sq_to_tg(sq)->td, which will always
return a valid td.
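A minimal sketch of the shape of the fix, with the surrounding
blk-throttle code simplified:

  /* Before: sq_to_tg() returns NULL for the top-level service queue,
   * so dereferencing ->td crashes. */
  expires = jiffies + sq_to_tg(sq)->td->throtl_slice;

  /* After: sq_to_td() resolves the throtl_data for both top-level
   * and per-group service queues. */
  expires = jiffies + sq_to_td(sq)->throtl_slice;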

Fixes: 297e3d8 ("blk-throttle: make throtl_slice tunable")
Signed-off-by: Joseph Qi <qijiang.qj@alibaba-inc.com>
Reviewed-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
key_update() freed the key_preparsed_payload even if it was not
initialized first.  This would cause a crash if userspace called
keyctl_update() on a key with type like "asymmetric" that has a
->preparse() method but not an ->update() method.  Possibly it could
even be triggered for other key types by racing with keyctl_setperm() to
make the KEY_NEED_WRITE check fail (the permission was already checked,
so normally it wouldn't fail there).

Reproducer with key type "asymmetric", given a valid cert.der:

keyctl new_session
keyid=$(keyctl padd asymmetric desc @s < cert.der)
keyctl setperm $keyid 0x3f000000
keyctl update $keyid data

[  150.686666] BUG: unable to handle kernel NULL pointer dereference at 0000000000000001
[  150.687601] IP: asymmetric_key_free_kids+0x12/0x30
[  150.688139] PGD 38a3d067
[  150.688141] PUD 3b3de067
[  150.688447] PMD 0
[  150.688745]
[  150.689160] Oops: 0000 [#1] SMP
[  150.689455] Modules linked in:
[  150.689769] CPU: 1 PID: 2478 Comm: keyctl Not tainted 4.11.0-rc4-xfstests-00187-ga9f6b6b8cd2f #742
[  150.690916] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-20170228_101828-anatol 04/01/2014
[  150.692199] task: ffff88003b30c480 task.stack: ffffc90000350000
[  150.692952] RIP: 0010:asymmetric_key_free_kids+0x12/0x30
[  150.693556] RSP: 0018:ffffc90000353e58 EFLAGS: 00010202
[  150.694142] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000004
[  150.694845] RDX: ffffffff81ee3920 RSI: ffff88003d4b0700 RDI: 0000000000000001
[  150.697569] RBP: ffffc90000353e60 R08: ffff88003d5d2140 R09: 0000000000000000
[  150.702483] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
[  150.707393] R13: 0000000000000004 R14: ffff880038a4d2d8 R15: 000000000040411f
[  150.709720] FS:  00007fcbcee35700(0000) GS:ffff88003fd00000(0000) knlGS:0000000000000000
[  150.711504] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  150.712733] CR2: 0000000000000001 CR3: 0000000039eab000 CR4: 00000000003406e0
[  150.714487] Call Trace:
[  150.714975]  asymmetric_key_free_preparse+0x2f/0x40
[  150.715907]  key_update+0xf7/0x140
[  150.716560]  ? key_default_cmp+0x20/0x20
[  150.717319]  keyctl_update_key+0xb0/0xe0
[  150.718066]  SyS_keyctl+0x109/0x130
[  150.718663]  entry_SYSCALL_64_fastpath+0x1f/0xc2
[  150.719440] RIP: 0033:0x7fcbce75ff19
[  150.719926] RSP: 002b:00007ffd5d167088 EFLAGS: 00000206 ORIG_RAX: 00000000000000fa
[  150.720918] RAX: ffffffffffffffda RBX: 0000000000404d80 RCX: 00007fcbce75ff19
[  150.721874] RDX: 00007ffd5d16785e RSI: 000000002866cd36 RDI: 0000000000000002
[  150.722827] RBP: 0000000000000006 R08: 000000002866cd36 R09: 00007ffd5d16785e
[  150.723781] R10: 0000000000000004 R11: 0000000000000206 R12: 0000000000404d80
[  150.724650] R13: 00007ffd5d16784d R14: 00007ffd5d167238 R15: 000000000040411f
[  150.725447] Code: 83 c4 08 31 c0 5b 41 5c 41 5d 41 5e 41 5f 5d c3 66 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 85 ff 74 23 55 48 89 e5 53 48 89 fb <48> 8b 3f e8 06 21 c5 ff 48 8b 7b 08 e8 fd 20 c5 ff 48 89 df e8
[  150.727489] RIP: asymmetric_key_free_kids+0x12/0x30 RSP: ffffc90000353e58
[  150.728117] CR2: 0000000000000001
[  150.728430] ---[ end trace f7f8fe1da2d5ae8d ]---

Fixes: 4d8c025 ("KEYS: Call ->free_preparse() even after ->preparse() returns an error")
Cc: stable@vger.kernel.org # 3.17+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
During EEH recovery, a call to cxl_remove can result in a double
free_irq of the PSL and slice interrupts. This can happen if
perst_reloads_same_image == 1 and the call to cxl_configure_adapter()
fails during the slot_reset callback. In such a case we see a kernel
oops with the following back-trace:

Oops: Kernel access of bad area, sig: 11 [#1]
Call Trace:
  free_irq+0x88/0xd0 (unreliable)
  cxl_unmap_irq+0x20/0x40 [cxl]
  cxl_native_release_psl_irq+0x78/0xd8 [cxl]
  pci_deconfigure_afu+0xac/0x110 [cxl]
  cxl_remove+0x104/0x210 [cxl]
  pci_device_remove+0x6c/0x110
  device_release_driver_internal+0x204/0x2e0
  pci_stop_bus_device+0xa0/0xd0
  pci_stop_and_remove_bus_device+0x28/0x40
  pci_hp_remove_devices+0xb0/0x150
  pci_hp_remove_devices+0x68/0x150
  eeh_handle_normal_event+0x140/0x580
  eeh_handle_event+0x174/0x360
  eeh_event_handler+0x1e8/0x1f0

This patch fixes the double free_irq by checking that the variables
holding the virqs (err_hwirq, serr_hwirq, psl_virq) are not '0' before
un-mapping them, and by resetting these variables to '0' once they are
un-mapped.
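A hedged sketch of the guard pattern described above (the field names
are illustrative, not the exact cxl structure layout):

  if (afu->serr_hwirq) {
          cxl_unmap_irq(afu->serr_hwirq, afu);
          afu->serr_hwirq = 0;    /* make a second release a no-op */
  }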

Cc: stable@vger.kernel.org
Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
A NULL-pointer dereference bug in gadgetfs was uncovered by syzkaller:

> kasan: GPF could be caused by NULL-ptr deref or user memory access
> general protection fault: 0000 [#1] SMP KASAN
> Dumping ftrace buffer:
>    (ftrace buffer empty)
> Modules linked in:
> CPU: 2 PID: 4820 Comm: syz-executor0 Not tainted 4.12.0-rc4+ #5
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> task: ffff880039542dc0 task.stack: ffff88003bdd0000
> RIP: 0010:__list_del_entry_valid+0x7e/0x170 lib/list_debug.c:51
> RSP: 0018:ffff88003bdd6e50 EFLAGS: 00010246
> RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000010000
> RDX: 0000000000000000 RSI: ffffffff86504948 RDI: ffffffff86504950
> RBP: ffff88003bdd6e68 R08: ffff880039542dc0 R09: ffffffff8778ce00
> R10: ffff88003bdd6e68 R11: dffffc0000000000 R12: 0000000000000000
> R13: dffffc0000000000 R14: 1ffff100077badd2 R15: ffffffff864d2e40
> FS:  0000000000000000(0000) GS:ffff88006dc00000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 000000002014aff9 CR3: 0000000006022000 CR4: 00000000000006e0
> Call Trace:
>  __list_del_entry include/linux/list.h:116 [inline]
>  list_del include/linux/list.h:124 [inline]
>  usb_gadget_unregister_driver+0x166/0x4c0 drivers/usb/gadget/udc/core.c:1387
>  dev_release+0x80/0x160 drivers/usb/gadget/legacy/inode.c:1187
>  __fput+0x332/0x7f0 fs/file_table.c:209
>  ____fput+0x15/0x20 fs/file_table.c:245
>  task_work_run+0x19b/0x270 kernel/task_work.c:116
>  exit_task_work include/linux/task_work.h:21 [inline]
>  do_exit+0x18a3/0x2820 kernel/exit.c:878
>  do_group_exit+0x149/0x420 kernel/exit.c:982
>  get_signal+0x77f/0x1780 kernel/signal.c:2318
>  do_signal+0xd2/0x2130 arch/x86/kernel/signal.c:808
>  exit_to_usermode_loop+0x1a7/0x240 arch/x86/entry/common.c:157
>  prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
>  syscall_return_slowpath+0x3ba/0x410 arch/x86/entry/common.c:263
>  entry_SYSCALL_64_fastpath+0xbc/0xbe
> RIP: 0033:0x4461f9
> RSP: 002b:00007fdac2b1ecf8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
> RAX: fffffffffffffe00 RBX: 00000000007080c8 RCX: 00000000004461f9
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000007080c8
> RBP: 00000000007080a8 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: 00007fdac2b1f9c0 R15: 00007fdac2b1f700
> Code: 00 00 00 00 ad de 49 39 c4 74 6a 48 b8 00 02 00 00 00 00 ad de
> 48 89 da 48 39 c3 74 74 48 c1 ea 03 48 b8 00 00 00 00 00 fc ff df <80>
> 3c 02 00 0f 85 92 00 00 00 48 8b 13 48 39 f2 75 66 49 8d 7c
> RIP: __list_del_entry_valid+0x7e/0x170 lib/list_debug.c:51 RSP: ffff88003bdd6e50
> ---[ end trace 30e94b1eec4831c8 ]---
> Kernel panic - not syncing: Fatal exception

The bug was caused by dev_release() failing to turn off its
gadget_registered flag after unregistering the gadget driver.  As a
result, when a later user closed the device file before writing a
valid set of descriptors, dev_release() thought the gadget had been
registered and tried to unregister it, even though it had not been.
This led to the NULL pointer dereference.

The fix is simple: turn off the flag when the gadget is unregistered.
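A minimal sketch of the fix in the unregister path, with the
surrounding gadgetfs code simplified:

  if (dev->gadget_registered) {
          usb_gadget_unregister_driver(&gadgetfs_driver);
          dev->gadget_registered = false;  /* the missing reset */
  }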

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-and-tested-by: Andrey Konovalov <andreyknvl@google.com>
CC: <stable@vger.kernel.org>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
The inode destruction path for the 'dax' device filesystem incorrectly
assumes that the inode was initialized through 'alloc_dax()'. However,
if someone attempts to directly mount the dax filesystem with 'mount -t
dax dax mnt' that will bypass 'alloc_dax()' and the following failure
signatures may occur as a result:

 kill_dax() must be called before final iput()
 WARNING: CPU: 2 PID: 1188 at drivers/dax/super.c:243 dax_destroy_inode+0x48/0x50
 RIP: 0010:dax_destroy_inode+0x48/0x50
 Call Trace:
  destroy_inode+0x3b/0x60
  evict+0x139/0x1c0
  iput+0x1f9/0x2d0
  dentry_unlink_inode+0xc3/0x160
  __dentry_kill+0xcf/0x180
  ? dput+0x37/0x3b0
  dput+0x3a3/0x3b0
  do_one_tree+0x36/0x40
  shrink_dcache_for_umount+0x2d/0x90
  generic_shutdown_super+0x1f/0x120
  kill_anon_super+0x12/0x20
  deactivate_locked_super+0x43/0x70
  deactivate_super+0x4e/0x60

 general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC
 RIP: 0010:kfree+0x6d/0x290
 Call Trace:
  <IRQ>
  dax_i_callback+0x22/0x60
  ? dax_destroy_inode+0x50/0x50
  rcu_process_callbacks+0x298/0x740

 ida_remove called for id=0 which is not allocated.
 WARNING: CPU: 0 PID: 0 at lib/idr.c:383 ida_remove+0x110/0x120
 [..]
 Call Trace:
  <IRQ>
  ida_simple_remove+0x2b/0x50
  ? dax_destroy_inode+0x50/0x50
  dax_i_callback+0x3c/0x60
  rcu_process_callbacks+0x298/0x740

Add missing initialization of the 'struct dax_device' and inode so that
the destruction path does not kfree() or ida_simple_remove()
uninitialized data.
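A rough sketch of the pattern, with details simplified: mark the inode
at allocation time, and make the RCU-deferred destructor tolerate an
inode that was never completed by alloc_dax():

  static struct inode *dax_alloc_inode(struct super_block *sb)
  {
          struct dax_device *dax_dev;
          struct inode *inode;

          dax_dev = kmem_cache_alloc(dax_cache, GFP_KERNEL);
          if (!dax_dev)
                  return NULL;
          inode = &dax_dev->inode;
          inode->i_rdev = 0;      /* not yet backed by alloc_dax() */
          return inode;
  }

  static void dax_i_callback(struct rcu_head *head)
  {
          struct inode *inode = container_of(head, struct inode, i_rcu);
          struct dax_device *dax_dev = to_dax_dev(inode);

          /* Only tear down state that alloc_dax() actually set up. */
          if (inode->i_rdev)
                  ida_simple_remove(&dax_minor_ida,
                                    MINOR(inode->i_rdev));
          kmem_cache_free(dax_cache, dax_dev);
  }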

Fixes: 7b6be84 ("dax: refactor dax-fs into a generic provider of 'struct dax_device' instances")
Reported-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
stevenjswanson pushed a commit that referenced this issue Jul 31, 2017
Yuval Mintz says:

====================
bnx2x: Fix malicious VFs indication

It was discovered that for a VF there's a simple [yet uncommon] scenario
which would cause device firmware to declare that VF as malicious:
add a vlan interface on top of a VF and disable txvlan offloading for
that VF [causing the VF to transmit packets where the vlan is in the payload].

Patch #1 corrects driver transmission to prevent this issue.
Patch #2 is a by-product correcting PF behavior once a VF is declared
malicious.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Syzkaller report this:

pf: pf version 1.04, major 47, cluster 64, nice 0
pf: No ATAPI disk detected
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN PTI
CPU: 0 PID: 9887 Comm: syz-executor.0 Tainted: G         C        5.1.0-rc3+ #8
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
RIP: 0010:pf_init+0x7af/0x1000 [pf]
Code: 46 77 d2 48 89 d8 48 c1 e8 03 80 3c 28 00 74 08 48 89 df e8 03 25 a6 d2 4c 8b 23 49 8d bc 24 80 05 00 00 48 89 f8 48 c1 e8 03 <80> 3c 28 00 74 05 e8 e6 24 a6 d2 49 8b bc 24 80 05 00 00 e8 79 34
RSP: 0018:ffff8881abcbf998 EFLAGS: 00010202
RAX: 00000000000000b0 RBX: ffffffffc1e4a8a8 RCX: ffffffffaec50788
RDX: 0000000000039b10 RSI: ffffc9000153c000 RDI: 0000000000000580
RBP: dffffc0000000000 R08: ffffed103ee44e59 R09: ffffed103ee44e59
R10: 0000000000000001 R11: ffffed103ee44e58 R12: 0000000000000000
R13: ffffffffc1e4b028 R14: 0000000000000000 R15: 0000000000000020
FS:  00007f1b78a91700(0000) GS:ffff8881f7200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6d72b207f8 CR3: 00000001d5790004 CR4: 00000000007606f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 ? 0xffffffffc1e50000
 do_one_initcall+0xbc/0x47d init/main.c:901
 do_init_module+0x1b5/0x547 kernel/module.c:3456
 load_module+0x6405/0x8c10 kernel/module.c:3804
 __do_sys_finit_module+0x162/0x190 kernel/module.c:3898
 do_syscall_64+0x9f/0x450 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462e99
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1b78a90c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462e99
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007f1b78a90c70 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f1b78a916bc
R13: 00000000004bcefa R14: 00000000006f6fb0 R15: 0000000000000004
Modules linked in: pf(+) paride gpio_tps65218 tps65218 i2c_cht_wc ati_remote dc395x act_meta_skbtcindex act_ife ife ecdh_generic rc_xbox_dvd sky81452_regulator v4l2_fwnode leds_blinkm snd_usb_hiface comedi(C) aes_ti slhc cfi_cmdset_0020 mtd cfi_util sx8654 mdio_gpio of_mdio fixed_phy mdio_bitbang libphy alcor_pci matrix_keymap hid_uclogic usbhid scsi_transport_fc videobuf2_v4l2 videobuf2_dma_sg snd_soc_pcm179x_spi snd_soc_pcm179x_codec i2c_demux_pinctrl mdev snd_indigodj isl6405 mii enc28j60 cmac adt7316_i2c(C) adt7316(C) fmc_trivial fmc nf_reject_ipv4 authenc rc_dtt200u rtc_ds1672 dvb_usb_dibusb_mc dvb_usb_dibusb_mc_common dib3000mc dibx000_common dvb_usb_dibusb_common dvb_usb dvb_core videobuf2_common videobuf2_vmalloc videobuf2_memops regulator_haptic adf7242 mac802154 ieee802154 s5h1409 da9034_ts snd_intel8x0m wmi cx24120 usbcore sdhci_cadence sdhci_pltfm sdhci mmc_core joydev i2c_algo_bit scsi_transport_iscsi iscsi_boot_sysfs ves1820 lockd grace nfs_acl auth_rpcgss sunrp
 c
 ip_vs snd_soc_adau7002 snd_cs4281 snd_rawmidi gameport snd_opl3_lib snd_seq_device snd_hwdep snd_ac97_codec ad7418 hid_primax hid snd_soc_cs4265 snd_soc_core snd_pcm_dmaengine snd_pcm snd_timer ac97_bus snd_compress snd soundcore ti_adc108s102 eeprom_93cx6 i2c_algo_pca mlxreg_hotplug st_pressure st_sensors industrialio_triggered_buffer kfifo_buf industrialio v4l2_common videodev media snd_soc_adau_utils rc_pinnacle_grey rc_core pps_gpio leds_lm3692x nandcore ledtrig_pattern iptable_security iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter ip6_vti ip_vti ip_gre ipip sit tunnel4 ip_tunnel hsr veth netdevsim vxcan batman_adv cfg80211 rfkill chnl_net caif nlmon dummy team bonding vcan bridge stp llc ip6_gre gre ip6_tunnel tunnel6 tun mousedev ppdev tpm kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel ide_pci_generic aes_x86_64 piix crypto_simd input_leds psmouse cryp
 td
 glue_helper ide_core intel_agp serio_raw intel_gtt agpgart ata_generic i2c_piix4 pata_acpi parport_pc parport rtc_cmos floppy sch_fq_codel ip_tables x_tables sha1_ssse3 sha1_generic ipv6 [last unloaded: paride]
Dumping ftrace buffer:
  (ftrace buffer empty)
---[ end trace 7a818cf5f210d79e ]---

If alloc_disk fails in pf_init_units, pf->disk will be NULL; however,
pf_detect and pf_exit do not check for this before freeing, which may
result in a NULL pointer dereference.

Also, when register_blkdev fails, blk_cleanup_queue() and
blk_mq_free_tag_set() should be called to free resources.
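A hedged sketch of the NULL-guarded teardown (the unit loop is
simplified from the actual paride/pf code):

  for (pf = units; pf < units + PF_UNITS; pf++) {
          if (!pf->disk)
                  continue;       /* alloc_disk() failed for this unit */
          blk_cleanup_queue(pf->disk->queue);
          blk_mq_free_tag_set(&pf->tag_set);
          put_disk(pf->disk);
  }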

Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: 6ce5902 ("paride/pf: cleanup queues when detection fails")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>

Signed-off-by: Jens Axboe <axboe@kernel.dk>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
…ock skip hints

Mikhail Gavrilo reported the following bug being triggered in a Fedora
kernel based on 5.1-rc1 but it is relevant to a vanilla kernel.

 kernel: page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
 kernel: ------------[ cut here ]------------
 kernel: kernel BUG at include/linux/mm.h:1021!
 kernel: invalid opcode: 0000 [#1] SMP NOPTI
 kernel: CPU: 6 PID: 116 Comm: kswapd0 Tainted: G         C        5.1.0-0.rc1.git1.3.fc31.x86_64 #1
 kernel: Hardware name: System manufacturer System Product Name/ROG STRIX X470-I GAMING, BIOS 1201 12/07/2018
 kernel: RIP: 0010:__reset_isolation_pfn+0x244/0x2b0
 kernel: Code: fe 06 e8 0f 8e fc ff 44 0f b6 4c 24 04 48 85 c0 0f 85 dc fe ff ff e9 68 fe ff ff 48 c7 c6 58 b7 2e 8c 4c 89 ff e8 0c 75 00 00 <0f> 0b 48 c7 c6 58 b7 2e 8c e8 fe 74 00 00 0f 0b 48 89 fa 41 b8 01
 kernel: RSP: 0018:ffff9e2d03f0fde8 EFLAGS: 00010246
 kernel: RAX: 0000000000000034 RBX: 000000000081f380 RCX: ffff8cffbddd6c20
 kernel: RDX: 0000000000000000 RSI: 0000000000000006 RDI: ffff8cffbddd6c20
 kernel: RBP: 0000000000000001 R08: 0000009898b94613 R09: 0000000000000000
 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000100000
 kernel: R13: 0000000000100000 R14: 0000000000000001 R15: ffffca7de07ce000
 kernel: FS:  0000000000000000(0000) GS:ffff8cffbdc00000(0000) knlGS:0000000000000000
 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 kernel: CR2: 00007fc1670e9000 CR3: 00000007f5276000 CR4: 00000000003406e0
 kernel: Call Trace:
 kernel:  __reset_isolation_suitable+0x62/0x120
 kernel:  reset_isolation_suitable+0x3b/0x40
 kernel:  kswapd+0x147/0x540
 kernel:  ? finish_wait+0x90/0x90
 kernel:  kthread+0x108/0x140
 kernel:  ? balance_pgdat+0x560/0x560
 kernel:  ? kthread_park+0x90/0x90
 kernel:  ret_from_fork+0x27/0x50

He bisected it down to e332f74 ("mm, compaction: be selective about
what pageblocks to clear skip hints").  The problem is that the patch in
question was sloppy with respect to the handling of zone boundaries.  In
some instances, it was possible for PFNs outside of a zone to be examined
and if those were not properly initialised or poisoned then it would
trigger the VM_BUG_ON.  This patch corrects the zone boundary issues when
resetting pageblock skip hints and Mikhail reported that the bug did not
trigger after 30 hours of testing.
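A rough sketch of the boundary handling, assuming illustrative names
for the scan window variables:

  /* Clamp the reset window to the zone before touching pageblock
   * skip hints, so no PFN outside the zone is ever examined. */
  unsigned long pfn;
  unsigned long start = max(scan_start_pfn, zone->zone_start_pfn);
  unsigned long end = min(scan_end_pfn, zone_end_pfn(zone));

  for (pfn = start; pfn < end; pfn += pageblock_nr_pages)
          __reset_isolation_pfn(zone, pfn, true, true);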

Link: http://lkml.kernel.org/r/20190327085424.GL3189@techsingularity.net
Fixes: e332f74 ("mm, compaction: be selective about what pageblocks to clear skip hints")
Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Running LTP oom01 in a tight loop, or memory stress testing that puts
the system in a low-memory situation, can trigger random memory
corruption such as the page flag corruption below. In
fast_isolate_freepages(), if isolation fails, next_search_order() does
not abort the search immediately, which can lead to improper accesses.

UBSAN: Undefined behaviour in ./include/linux/mm.h:1195:50
index 7 is out of range for type 'zone [5]'
Call Trace:
 dump_stack+0x62/0x9a
 ubsan_epilogue+0xd/0x7f
 __ubsan_handle_out_of_bounds+0x14d/0x192
 __isolate_free_page+0x52c/0x600
 compaction_alloc+0x886/0x25f0
 unmap_and_move+0x37/0x1e70
 migrate_pages+0x2ca/0xb20
 compact_zone+0x19cb/0x3620
 kcompactd_do_work+0x2df/0x680
 kcompactd+0x1d8/0x6c0
 kthread+0x32c/0x3f0
 ret_from_fork+0x35/0x40
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:3124!
invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
RIP: 0010:__isolate_free_page+0x464/0x600
RSP: 0000:ffff888b9e1af848 EFLAGS: 00010007
RAX: 0000000030000000 RBX: ffff888c39fcf0f8 RCX: 0000000000000000
RDX: 1ffff111873f9e25 RSI: 0000000000000004 RDI: ffffed1173c35ef6
RBP: ffff888b9e1af898 R08: fffffbfff4fc2461 R09: fffffbfff4fc2460
R10: fffffbfff4fc2460 R11: ffffffffa7e12303 R12: 0000000000000008
R13: dffffc0000000000 R14: 0000000000000000 R15: 0000000000000007
FS:  0000000000000000(0000) GS:ffff888ba8e80000(0000)
knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fc7abc00000 CR3: 0000000752416004 CR4: 00000000001606a0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 compaction_alloc+0x886/0x25f0
 unmap_and_move+0x37/0x1e70
 migrate_pages+0x2ca/0xb20
 compact_zone+0x19cb/0x3620
 kcompactd_do_work+0x2df/0x680
 kcompactd+0x1d8/0x6c0
 kthread+0x32c/0x3f0
 ret_from_fork+0x35/0x40
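A hedged sketch of the intended control flow; everything here other
than cc->search_order and next_search_order() is an illustrative
stand-in for the real fast_isolate_freepages() internals:

  for (order = cc->search_order; order >= 0;
       order = next_search_order(cc, order)) {
          page = isolate_candidate(cc, order);
          if (page)
                  break;          /* success */
          if (isolation_failed(cc))
                  break;          /* abort immediately rather than
                                   * continuing with stale state */
  }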

Link: http://lkml.kernel.org/r/20190320192648.52499-1-cai@lca.pw
Fixes: dbe2d4e ("mm, compaction: round-robin the order while searching the free lists for a target")
Signed-off-by: Qian Cai <cai@lca.pw>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Rebooting the system again and again may cause a memory overwrite.

[   15.638922] systemd[1]: Reached target Swap.
[   15.667561] tun: Universal TUN/TAP device driver, 1.6
[   15.676756] Bridge firewalling registered
[   17.344135] Unable to handle kernel paging request at virtual address 0000000200000040
[   17.352179] Mem abort info:
[   17.355007]   ESR = 0x96000004
[   17.358105]   Exception class = DABT (current EL), IL = 32 bits
[   17.364112]   SET = 0, FnV = 0
[   17.367209]   EA = 0, S1PTW = 0
[   17.370393] Data abort info:
[   17.373315]   ISV = 0, ISS = 0x00000004
[   17.377206]   CM = 0, WnR = 0
[   17.380214] user pgtable: 4k pages, 48-bit VAs, pgdp = (____ptrval____)
[   17.386926] [0000000200000040] pgd=0000000000000000
[   17.391878] Internal error: Oops: 96000004 [#1] SMP
[   17.396824] CPU: 23 PID: 95 Comm: kworker/u130:0 Tainted: G            E     4.19.25-1.2.78.aarch64 #1
[   17.414175] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.54 08/16/2018
[   17.425615] Workqueue: events_unbound async_run_entry_fn
[   17.435151] pstate: 00000005 (nzcv daif -PAN -UAO)
[   17.444139] pc : __mutex_lock.isra.1+0x74/0x540
[   17.453002] lr : __mutex_lock.isra.1+0x3c/0x540
[   17.461701] sp : ffff000100d9bb60
[   17.469146] x29: ffff000100d9bb60 x28: 0000000000000000
[   17.478547] x27: 0000000000000000 x26: ffff802fb8945000
[   17.488063] x25: 0000000000000000 x24: ffff802fa32081a8
[   17.497381] x23: 0000000000000002 x22: ffff801fa2b15220
[   17.506701] x21: ffff000009809000 x20: ffff802fa23a0888
[   17.515980] x19: ffff801fa2b15220 x18: 0000000000000000
[   17.525272] x17: 0000000200000000 x16: 0000000200000000
[   17.534511] x15: 0000000000000000 x14: 0000000000000000
[   17.543652] x13: ffff000008d95db8 x12: 000000000000000d
[   17.552780] x11: ffff000008d95d90 x10: 0000000000000b00
[   17.561819] x9 : ffff000100d9bb90 x8 : ffff802fb89d6560
[   17.570829] x7 : 0000000000000004 x6 : 00000004a1801d05
[   17.579839] x5 : 0000000000000000 x4 : 0000000000000000
[   17.588852] x3 : ffff802fb89d5a00 x2 : 0000000000000000
[   17.597734] x1 : 0000000200000000 x0 : 0000000200000000
[   17.606631] Process kworker/u130:0 (pid: 95, stack limit = 0x(____ptrval____))
[   17.617438] Call trace:
[   17.623349]  __mutex_lock.isra.1+0x74/0x540
[   17.630927]  __mutex_lock_slowpath+0x24/0x30
[   17.638602]  mutex_lock+0x50/0x60
[   17.645295]  drain_workqueue+0x34/0x198
[   17.652623]  __sas_drain_work+0x7c/0x168
[   17.659903]  sas_drain_work+0x60/0x68
[   17.666947]  hisi_sas_scan_finished+0x30/0x40 [hisi_sas_main]
[   17.676129]  do_scsi_scan_host+0x70/0xb0
[   17.683534]  do_scan_async+0x20/0x228
[   17.690586]  async_run_entry_fn+0x4c/0x1d0
[   17.697997]  process_one_work+0x1b4/0x3f8
[   17.705296]  worker_thread+0x54/0x470

The call trace differs from run to run, but the overwrite address is
always the same:
Unable to handle kernel paging request at virtual address 0000000200000040

The root cause is that the write to the XGMAC_MAC_TX_LF_RF_CONTROL_REG
register did not use the io_base offset.
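A minimal sketch of the fix described above (the io_base name is taken
from the commit text; the accessor is simplified to a plain writel()):

  /* Before: the register offset was used as an absolute address. */
  writel(val, (void __iomem *)XGMAC_MAC_TX_LF_RF_CONTROL_REG);

  /* After: the offset is applied on top of the mapped io_base. */
  writel(val, drv->io_base + XGMAC_MAC_TX_LF_RF_CONTROL_REG);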

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
When a bpf program is uploaded, the driver computes the number of
xdp tx queues resulting in the allocation of additional qsets.
Starting from commit '2ecbe4f4a027 ("net: thunderx: replace global
nicvf_rx_mode_wq work queue for all VFs to private for each of them")'
the driver runs link state polling for each VF resulting in the
following NULL pointer dereference:

[   56.169256] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000020
[   56.178032] Mem abort info:
[   56.180834]   ESR = 0x96000005
[   56.183877]   Exception class = DABT (current EL), IL = 32 bits
[   56.189792]   SET = 0, FnV = 0
[   56.192834]   EA = 0, S1PTW = 0
[   56.195963] Data abort info:
[   56.198831]   ISV = 0, ISS = 0x00000005
[   56.202662]   CM = 0, WnR = 0
[   56.205619] user pgtable: 64k pages, 48-bit VAs, pgdp = 0000000021f0c7a0
[   56.212315] [0000000000000020] pgd=0000000000000000, pud=0000000000000000
[   56.219094] Internal error: Oops: 96000005 [#1] SMP
[   56.260459] CPU: 39 PID: 2034 Comm: ip Not tainted 5.1.0-rc3+ #3
[   56.266452] Hardware name: GIGABYTE R120-T33/MT30-GS1, BIOS T49 02/02/2018
[   56.273315] pstate: 80000005 (Nzcv daif -PAN -UAO)
[   56.278098] pc : __ll_sc___cmpxchg_case_acq_64+0x4/0x20
[   56.283312] lr : mutex_lock+0x2c/0x50
[   56.286962] sp : ffff0000219af1b0
[   56.290264] x29: ffff0000219af1b0 x28: ffff800f64de49a0
[   56.295565] x27: 0000000000000000 x26: 0000000000000015
[   56.300865] x25: 0000000000000000 x24: 0000000000000000
[   56.306165] x23: 0000000000000000 x22: ffff000011117000
[   56.311465] x21: ffff800f64dfc080 x20: 0000000000000020
[   56.316766] x19: 0000000000000020 x18: 0000000000000001
[   56.322066] x17: 0000000000000000 x16: ffff800f2e077080
[   56.327367] x15: 0000000000000004 x14: 0000000000000000
[   56.332667] x13: ffff000010964438 x12: 0000000000000002
[   56.337967] x11: 0000000000000000 x10: 0000000000000c70
[   56.343268] x9 : ffff0000219af120 x8 : ffff800f2e077d50
[   56.348568] x7 : 0000000000000027 x6 : 000000062a9d6a84
[   56.353869] x5 : 0000000000000000 x4 : ffff800f2e077480
[   56.359169] x3 : 0000000000000008 x2 : ffff800f2e077080
[   56.364469] x1 : 0000000000000000 x0 : 0000000000000020
[   56.369770] Process ip (pid: 2034, stack limit = 0x00000000c862da3a)
[   56.376110] Call trace:
[   56.378546]  __ll_sc___cmpxchg_case_acq_64+0x4/0x20
[   56.383414]  drain_workqueue+0x34/0x198
[   56.387247]  nicvf_open+0x48/0x9e8 [nicvf]
[   56.391334]  nicvf_open+0x898/0x9e8 [nicvf]
[   56.395507]  nicvf_xdp+0x1bc/0x238 [nicvf]
[   56.399595]  dev_xdp_install+0x68/0x90
[   56.403333]  dev_change_xdp_fd+0xc8/0x240
[   56.407333]  do_setlink+0x8e0/0xbe8
[   56.410810]  __rtnl_newlink+0x5b8/0x6d8
[   56.414634]  rtnl_newlink+0x54/0x80
[   56.418112]  rtnetlink_rcv_msg+0x22c/0x2f8
[   56.422199]  netlink_rcv_skb+0x60/0x120
[   56.426023]  rtnetlink_rcv+0x28/0x38
[   56.429587]  netlink_unicast+0x1c8/0x258
[   56.433498]  netlink_sendmsg+0x1b4/0x350
[   56.437410]  sock_sendmsg+0x4c/0x68
[   56.440887]  ___sys_sendmsg+0x240/0x280
[   56.444711]  __sys_sendmsg+0x68/0xb0
[   56.448275]  __arm64_sys_sendmsg+0x2c/0x38
[   56.452361]  el0_svc_handler+0x9c/0x128
[   56.456186]  el0_svc+0x8/0xc
[   56.459056] Code: 35ffff91 2a1003e0 d65f03c0 f9800011 (c85ffc10)
[   56.465166] ---[ end trace 4a57fdc27b0a572c ]---
[   56.469772] Kernel panic - not syncing: Fatal exception

Fix it by checking the nicvf_rx_mode_wq pointer in nicvf_open and nicvf_stop.
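A hedged sketch of the guard (the work item name is an assumption;
only nicvf_rx_mode_wq comes from the commit text):

  /* Only touch the per-VF workqueue if it was actually created. */
  if (nic->nicvf_rx_mode_wq) {
          cancel_delayed_work_sync(&nic->link_change_work);
          drain_workqueue(nic->nicvf_rx_mode_wq);
  }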

Fixes: 2ecbe4f ("net: thunderx: replace global nicvf_rx_mode_wq work queue for all VFs to private for each of them")
Fixes: 2c632ad ("net: thunderx: move link state polling function to VF")
Reported-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Tested-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
the control path of 'sample' action does not validate the value of 'rate'
provided by the user, but then it uses it as divisor in the traffic path.
Validate it in tcf_sample_init(), and return -EINVAL with a proper extack
message in case that value is zero, to fix a splat with the script below:

 # tc f a dev test0 egress matchall action sample rate 0 group 1 index 2
 # tc -s a s action sample
 total acts 1

         action order 0: sample rate 1/0 group 1 pipe
          index 2 ref 1 bind 1 installed 19 sec used 19 sec
         Action statistics:
         Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
         backlog 0b 0p requeues 0
 # ping 192.0.2.1 -I test0 -c1 -q

 divide error: 0000 [#1] SMP PTI
 CPU: 1 PID: 6192 Comm: ping Not tainted 5.1.0-rc2.diag2+ #591
 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
 RIP: 0010:tcf_sample_act+0x9e/0x1e0 [act_sample]
 Code: 6a f1 85 c0 74 0d 80 3d 83 1a 00 00 00 0f 84 9c 00 00 00 4d 85 e4 0f 84 85 00 00 00 e8 9b d7 9c f1 44 8b 8b e0 00 00 00 31 d2 <41> f7 f1 85 d2 75 70 f6 85 83 00 00 00 10 48 8b 45 10 8b 88 08 01
 RSP: 0018:ffffae320190ba30 EFLAGS: 00010246
 RAX: 00000000b0677d21 RBX: ffff8af1ed9ec000 RCX: 0000000059a9fe49
 RDX: 0000000000000000 RSI: 000000000c7e33b7 RDI: ffff8af23daa0af0
 RBP: ffff8af1ee11b200 R08: 0000000074fcaf7e R09: 0000000000000000
 R10: 0000000000000050 R11: ffffffffb3088680 R12: ffff8af232307f80
 R13: 0000000000000003 R14: ffff8af1ed9ec000 R15: 0000000000000000
 FS:  00007fe9c6d2f740(0000) GS:ffff8af23da80000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00007fff6772f000 CR3: 00000000746a2004 CR4: 00000000001606e0
 Call Trace:
  tcf_action_exec+0x7c/0x1c0
  tcf_classify+0x57/0x160
  __dev_queue_xmit+0x3dc/0xd10
  ip_finish_output2+0x257/0x6d0
  ip_output+0x75/0x280
  ip_send_skb+0x15/0x40
  raw_sendmsg+0xae3/0x1410
  sock_sendmsg+0x36/0x40
  __sys_sendto+0x10e/0x140
  __x64_sys_sendto+0x24/0x30
  do_syscall_64+0x60/0x210
  entry_SYSCALL_64_after_hwframe+0x49/0xbe
  [...]
  Kernel panic - not syncing: Fatal exception in interrupt

Add a TDC selftest to document that 'rate' is now being validated.
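A minimal sketch of the validation in tcf_sample_init(), with the
netlink parsing context simplified:

  rate = nla_get_u32(tb[TCA_SAMPLE_RATE]);
  if (!rate) {
          NL_SET_ERR_MSG(extack, "invalid sample rate");
          return -EINVAL;
  }
  /* only now is the rate committed to the action */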

Reported-by: Matteo Croce <mcroce@redhat.com>
Fixes: 5c5670f ("net/sched: Introduce sample tc action")
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Acked-by: Yotam Gigi <yotam.gi@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Fix device initialization completion handling for vNIC adapters.
Initialize the completion structure on probe and reinitialize when needed.
This also fixes a race condition during kdump where the driver can attempt
to access the completion struct before it is initialized:

Unable to handle kernel paging request for data at address 0x00000000
Faulting instruction address: 0xc0000000081acbe0
Oops: Kernel access of bad area, sig: 11 [#1]
LE SMP NR_CPUS=2048 NUMA pSeries
Modules linked in: ibmvnic(+) ibmveth sunrpc overlay squashfs loop
CPU: 19 PID: 301 Comm: systemd-udevd Not tainted 4.18.0-64.el8.ppc64le #1
NIP:  c0000000081acbe0 LR: c0000000081ad964 CTR: c0000000081ad900
REGS: c000000027f3f990 TRAP: 0300   Not tainted  (4.18.0-64.el8.ppc64le)
MSR:  800000010280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE,TM[E]> CR: 28228288  XER: 00000006
CFAR: c000000008008934 DAR: 0000000000000000 DSISR: 40000000 IRQMASK: 1
GPR00: c0000000081ad964 c000000027f3fc10 c0000000095b5800 c0000000221b4e58
GPR04: 0000000000000003 0000000000000001 000049a086918581 00000000000000d4
GPR08: 0000000000000007 0000000000000000 ffffffffffffffe8 d0000000014dde28
GPR12: c0000000081ad900 c000000009a00c00 0000000000000001 0000000000000100
GPR16: 0000000000000038 0000000000000007 c0000000095e2230 0000000000000006
GPR20: 0000000000400140 0000000000000001 c00000000910c880 0000000000000000
GPR24: 0000000000000000 0000000000000006 0000000000000000 0000000000000003
GPR28: 0000000000000001 0000000000000001 c0000000221b4e60 c0000000221b4e58
NIP [c0000000081acbe0] __wake_up_locked+0x50/0x100
LR [c0000000081ad964] complete+0x64/0xa0
Call Trace:
[c000000027f3fc10] [c000000027f3fc60] 0xc000000027f3fc60 (unreliable)
[c000000027f3fc60] [c0000000081ad964] complete+0x64/0xa0
[c000000027f3fca0] [d0000000014dad58] ibmvnic_handle_crq+0xce0/0x1160 [ibmvnic]
[c000000027f3fd50] [d0000000014db270] ibmvnic_tasklet+0x98/0x130 [ibmvnic]
[c000000027f3fda0] [c00000000813f334] tasklet_action_common.isra.3+0xc4/0x1a0
[c000000027f3fe00] [c000000008cd13f4] __do_softirq+0x164/0x400
[c000000027f3fef0] [c00000000813ed64] irq_exit+0x184/0x1c0
[c000000027f3ff20] [c0000000080188e8] __do_irq+0xb8/0x210
[c000000027f3ff90] [c00000000802d0a4] call_do_irq+0x14/0x24
[c000000026a5b010] [c000000008018adc] do_IRQ+0x9c/0x130
[c000000026a5b060] [c000000008008ce4] hardware_interrupt_common+0x114/0x120
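A rough sketch of the completion handling described above, with the
ibmvnic specifics simplified and the timeout handling illustrative:

  /* Probe path: set up the completion exactly once. */
  init_completion(&adapter->init_done);

  /* Reset/reinit path: re-arm the existing completion instead of
   * leaving it uninitialized for the CRQ tasklet to complete(). */
  reinit_completion(&adapter->init_done);
  if (!wait_for_completion_timeout(&adapter->init_done,
                                   msecs_to_jiffies(timeout)))
          return -ETIMEDOUT;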

Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Syzkaller report this:

pcd: pcd version 1.07, major 46, nice 0
pcd0: Autoprobe failed
pcd: No CD-ROM drive found
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN PTI
CPU: 1 PID: 4525 Comm: syz-executor.0 Not tainted 5.1.0-rc3+ #8
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
RIP: 0010:pcd_init+0x95c/0x1000 [pcd]
Code: c4 ab f7 48 89 d8 48 c1 e8 03 80 3c 28 00 74 08 48 89 df e8 56 a3 da f7 4c 8b 23 49 8d bc 24 80 05 00 00 48 89 f8 48 c1 e8 03 <80> 3c 28 00 74 05 e8 39 a3 da f7 49 8b bc 24 80 05 00 00 e8 cc b2
RSP: 0018:ffff8881e84df880 EFLAGS: 00010202
RAX: 00000000000000b0 RBX: ffffffffc155a088 RCX: ffffffffc1508935
RDX: 0000000000040000 RSI: ffffc900014f0000 RDI: 0000000000000580
RBP: dffffc0000000000 R08: ffffed103ee658b8 R09: ffffed103ee658b8
R10: 0000000000000001 R11: ffffed103ee658b7 R12: 0000000000000000
R13: ffffffffc155a778 R14: ffffffffc155a4a8 R15: 0000000000000003
FS:  00007fe71bee3700(0000) GS:ffff8881f7300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a7334441a8 CR3: 00000001e9674003 CR4: 00000000007606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 ? 0xffffffffc1508000
 ? 0xffffffffc1508000
 do_one_initcall+0xbc/0x47d init/main.c:901
 do_init_module+0x1b5/0x547 kernel/module.c:3456
 load_module+0x6405/0x8c10 kernel/module.c:3804
 __do_sys_finit_module+0x162/0x190 kernel/module.c:3898
 do_syscall_64+0x9f/0x450 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462e99
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe71bee2c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462e99
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007fe71bee2c70 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fe71bee36bc
R13: 00000000004bcefa R14: 00000000006f6fb0 R15: 0000000000000004
Modules linked in: pcd(+) paride solos_pci atm ts_fsm rtc_mt6397 mac80211 nhc_mobility nhc_udp nhc_ipv6 nhc_hop nhc_dest nhc_fragment nhc_routing 6lowpan rtc_cros_ec memconsole intel_xhci_usb_role_switch roles rtc_wm8350 usbcore industrialio_triggered_buffer kfifo_buf industrialio asc7621 dm_era dm_persistent_data dm_bufio dm_mod tpm gnss_ubx gnss_serial serdev gnss max2165 cpufreq_dt hid_penmount hid menf21bmc_wdt rc_core n_tracesink ide_gd_mod cdns_csi2tx v4l2_fwnode videodev media pinctrl_lewisburg pinctrl_intel iptable_security iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter ip6_vti ip_vti ip_gre ipip sit tunnel4 ip_tunnel hsr veth netdevsim vxcan batman_adv cfg80211 rfkill chnl_net caif nlmon dummy team bonding vcan bridge stp llc ip6_gre gre ip6_tunnel tunnel6 tun joydev mousedev ppdev kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel aes_x86_64 crypto_simd
 ide_pci_generic piix input_leds cryptd glue_helper psmouse ide_core intel_agp serio_raw intel_gtt ata_generic i2c_piix4 agpgart pata_acpi parport_pc parport floppy rtc_cmos sch_fq_codel ip_tables x_tables sha1_ssse3 sha1_generic ipv6 [last unloaded: bmc150_magn]
Dumping ftrace buffer:
   (ftrace buffer empty)
---[ end trace d873691c3cd69f56 ]---

If alloc_disk fails in pcd_init_units, cd->disk will be NULL; however,
pcd_detect and pcd_exit do not check for this before freeing, which may
result in a NULL pointer dereference.

Also, when register_blkdev fails, blk_cleanup_queue() and
blk_mq_free_tag_set() should be called to free resources.
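A hedged sketch of the error unwinding when register_blkdev() fails
(the unit loop is simplified from the actual paride/pcd code):

  if (register_blkdev(major, name)) {
          for (cd = pcd; cd < pcd + PCD_UNITS; cd++) {
                  if (!cd->disk)
                          continue;
                  blk_cleanup_queue(cd->disk->queue);
                  blk_mq_free_tag_set(&cd->tag_set);
                  put_disk(cd->disk);
          }
          return -EBUSY;
  }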

Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: 81b74ac ("paride/pcd: cleanup queues when detection fails")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>

Signed-off-by: Jens Axboe <axboe@kernel.dk>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
If xace hardware reports a bad version number, the error handling code
in ace_setup() calls put_disk(), followed by queue cleanup. However, since
the disk data structure has the queue pointer set, put_disk() also
cleans and releases the queue. This results in blk_cleanup_queue()
accessing an already released data structure, which in turn may result
in a crash such as the following.

[   10.681671] BUG: Kernel NULL pointer dereference at 0x00000040
[   10.681826] Faulting instruction address: 0xc0431480
[   10.682072] Oops: Kernel access of bad area, sig: 11 [#1]
[   10.682251] BE PAGE_SIZE=4K PREEMPT Xilinx Virtex440
[   10.682387] Modules linked in:
[   10.682528] CPU: 0 PID: 1 Comm: swapper Tainted: G        W         5.0.0-rc6-next-20190218+ #2
[   10.682733] NIP:  c0431480 LR: c043147c CTR: c0422ad8
[   10.682863] REGS: cf82fbe0 TRAP: 0300   Tainted: G        W          (5.0.0-rc6-next-20190218+)
[   10.683065] MSR:  00029000 <CE,EE,ME>  CR: 22000222  XER: 00000000
[   10.683236] DEAR: 00000040 ESR: 00000000
[   10.683236] GPR00: c043147c cf82fc90 cf82ccc0 00000000 00000000 00000000 00000002 00000000
[   10.683236] GPR08: 00000000 00000000 c04310bc 00000000 22000222 00000000 c0002c54 00000000
[   10.683236] GPR16: 00000000 00000001 c09aa39c c09021b0 c09021dc 00000007 c0a68c08 00000000
[   10.683236] GPR24: 00000001 ced6d400 ced6dcf0 c0815d9c 00000000 00000000 00000000 cedf0800
[   10.684331] NIP [c0431480] blk_mq_run_hw_queue+0x28/0x114
[   10.684473] LR [c043147c] blk_mq_run_hw_queue+0x24/0x114
[   10.684602] Call Trace:
[   10.684671] [cf82fc90] [c043147c] blk_mq_run_hw_queue+0x24/0x114 (unreliable)
[   10.684854] [cf82fcc0] [c04315bc] blk_mq_run_hw_queues+0x50/0x7c
[   10.685002] [cf82fce0] [c0422b24] blk_set_queue_dying+0x30/0x68
[   10.685154] [cf82fcf0] [c0423ec0] blk_cleanup_queue+0x34/0x14c
[   10.685306] [cf82fd10] [c054d73c] ace_probe+0x3dc/0x508
[   10.685445] [cf82fd50] [c052d740] platform_drv_probe+0x4c/0xb8
[   10.685592] [cf82fd70] [c052abb0] really_probe+0x20c/0x32c
[   10.685728] [cf82fda0] [c052ae58] driver_probe_device+0x68/0x464
[   10.685877] [cf82fdc0] [c052b500] device_driver_attach+0xb4/0xe4
[   10.686024] [cf82fde0] [c052b5dc] __driver_attach+0xac/0xfc
[   10.686161] [cf82fe00] [c0528428] bus_for_each_dev+0x80/0xc0
[   10.686314] [cf82fe30] [c0529b3c] bus_add_driver+0x144/0x234
[   10.686457] [cf82fe50] [c052c46c] driver_register+0x88/0x15c
[   10.686610] [cf82fe60] [c09de288] ace_init+0x4c/0xac
[   10.686742] [cf82fe80] [c0002730] do_one_initcall+0xac/0x330
[   10.686888] [cf82fee0] [c09aafd0] kernel_init_freeable+0x34c/0x478
[   10.687043] [cf82ff30] [c0002c6c] kernel_init+0x18/0x114
[   10.687188] [cf82ff40] [c000f2f0] ret_from_kernel_thread+0x14/0x1c
[   10.687349] Instruction dump:
[   10.687435] 3863ffd4 4bfffd70 9421ffd0 7c0802a6 93c10028 7c9e2378 93e1002c 38810008
[   10.687637] 7c7f1b78 90010034 4bfffc25 813f008c <81290040> 75290100 4182002c 80810008
[   10.688056] ---[ end trace 13c9ff51d41b9d40 ]---

Fix the problem by setting the disk queue pointer to NULL before calling
put_disk(). A more comprehensive fix might be to rearrange the code
to check the hardware version before initializing data structures,
but I don't know if this would have undesirable side effects, and
it would increase the complexity of backporting the fix to older kernels.
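A minimal sketch of the ordering fix in the error path (field names
follow the xsysace driver, slightly simplified):

  ace->gd->queue = NULL;  /* detach, so put_disk() leaves the queue alone */
  put_disk(ace->gd);
  blk_cleanup_queue(ace->queue);  /* now the only release of the queue */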

Fixes: 74489a9 ("Add support for Xilinx SystemACE CompactFlash interface")
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
If allocation of msix_affinity_masks fails, we'll then try to free
some resources in vp_free_vectors(), which may access it directly.

We met the following stack in our production:
[   29.296767] BUG: unable to handle kernel NULL pointer dereference at  (null)
[   29.311151] IP: [<ffffffffc04fe35a>] vp_free_vectors+0x6a/0x150 [virtio_pci]
[   29.324787] PGD 0
[   29.333224] Oops: 0000 [#1] SMP
[...]
[   29.425175] RIP: 0010:[<ffffffffc04fe35a>]  [<ffffffffc04fe35a>] vp_free_vectors+0x6a/0x150 [virtio_pci]
[   29.441405] RSP: 0018:ffff9a55c2dcfa10  EFLAGS: 00010206
[   29.453491] RAX: 0000000000000000 RBX: ffff9a55c322c400 RCX: 0000000000000000
[   29.467488] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9a55c322c400
[   29.481461] RBP: ffff9a55c2dcfa20 R08: 0000000000000000 R09: ffffc1b6806ff020
[   29.495427] R10: 0000000000000e95 R11: 0000000000aaaaaa R12: 0000000000000000
[   29.509414] R13: 0000000000010000 R14: ffff9a55bd2d9e98 R15: ffff9a55c322c400
[   29.523407] FS:  00007fdcba69f8c0(0000) GS:ffff9a55c2840000(0000) knlGS:0000000000000000
[   29.538472] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   29.551621] CR2: 0000000000000000 CR3: 000000003ce52000 CR4: 00000000003607a0
[   29.565886] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   29.580055] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   29.594122] Call Trace:
[   29.603446]  [<ffffffffc04fe8a2>] vp_request_msix_vectors+0xe2/0x260 [virtio_pci]
[   29.618017]  [<ffffffffc04fedc5>] vp_try_to_find_vqs+0x95/0x3b0 [virtio_pci]
[   29.632152]  [<ffffffffc04ff117>] vp_find_vqs+0x37/0xb0 [virtio_pci]
[   29.645582]  [<ffffffffc057bf63>] init_vq+0x153/0x260 [virtio_blk]
[   29.658831]  [<ffffffffc057c1e8>] virtblk_probe+0xe8/0x87f [virtio_blk]
[...]
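A hedged sketch of the guard in vp_free_vectors(), with the
surrounding cleanup simplified:

  /* msix_affinity_masks may never have been allocated; skip the
   * per-vector masks entirely in that case. */
  if (vp_dev->msix_affinity_masks) {
          for (i = 0; i < vp_dev->msix_vectors; i++)
                  if (vp_dev->msix_affinity_masks[i])
                          free_cpumask_var(vp_dev->msix_affinity_masks[i]);
  }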

Cc: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Longpeng <longpeng2@huawei.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Ido Schimmel says:

====================
mlxsw: Various fixes

This patchset contains various small fixes for mlxsw.

Patch #1 fixes a warning generated by switchdev core when the driver
fails to insert an MDB entry in the commit phase.

Patches #2-#4 fix a warning in check_flush_dependency() that can be
triggered when a work item in a WQ_MEM_RECLAIM workqueue tries to flush
a non-WQ_MEM_RECLAIM workqueue.

It seems that the semantics of the WQ_MEM_RECLAIM flag are not very
clear [1] and that various patches have been sent to remove it from
various workqueues throughout the kernel [2][3][4] in order to silence
the warning.

These patches do the same for the workqueues created by mlxsw that
probably should not have been created with this flag in the first place.

Patch #5 fixes a regression where an IP address cannot be assigned to a
VRF upper due to erroneous MAC validation check. Patch #6 adds a test
case.

Patch #7 adjusts Spectrum-2 shared buffer configuration to be compatible
with Spectrum-1. The problem and fix are described in detail in the
commit message.

Please consider patches #1-#5 for 5.0.y. I verified they apply cleanly.

[1] https://patchwork.kernel.org/patch/10791315/
[2] Commit ce162bf ("mac80211_hwsim: don't use WQ_MEM_RECLAIM")
[3] Commit 39baf10 ("IB/core: Fix use workqueue without WQ_MEM_RECLAIM")
[4] Commit 75215e5 ("iwcm: Don't allocate iwcm workqueue with WQ_MEM_RECLAIM")
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
…onally"

This reverts the first part of commit 4e485d0 ("strparser: Call
skb_unclone conditionally").  To build a message with multiple
fragments we need our own root of frag_list.  We can't simply
use the frag_list of orig_skb, because it will lead to linking
all orig_skbs together creating very long frag chains, and causing
stack overflow on kfree_skb() (which is called recursively on
the frag_lists).

BUG: stack guard page was hit at 00000000d40fad41 (stack is 0000000029dde9f4..000000008cce03d5)
kernel stack overflow (double-fault): 0000 [#1] PREEMPT SMP
RIP: 0010:free_one_page+0x2b/0x490

Call Trace:
  __free_pages_ok+0x143/0x2c0
  skb_release_data+0x8e/0x140
  ? skb_release_data+0xad/0x140
  kfree_skb+0x32/0xb0

  [...]

  skb_release_data+0xad/0x140
  ? skb_release_data+0xad/0x140
  kfree_skb+0x32/0xb0
  skb_release_data+0xad/0x140
  ? skb_release_data+0xad/0x140
  kfree_skb+0x32/0xb0
  skb_release_data+0xad/0x140
  ? skb_release_data+0xad/0x140
  kfree_skb+0x32/0xb0
  skb_release_data+0xad/0x140
  ? skb_release_data+0xad/0x140
  kfree_skb+0x32/0xb0
  skb_release_data+0xad/0x140
  __kfree_skb+0xe/0x20
  tcp_disconnect+0xd6/0x4d0
  tcp_close+0xf4/0x430
  ? tcp_check_oom+0xf0/0xf0
  tls_sk_proto_close+0xe4/0x1e0 [tls]
  inet_release+0x36/0x60
  __sock_release+0x37/0xa0
  sock_close+0x11/0x20
  __fput+0xa2/0x1d0
  task_work_run+0x89/0xb0
  exit_to_usermode_loop+0x9a/0xa0
  do_syscall_64+0xc0/0xf0
  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Let's leave the second unclone conditional, as I'm not entirely
sure what its purpose is :)

Fixes: 4e485d0 ("strparser: Call skb_unclone conditionally")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Syzkaller report this:

BUG: unable to handle kernel paging request at fffffbfff830524b
PGD 237fe8067 P4D 237fe8067 PUD 237e64067 PMD 1c9716067 PTE 0
Oops: 0000 [#1] SMP KASAN PTI
CPU: 1 PID: 4465 Comm: syz-executor.0 Not tainted 5.0.0+ #5
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
RIP: 0010:__list_add_valid+0x21/0xe0 lib/list_debug.c:23
Code: 8b 0c 24 e9 17 fd ff ff 90 55 48 89 fd 48 8d 7a 08 53 48 89 d3 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 48 83 ec 08 <80> 3c 02 00 0f 85 8b 00 00 00 48 8b 53 08 48 39 f2 75 35 48 89 f2
RSP: 0018:ffff8881ea2278d0 EFLAGS: 00010282
RAX: dffffc0000000000 RBX: ffffffffc1829250 RCX: 1ffff1103d444ef4
RDX: 1ffffffff830524b RSI: ffffffff85659300 RDI: ffffffffc1829258
RBP: ffffffffc1879250 R08: fffffbfff0acb269 R09: fffffbfff0acb269
R10: ffff8881ea2278f0 R11: fffffbfff0acb268 R12: ffffffffc1829250
R13: dffffc0000000000 R14: 0000000000000008 R15: ffffffffc187c830
FS:  00007fe0361df700(0000) GS:ffff8881f7300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: fffffbfff830524b CR3: 00000001eb39a001 CR4: 00000000007606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 __list_add include/linux/list.h:60 [inline]
 list_add include/linux/list.h:79 [inline]
 proto_register+0x444/0x8f0 net/core/sock.c:3375
 nr_proto_init+0x73/0x4b3 [netrom]
 ? 0xffffffffc1628000
 ? 0xffffffffc1628000
 do_one_initcall+0xbc/0x47d init/main.c:887
 do_init_module+0x1b5/0x547 kernel/module.c:3456
 load_module+0x6405/0x8c10 kernel/module.c:3804
 __do_sys_finit_module+0x162/0x190 kernel/module.c:3898
 do_syscall_64+0x9f/0x450 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x462e99
Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe0361dec58 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 000000000073bf00 RCX: 0000000000462e99
RDX: 0000000000000000 RSI: 0000000020000100 RDI: 0000000000000003
RBP: 00007fe0361dec70 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fe0361df6bc
R13: 00000000004bcefa R14: 00000000006f6fb0 R15: 0000000000000004
Modules linked in: netrom(+) ax25 fcrypt pcbc af_alg arizona_ldo1 v4l2_common videodev media v4l2_dv_timings hdlc ide_cd_mod snd_soc_sigmadsp_regmap snd_soc_sigmadsp intel_spi_platform intel_spi mtd spi_nor snd_usbmidi_lib usbcore lcd ti_ads7950 hi6421_regulator snd_soc_kbl_rt5663_max98927 snd_soc_hdac_hdmi snd_hda_ext_core snd_hda_core snd_soc_rt5663 snd_soc_core snd_pcm_dmaengine snd_compress snd_soc_rl6231 mac80211 rtc_rc5t583 spi_slave_time leds_pwm hid_gt683r hid industrialio_triggered_buffer kfifo_buf industrialio ir_kbd_i2c rc_core led_class_flash dwc_xlgmac snd_ymfpci gameport snd_mpu401_uart snd_rawmidi snd_ac97_codec snd_pcm ac97_bus snd_opl3_lib snd_timer snd_seq_device snd_hwdep snd soundcore iptable_security iptable_raw iptable_mangle iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter ip6_vti ip_vti ip_gre ipip sit tunnel4 ip_tunnel hsr veth netdevsim vxcan batman_adv cfg80211 rfkill chnl_net caif nlmon dummy team bonding vcan
 bridge stp llc ip6_gre gre ip6_tunnel tunnel6 tun joydev mousedev ppdev tpm kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel ide_pci_generic piix aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ide_core psmouse input_leds i2c_piix4 serio_raw intel_agp intel_gtt ata_generic agpgart pata_acpi parport_pc rtc_cmos parport floppy sch_fq_codel ip_tables x_tables sha1_ssse3 sha1_generic ipv6 [last unloaded: rxrpc]
Dumping ftrace buffer:
   (ftrace buffer empty)
CR2: fffffbfff830524b
---[ end trace 039ab24b305c4b19 ]---

If nr_proto_init fails, it may forget to call proto_unregister,
triggering this issue. This patch rearranges the code of nr_proto_init
to avoid such issues.
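A rough sketch of the rearranged error handling; everything except
nr_proto_init()/proto_register()/proto_unregister() is an illustrative
stand-in:

  static int __init nr_proto_init(void)
  {
          int rc;

          rc = proto_register(&nr_proto, 0);
          if (rc)
                  return rc;

          rc = register_remaining_pieces();   /* illustrative */
          if (rc)
                  goto unregister_proto;
          return 0;

  unregister_proto:
          proto_unregister(&nr_proto);        /* previously missing */
          return rc;
  }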

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Move ieee80211_tx_status_ext() outside of the status_list lock section
in order to avoid a locking dependency and possible deadlock reported
by LOCKDEP in the warning below.

Also do mt76_tx_status_lock() just before it's needed.

[  440.224832] WARNING: possible circular locking dependency detected
[  440.224833] 5.1.0-rc2+ #22 Not tainted
[  440.224834] ------------------------------------------------------
[  440.224835] kworker/u16:28/2362 is trying to acquire lock:
[  440.224836] 0000000089b8cacf (&(&q->lock)->rlock#2){+.-.}, at: mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.224842]
               but task is already holding lock:
[  440.224842] 000000002cfedc59 (&(&sta->lock)->rlock){+.-.}, at: ieee80211_stop_tx_ba_cb+0x32/0x1f0 [mac80211]
[  440.224863]
               which lock already depends on the new lock.

[  440.224863]
               the existing dependency chain (in reverse order) is:
[  440.224864]
               -> #3 (&(&sta->lock)->rlock){+.-.}:
[  440.224869]        _raw_spin_lock_bh+0x34/0x40
[  440.224880]        ieee80211_start_tx_ba_session+0xe4/0x3d0 [mac80211]
[  440.224894]        minstrel_ht_get_rate+0x45c/0x510 [mac80211]
[  440.224906]        rate_control_get_rate+0xc1/0x140 [mac80211]
[  440.224918]        ieee80211_tx_h_rate_ctrl+0x195/0x3c0 [mac80211]
[  440.224930]        ieee80211_xmit_fast+0x26d/0xa50 [mac80211]
[  440.224942]        __ieee80211_subif_start_xmit+0xfc/0x310 [mac80211]
[  440.224954]        ieee80211_subif_start_xmit+0x38/0x390 [mac80211]
[  440.224956]        dev_hard_start_xmit+0xb8/0x300
[  440.224957]        __dev_queue_xmit+0x7d4/0xbb0
[  440.224968]        ip6_finish_output2+0x246/0x860 [ipv6]
[  440.224978]        mld_sendpack+0x1bd/0x360 [ipv6]
[  440.224987]        mld_ifc_timer_expire+0x1a4/0x2f0 [ipv6]
[  440.224989]        call_timer_fn+0x89/0x2a0
[  440.224990]        run_timer_softirq+0x1bd/0x4d0
[  440.224992]        __do_softirq+0xdb/0x47c
[  440.224994]        irq_exit+0xfa/0x100
[  440.224996]        smp_apic_timer_interrupt+0x9a/0x220
[  440.224997]        apic_timer_interrupt+0xf/0x20
[  440.224999]        cpuidle_enter_state+0xc1/0x470
[  440.225000]        do_idle+0x21a/0x260
[  440.225001]        cpu_startup_entry+0x19/0x20
[  440.225004]        start_secondary+0x135/0x170
[  440.225006]        secondary_startup_64+0xa4/0xb0
[  440.225007]
               -> #2 (&(&sta->rate_ctrl_lock)->rlock){+.-.}:
[  440.225009]        _raw_spin_lock_bh+0x34/0x40
[  440.225022]        rate_control_tx_status+0x4f/0xb0 [mac80211]
[  440.225031]        ieee80211_tx_status_ext+0x142/0x1a0 [mac80211]
[  440.225035]        mt76x02_send_tx_status+0x2e4/0x340 [mt76x02_lib]
[  440.225037]        mt76x02_tx_status_data+0x31/0x40 [mt76x02_lib]
[  440.225040]        mt76u_tx_status_data+0x51/0xa0 [mt76_usb]
[  440.225042]        process_one_work+0x237/0x5d0
[  440.225043]        worker_thread+0x3c/0x390
[  440.225045]        kthread+0x11d/0x140
[  440.225046]        ret_from_fork+0x3a/0x50
[  440.225047]
               -> #1 (&(&list->lock)->rlock#8){+.-.}:
[  440.225049]        _raw_spin_lock_bh+0x34/0x40
[  440.225052]        mt76_tx_status_skb_add+0x51/0x100 [mt76]
[  440.225054]        mt76x02u_tx_prepare_skb+0xbd/0x116 [mt76x02_usb]
[  440.225056]        mt76u_tx_queue_skb+0x5f/0x180 [mt76_usb]
[  440.225058]        mt76_tx+0x93/0x190 [mt76]
[  440.225070]        ieee80211_tx_frags+0x148/0x210 [mac80211]
[  440.225081]        __ieee80211_tx+0x75/0x1b0 [mac80211]
[  440.225092]        ieee80211_tx+0xde/0x110 [mac80211]
[  440.225105]        __ieee80211_tx_skb_tid_band+0x72/0x90 [mac80211]
[  440.225122]        ieee80211_send_auth+0x1f3/0x360 [mac80211]
[  440.225141]        ieee80211_auth.cold.40+0x6c/0x100 [mac80211]
[  440.225156]        ieee80211_mgd_auth.cold.50+0x132/0x15f [mac80211]
[  440.225171]        cfg80211_mlme_auth+0x149/0x360 [cfg80211]
[  440.225181]        nl80211_authenticate+0x273/0x2e0 [cfg80211]
[  440.225183]        genl_family_rcv_msg+0x196/0x3a0
[  440.225184]        genl_rcv_msg+0x47/0x8e
[  440.225185]        netlink_rcv_skb+0x3a/0xf0
[  440.225187]        genl_rcv+0x24/0x40
[  440.225188]        netlink_unicast+0x16d/0x210
[  440.225189]        netlink_sendmsg+0x204/0x3b0
[  440.225191]        sock_sendmsg+0x36/0x40
[  440.225193]        ___sys_sendmsg+0x259/0x2b0
[  440.225194]        __sys_sendmsg+0x47/0x80
[  440.225196]        do_syscall_64+0x60/0x1f0
[  440.225197]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  440.225198]
               -> #0 (&(&q->lock)->rlock#2){+.-.}:
[  440.225200]        lock_acquire+0xb9/0x1a0
[  440.225202]        _raw_spin_lock_bh+0x34/0x40
[  440.225204]        mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225215]        ieee80211_agg_start_txq+0xe8/0x2b0 [mac80211]
[  440.225225]        ieee80211_stop_tx_ba_cb+0xb8/0x1f0 [mac80211]
[  440.225235]        ieee80211_ba_session_work+0x1c1/0x2f0 [mac80211]
[  440.225236]        process_one_work+0x237/0x5d0
[  440.225237]        worker_thread+0x3c/0x390
[  440.225239]        kthread+0x11d/0x140
[  440.225240]        ret_from_fork+0x3a/0x50
[  440.225240]
               other info that might help us debug this:

[  440.225241] Chain exists of:
                 &(&q->lock)->rlock#2 --> &(&sta->rate_ctrl_lock)->rlock --> &(&sta->lock)->rlock

[  440.225243]  Possible unsafe locking scenario:

[  440.225244]        CPU0                    CPU1
[  440.225244]        ----                    ----
[  440.225245]   lock(&(&sta->lock)->rlock);
[  440.225245]                                lock(&(&sta->rate_ctrl_lock)->rlock);
[  440.225246]                                lock(&(&sta->lock)->rlock);
[  440.225247]   lock(&(&q->lock)->rlock#2);
[  440.225248]
                *** DEADLOCK ***

[  440.225249] 5 locks held by kworker/u16:28/2362:
[  440.225250]  #0: 0000000048fcd291 ((wq_completion)phy0){+.+.}, at: process_one_work+0x1b5/0x5d0
[  440.225252]  #1: 00000000f1c6828f ((work_completion)(&sta->ampdu_mlme.work)){+.+.}, at: process_one_work+0x1b5/0x5d0
[  440.225254]  #2: 00000000433d2b2c (&sta->ampdu_mlme.mtx){+.+.}, at: ieee80211_ba_session_work+0x5c/0x2f0 [mac80211]
[  440.225265]  #3: 000000002cfedc59 (&(&sta->lock)->rlock){+.-.}, at: ieee80211_stop_tx_ba_cb+0x32/0x1f0 [mac80211]
[  440.225276]  #4: 000000009d7b9a44 (rcu_read_lock){....}, at: ieee80211_agg_start_txq+0x33/0x2b0 [mac80211]
[  440.225286]
               stack backtrace:
[  440.225288] CPU: 2 PID: 2362 Comm: kworker/u16:28 Not tainted 5.1.0-rc2+ #22
[  440.225289] Hardware name: LENOVO 20KGS23S0P/20KGS23S0P, BIOS N23ET55W (1.30 ) 08/31/2018
[  440.225300] Workqueue: phy0 ieee80211_ba_session_work [mac80211]
[  440.225301] Call Trace:
[  440.225304]  dump_stack+0x85/0xc0
[  440.225306]  print_circular_bug.isra.38.cold.58+0x15c/0x195
[  440.225307]  check_prev_add.constprop.48+0x5f0/0xc00
[  440.225309]  ? check_prev_add.constprop.48+0x39d/0xc00
[  440.225311]  ? __lock_acquire+0x41d/0x1100
[  440.225312]  __lock_acquire+0xd98/0x1100
[  440.225313]  ? __lock_acquire+0x41d/0x1100
[  440.225315]  lock_acquire+0xb9/0x1a0
[  440.225317]  ? mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225319]  _raw_spin_lock_bh+0x34/0x40
[  440.225321]  ? mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225323]  mt76_wake_tx_queue+0x4c/0xb0 [mt76]
[  440.225334]  ieee80211_agg_start_txq+0xe8/0x2b0 [mac80211]
[  440.225344]  ieee80211_stop_tx_ba_cb+0xb8/0x1f0 [mac80211]
[  440.225354]  ieee80211_ba_session_work+0x1c1/0x2f0 [mac80211]
[  440.225356]  process_one_work+0x237/0x5d0
[  440.225358]  worker_thread+0x3c/0x390
[  440.225359]  ? wq_calc_node_cpumask+0x70/0x70
[  440.225360]  kthread+0x11d/0x140
[  440.225362]  ? kthread_create_on_node+0x40/0x40
[  440.225363]  ret_from_fork+0x3a/0x50

Cc: stable@vger.kernel.org
Fixes: 88046b2 ("mt76: add support for reporting tx status with skb")
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Currently mm_iommu_do_alloc() is called in 2 cases:
- VFIO_IOMMU_SPAPR_REGISTER_MEMORY ioctl() for normal memory:
	this locks &mem_list_mutex and then locks mm::mmap_sem
	several times when adjusting locked_vm or pinning pages;
- vfio_pci_nvgpu_regops::mmap() for GPU memory:
	this is called with mm::mmap_sem held already and it locks
	&mem_list_mutex.

So one can craft a userspace program that issues the ioctl and the mmap
concurrently in 2 threads and causes a deadlock, which lockdep warns
about (below).

We have not hit this yet because QEMU constructs the machine in a single
thread.

This moves the overlap check next to where the new entry is added and
reduces the amount of time spent with &mem_list_mutex held.

This also moves the locked_vm adjustment out from under &mem_list_mutex.

This relies on mm_iommu_adjust_locked_vm() doing nothing when entries==0.
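
A hedged sketch of the resulting order inside mm_iommu_do_alloc()
(the overlap-check helper name is assumed):

  /* mmap_sem is only taken inside mm_iommu_adjust_locked_vm(),
   * never while mem_list_mutex is held, so both callers now agree
   * on lock order. */
  ret = mm_iommu_adjust_locked_vm(mm, entries, true);
  if (ret)
          return ret;

  mutex_lock(&mem_list_mutex);
  if (mm_iommu_overlaps(mm, ua, entries))          /* assumed helper */
          ret = -EBUSY;
  else
          list_add_rcu(&mem->next, &mm->context.iommu_group_mem_list);
  mutex_unlock(&mem_list_mutex);

  if (ret)
          mm_iommu_adjust_locked_vm(mm, entries, false);   /* undo */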

This is one of the lockdep warnings:

======================================================
WARNING: possible circular locking dependency detected
5.1.0-rc2-le_nv2_aikATfstn1-p1 #363 Not tainted
------------------------------------------------------
qemu-system-ppc/8038 is trying to acquire lock:
000000002ec6c453 (mem_list_mutex){+.+.}, at: mm_iommu_do_alloc+0x70/0x490

but task is already holding lock:
00000000fd7da97f (&mm->mmap_sem){++++}, at: vm_mmap_pgoff+0xf0/0x160

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&mm->mmap_sem){++++}:
       lock_acquire+0xf8/0x260
       down_write+0x44/0xa0
       mm_iommu_adjust_locked_vm.part.1+0x4c/0x190
       mm_iommu_do_alloc+0x310/0x490
       tce_iommu_ioctl.part.9+0xb84/0x1150 [vfio_iommu_spapr_tce]
       vfio_fops_unl_ioctl+0x94/0x430 [vfio]
       do_vfs_ioctl+0xe4/0x930
       ksys_ioctl+0xc4/0x110
       sys_ioctl+0x28/0x80
       system_call+0x5c/0x70

-> #0 (mem_list_mutex){+.+.}:
       __lock_acquire+0x1484/0x1900
       lock_acquire+0xf8/0x260
       __mutex_lock+0x88/0xa70
       mm_iommu_do_alloc+0x70/0x490
       vfio_pci_nvgpu_mmap+0xc0/0x130 [vfio_pci]
       vfio_pci_mmap+0x198/0x2a0 [vfio_pci]
       vfio_device_fops_mmap+0x44/0x70 [vfio]
       mmap_region+0x5d4/0x770
       do_mmap+0x42c/0x650
       vm_mmap_pgoff+0x124/0x160
       ksys_mmap_pgoff+0xdc/0x2f0
       sys_mmap+0x40/0x80
       system_call+0x5c/0x70

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&mm->mmap_sem);
                               lock(mem_list_mutex);
                               lock(&mm->mmap_sem);
  lock(mem_list_mutex);

 *** DEADLOCK ***

1 lock held by qemu-system-ppc/8038:
 #0: 00000000fd7da97f (&mm->mmap_sem){++++}, at: vm_mmap_pgoff+0xf0/0x160

Fixes: c10c21e ("powerpc/vfio/iommu/kvm: Do not pin device memory", 2018-12-19)
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
…barrier

free_user() can be called from atomic context.

This patch pushes the free operation off into a workqueue, as sketched
below.
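
A hedged sketch of the deferral; the remove_work field is assumed to be
added to struct ipmi_user by the patch:

  static void free_user_work(struct work_struct *work)
  {
          struct ipmi_user *user = container_of(work, struct ipmi_user,
                                                remove_work);

          cleanup_srcu_struct(&user->release_barrier);    /* may sleep */
          kfree(user);
  }

  static void free_user(struct kref *ref)
  {
          struct ipmi_user *user = container_of(ref, struct ipmi_user,
                                                refcount);

          /* kref_put() can fire in softirq context; never sleep here,
           * let task context do the SRCU cleanup instead. */
          schedule_work(&user->remove_work);
  }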

Example:

 BUG: sleeping function called from invalid context at kernel/workqueue.c:2856
 in_atomic(): 1, irqs_disabled(): 0, pid: 177, name: ksoftirqd/27
 CPU: 27 PID: 177 Comm: ksoftirqd/27 Not tainted 4.19.25-3 #1
 Hardware name: AIC 1S-HV26-08/MB-DPSB04-06, BIOS IVYBV060 10/21/2015
 Call Trace:
  dump_stack+0x5c/0x7b
  ___might_sleep+0xec/0x110
  __flush_work+0x48/0x1f0
  ? try_to_del_timer_sync+0x4d/0x80
  _cleanup_srcu_struct+0x104/0x140
  free_user+0x18/0x30 [ipmi_msghandler]
  ipmi_free_recv_msg+0x3a/0x50 [ipmi_msghandler]
  deliver_response+0xbd/0xd0 [ipmi_msghandler]
  deliver_local_response+0xe/0x30 [ipmi_msghandler]
  handle_one_recv_msg+0x163/0xc80 [ipmi_msghandler]
  ? dequeue_entity+0xa0/0x960
  handle_new_recv_msgs+0x15c/0x1f0 [ipmi_msghandler]
  tasklet_action_common.isra.22+0x103/0x120
  __do_softirq+0xf8/0x2d7
  run_ksoftirqd+0x26/0x50
  smpboot_thread_fn+0x11d/0x1e0
  kthread+0x103/0x140
  ? sort_range+0x20/0x20
  ? kthread_destroy_worker+0x40/0x40
  ret_from_fork+0x1f/0x40

Fixes: 77f8269 ("ipmi: fix use-after-free of user->release_barrier.rda")

Reported-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: stable@vger.kernel.org # 5.0
Cc: Yang Yingliang <yangyingliang@huawei.com>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
By calling maps__insert() we expect to take 2 references on the map,
which we release within the maps__remove call.

However, if a map with the same name is already present, we currently
don't bump the reference and can crash, like:

  Program received signal SIGABRT, Aborted.
  0x00007ffff75e60f5 in raise () from /lib64/libc.so.6

  (gdb) bt
  #0  0x00007ffff75e60f5 in raise () from /lib64/libc.so.6
  #1  0x00007ffff75d0895 in abort () from /lib64/libc.so.6
  #2  0x00007ffff75d0769 in __assert_fail_base.cold () from /lib64/libc.so.6
  #3  0x00007ffff75de596 in __assert_fail () from /lib64/libc.so.6
  #4  0x00000000004fc006 in refcount_sub_and_test (i=1, r=0x1224e88) at tools/include/linux/refcount.h:131
  #5  refcount_dec_and_test (r=0x1224e88) at tools/include/linux/refcount.h:148
  #6  map__put (map=0x1224df0) at util/map.c:299
  #7  0x00000000004fdb95 in __maps__remove (map=0x1224df0, maps=0xb17d80) at util/map.c:953
  #8  maps__remove (maps=0xb17d80, map=0x1224df0) at util/map.c:959
  #9  0x00000000004f7d8a in map_groups__remove (map=<optimized out>, mg=<optimized out>) at util/map_groups.h:65
  #10 machine__process_ksymbol_unregister (sample=<optimized out>, event=0x7ffff7279670, machine=<optimized out>) at util/machine.c:728
  #11 machine__process_ksymbol (machine=<optimized out>, event=0x7ffff7279670, sample=<optimized out>) at util/machine.c:741
  #12 0x00000000004fffbb in perf_session__deliver_event (session=0xb11390, event=0x7ffff7279670, tool=0x7fffffffc7b0, file_offset=13936) at util/session.c:1362
  #13 0x00000000005039bb in do_flush (show_progress=false, oe=0xb17e80) at util/ordered-events.c:243
  #14 __ordered_events__flush (oe=0xb17e80, how=OE_FLUSH__ROUND, timestamp=<optimized out>) at util/ordered-events.c:322
  #15 0x00000000005005e4 in perf_session__process_user_event (session=session@entry=0xb11390, event=event@entry=0x7ffff72a4af8,
  ...

Add the map to the list and take the reference even if we find a map
with the same name, as sketched below.
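
A hedged sketch of the by-name insertion with the reference taken
unconditionally (tree and field names assumed from this era of
tools/perf):

  static void __maps__insert_name(struct maps *maps, struct map *map)
  {
          struct rb_node **p = &maps->names.rb_node;
          struct rb_node *parent = NULL;
          struct map *m;

          while (*p) {
                  parent = *p;
                  m = rb_entry(parent, struct map, rb_node_name);
                  if (strcmp(m->dso->name, map->dso->name) < 0)
                          p = &(*p)->rb_left;
                  else
                          p = &(*p)->rb_right;
          }
          rb_link_node(&map->rb_node_name, parent, p);
          rb_insert_color(&map->rb_node_name, &maps->names);
          map__get(map);   /* the ref that maps__remove() will put */
  }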

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Eric Saint-Etienne <eric.saint.etienne@oracle.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Song Liu <songliubraving@fb.com>
Fixes: 1e62856 ("perf symbols: Fix slowness due to -ffunction-section")
Link: http://lkml.kernel.org/r/20190416160127.30203-10-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
There is a UBSAN report as below:
UBSAN: Undefined behaviour in net/ipv4/tcp_input.c:2877:56
signed integer overflow:
2147483647 * 1000 cannot be represented in type 'int'
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 5.1.0-rc4-00058-g582549e #1
Call Trace:
 <IRQ>
 dump_stack+0x8c/0xba
 ubsan_epilogue+0x11/0x60
 handle_overflow+0x12d/0x170
 ? ttwu_do_wakeup+0x21/0x320
 __ubsan_handle_mul_overflow+0x12/0x20
 tcp_ack_update_rtt+0x76c/0x780
 tcp_clean_rtx_queue+0x499/0x14d0
 tcp_ack+0x69e/0x1240
 ? __wake_up_sync_key+0x2c/0x50
 ? update_group_capacity+0x50/0x680
 tcp_rcv_established+0x4e2/0xe10
 tcp_v4_do_rcv+0x22b/0x420
 tcp_v4_rcv+0xfe8/0x1190
 ip_protocol_deliver_rcu+0x36/0x180
 ip_local_deliver+0x15b/0x1a0
 ip_rcv+0xac/0xd0
 __netif_receive_skb_one_core+0x7f/0xb0
 __netif_receive_skb+0x33/0xc0
 netif_receive_skb_internal+0x84/0x1c0
 napi_gro_receive+0x2a0/0x300
 receive_buf+0x3d4/0x2350
 ? detach_buf_split+0x159/0x390
 virtnet_poll+0x198/0x840
 ? reweight_entity+0x243/0x4b0
 net_rx_action+0x25c/0x770
 __do_softirq+0x19b/0x66d
 irq_exit+0x1eb/0x230
 do_IRQ+0x7a/0x150
 common_interrupt+0xf/0xf
 </IRQ>

It can be reproduced by:
  echo 2147483647 > /proc/sys/net/ipv4/tcp_min_rtt_wlen
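
A hedged sketch of one fix, bounding the sysctl so the later
multiplication by HZ (1000 here) cannot overflow an int; the table
entry is abridged:

  static int zero;
  static int one_day_secs = 24 * 3600;

  {
          .procname     = "tcp_min_rtt_wlen",
          .data         = &init_net.ipv4.sysctl_tcp_min_rtt_wlen,
          .maxlen       = sizeof(int),
          .mode         = 0644,
          .proc_handler = proc_dointvec_minmax,
          .extra1       = &zero,
          .extra2       = &one_day_secs,   /* 86400 * 1000 fits in int */
  },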

Fixes: f672258 ("tcp: track min RTT using windowed min-filter")
Signed-off-by: ZhangXiaoxu <zhangxiaoxu5@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Ido Schimmel says:

====================
mlxsw: Few small fixes

Patch #1, from Petr, adjusts mlxsw to provide the same QoS behavior for
both Spectrum-1 and Spectrum-2. The fix is required due to a difference
in the behavior of Spectrum-2 compared to Spectrum-1. The problem and
solution are described in detail in the changelog.

Patch #2 increases the time period in which the driver waits for the
firmware to signal it has finished its initialization. The issue will be
fixed in future firmware versions and the timeout will be decreased.

Patch #3, from Amit, fixes a display problem where the autoneg status in
ethtool is not updated in case the netdev is not running.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
When a module option, or core kernel argument, toggles a static-key it
requires jump labels to be initialized early.  While x86, PowerPC, and
ARM64 arrange for jump_label_init() to be called before parse_args(),
ARM does not.

  Kernel command line: rdinit=/sbin/init page_alloc.shuffle=1 panic=-1 console=ttyAMA0,115200 page_alloc.shuffle=1
  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 0 at ./include/linux/jump_label.h:303
  page_alloc_shuffle+0x12c/0x1ac
  static_key_enable(): static key 'page_alloc_shuffle_key+0x0/0x4' used
  before call to jump_label_init()
  Modules linked in:
  CPU: 0 PID: 0 Comm: swapper Not tainted
  5.1.0-rc4-next-20190410-00003-g3367c36ce744 #1
  Hardware name: ARM Integrator/CP (Device Tree)
  [<c0011c68>] (unwind_backtrace) from [<c000ec48>] (show_stack+0x10/0x18)
  [<c000ec48>] (show_stack) from [<c07e9710>] (dump_stack+0x18/0x24)
  [<c07e9710>] (dump_stack) from [<c001bb1c>] (__warn+0xe0/0x108)
  [<c001bb1c>] (__warn) from [<c001bb88>] (warn_slowpath_fmt+0x44/0x6c)
  [<c001bb88>] (warn_slowpath_fmt) from [<c0b0c4a8>]
  (page_alloc_shuffle+0x12c/0x1ac)
  [<c0b0c4a8>] (page_alloc_shuffle) from [<c0b0c550>] (shuffle_store+0x28/0x48)
  [<c0b0c550>] (shuffle_store) from [<c003e6a0>] (parse_args+0x1f4/0x350)
  [<c003e6a0>] (parse_args) from [<c0ac3c00>] (start_kernel+0x1c0/0x488)

Move the fallback call to jump_label_init() to occur before
parse_args().

The redundant calls to jump_label_init() in other archs are left intact
in case they have static key toggling use cases that are even earlier
than option parsing.
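
A hedged sketch of the resulting ordering in start_kernel() (surrounding
calls elided; return-value handling of parse_args() omitted):

  asmlinkage __visible void __init start_kernel(void)
  {
          /* ... early setup ... */

          /* Arch code may already have initialized jump labels; the
           * fallback must run before parse_args() so that module
           * options can toggle static keys safely on every arch. */
          jump_label_init();

          parse_args("Booting kernel",
                     static_command_line, __start___param,
                     __stop___param - __start___param,
                     -1, -1, NULL, &unknown_bootoption);

          /* ... rest of boot ... */
  }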

Link: http://lkml.kernel.org/r/155544804466.1032396.13418949511615676665.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Guenter Roeck <groeck@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Russell King <rmk@armlinux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
The way of getting the private imx_i2c_struct in
i2c_imx_clk_notifier_call() is incorrect: it should use the
clk_change_nb element to get the correct address, which avoids the
kernel dump below during a POST_RATE_CHANGE notification from the clk
framework (see the sketch after the trace):

Unable to handle kernel paging request at virtual address 03ef1488
pgd = (ptrval)
[03ef1488] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
Workqueue: events reduce_bus_freq_handler
PC is at i2c_imx_set_clk+0x10/0xb8
LR is at i2c_imx_clk_notifier_call+0x20/0x28
pc : [<806a893c>]    lr : [<806a8a04>]    psr: a0080013
sp : bf399dd8  ip : bf3432ac  fp : bf7c1dc0
r10: 00000002  r9 : 00000000  r8 : 00000000
r7 : 03ef1480  r6 : bf399e50  r5 : ffffffff  r4 : 00000000
r3 : bf025300  r2 : bf399e50  r1 : 00b71b00  r0 : bf399be8
Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
Control: 10c5387d  Table: 4e03004a  DAC: 00000051
Process kworker/2:1 (pid: 38, stack limit = 0x(ptrval))
Stack: (0xbf399dd8 to 0xbf39a000)
9dc0:                                                       806a89e4 00000000
9de0: ffffffff bf399e50 00000002 806a8a04 806a89e4 80142900 ffffffff 00000000
9e00: bf34ef18 bf34ef04 00000000 ffffffff bf399e50 80142d84 00000000 bf399e6c
9e20: bf34ef00 80f214c4 bf025300 00000002 80f08d08 bf017480 00000000 80142df0
9e40: 00000000 80166ed8 80c27638 8045de58 bf352340 03ef1480 00b71b00 0f82e242
9e60: bf025300 00000002 03ef1480 80f60e5c 00000001 8045edf0 00000002 8045eb08
9e80: bf025300 00000002 03ef1480 8045ee10 03ef1480 8045eb08 bf01be40 00000002
9ea0: 03ef1480 8045ee10 07de2900 8045eb08 bf01b780 00000002 07de2900 8045ee10
9ec0: 80c27898 bf399ee4 bf020a80 00000002 1f78a400 8045ee10 80f60e5c 80460514
9ee0: 80f60e5c bf01b600 bf01b480 80460460 0f82e242 bf383a80 bf383a00 80f60e5c
9f00: 00000000 bf7c1dc0 80f60e70 80460564 80f60df0 80f60d24 80f60df0 8011e72c
9f20: 00000000 80f60df0 80f60e6c bf7c4f00 00000000 8011e7ac bf274000 8013bd84
9f40: bf7c1dd8 80f03d00 bf274000 bf7c1dc0 bf274014 bf7c1dd8 80f03d00 bf398000
9f60: 00000008 8013bfb4 00000000 bf25d100 bf25d0c0 00000000 bf274000 8013bf88
9f80: bf25d11c bf0cfebc 00000000 8014140c bf25d0c0 801412ec 00000000 00000000
9fa0: 00000000 00000000 00000000 801010e8 00000000 00000000 00000000 00000000
9fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
9fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
[<806a893c>] (i2c_imx_set_clk) from [<806a8a04>] (i2c_imx_clk_notifier_call+0x20/0x28)
[<806a8a04>] (i2c_imx_clk_notifier_call) from [<80142900>] (notifier_call_chain+0x44/0x84)
[<80142900>] (notifier_call_chain) from [<80142d84>] (__srcu_notifier_call_chain+0x44/0x98)
[<80142d84>] (__srcu_notifier_call_chain) from [<80142df0>] (srcu_notifier_call_chain+0x18/0x20)
[<80142df0>] (srcu_notifier_call_chain) from [<8045de58>] (__clk_notify+0x78/0xa4)
[<8045de58>] (__clk_notify) from [<8045edf0>] (__clk_recalc_rates+0x60/0xb4)
[<8045edf0>] (__clk_recalc_rates) from [<8045ee10>] (__clk_recalc_rates+0x80/0xb4)
Code: e92d40f8 e5903298 e59072a0 e1530001 (e5975008)
---[ end trace fc7f5514b97b6cbb ]---
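
A hedged sketch of the corrected lookup, mirroring the names in the
trace (the i2c_imx_set_clk() signature is assumed):

  static int i2c_imx_clk_notifier_call(struct notifier_block *nb,
                                       unsigned long action, void *data)
  {
          struct clk_notifier_data *ndata = data;
          /* Recover the driver data from the notifier_block that was
           * actually registered, not from an unrelated member. */
          struct imx_i2c_struct *i2c_imx = container_of(nb,
                                  struct imx_i2c_struct, clk_change_nb);

          if (action & POST_RATE_CHANGE)
                  i2c_imx_set_clk(i2c_imx, ndata->new_rate);

          return NOTIFY_OK;
  }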

Fixes: 90ad2cb ("i2c: imx: use clk notifier for rate changes")
Signed-off-by: Anson Huang <Anson.Huang@nxp.com>
Reviewed-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Cc: stable@kernel.org
juno-kim pushed a commit that referenced this issue Jun 4, 2019
After commit 5271953 ("rxrpc: Use the UDP encap_rcv hook"),
rxrpc_input_packet() is directly called from lockless UDP receive
path, under rcu_read_lock() protection.

It must therefore use RCU rules:

- udp_sk->sk_user_data can be cleared at any point in this function.
  rcu_dereference_sk_user_data() is what we need here (sketched below).

- Also, since sk_user_data might have been set in rxrpc_open_socket()
  we must observe a proper RCU grace period before kfree(local) in
  rxrpc_lookup_local()
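
A hedged sketch of the first rule applied at the top of
rxrpc_input_packet():

  static int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
  {
          struct rxrpc_local *local;

          /* sk_user_data may be cleared at any time; fetch it under
           * the RCU read lock the UDP encap path already holds. */
          local = rcu_dereference_sk_user_data(udp_sk);
          if (unlikely(!local)) {
                  kfree_skb(skb);
                  return 0;
          }

          /* ... normal packet processing ... */
          return 0;
  }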

v4: @local can be NULL in rxrpc_lookup_local(), as reported by kbuild test robot <lkp@intel.com>
        and Julia Lawall <julia.lawall@lip6.fr>, thanks!

v3, v2: addressed David Howells' feedback, thanks!

syzbot reported :

kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 19236 Comm: syz-executor703 Not tainted 5.1.0-rc6 #79
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:__lock_acquire+0xbef/0x3fb0 kernel/locking/lockdep.c:3573
Code: 00 0f 85 a5 1f 00 00 48 81 c4 10 01 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 48 b8 00 00 00 00 00 fc ff df 4c 89 ea 48 c1 ea 03 <80> 3c 02 00 0f 85 4a 21 00 00 49 81 7d 00 20 54 9c 89 0f 84 cf f4
RSP: 0018:ffff88809d7aef58 EFLAGS: 00010002
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000026 RSI: 0000000000000000 RDI: 0000000000000001
RBP: ffff88809d7af090 R08: 0000000000000001 R09: 0000000000000001
R10: ffffed1015d05bc7 R11: ffff888089428600 R12: 0000000000000000
R13: 0000000000000130 R14: 0000000000000001 R15: 0000000000000001
FS:  00007f059044d700(0000) GS:ffff8880ae800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000004b6040 CR3: 00000000955ca000 CR4: 00000000001406f0
Call Trace:
 lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4211
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x95/0xcd kernel/locking/spinlock.c:152
 skb_queue_tail+0x26/0x150 net/core/skbuff.c:2972
 rxrpc_reject_packet net/rxrpc/input.c:1126 [inline]
 rxrpc_input_packet+0x4a0/0x5536 net/rxrpc/input.c:1414
 udp_queue_rcv_one_skb+0xaf2/0x1780 net/ipv4/udp.c:2011
 udp_queue_rcv_skb+0x128/0x730 net/ipv4/udp.c:2085
 udp_unicast_rcv_skb.isra.0+0xb9/0x360 net/ipv4/udp.c:2245
 __udp4_lib_rcv+0x701/0x2ca0 net/ipv4/udp.c:2301
 udp_rcv+0x22/0x30 net/ipv4/udp.c:2482
 ip_protocol_deliver_rcu+0x60/0x8f0 net/ipv4/ip_input.c:208
 ip_local_deliver_finish+0x23b/0x390 net/ipv4/ip_input.c:234
 NF_HOOK include/linux/netfilter.h:289 [inline]
 NF_HOOK include/linux/netfilter.h:283 [inline]
 ip_local_deliver+0x1e9/0x520 net/ipv4/ip_input.c:255
 dst_input include/net/dst.h:450 [inline]
 ip_rcv_finish+0x1e1/0x300 net/ipv4/ip_input.c:413
 NF_HOOK include/linux/netfilter.h:289 [inline]
 NF_HOOK include/linux/netfilter.h:283 [inline]
 ip_rcv+0xe8/0x3f0 net/ipv4/ip_input.c:523
 __netif_receive_skb_one_core+0x115/0x1a0 net/core/dev.c:4987
 __netif_receive_skb+0x2c/0x1c0 net/core/dev.c:5099
 netif_receive_skb_internal+0x117/0x660 net/core/dev.c:5202
 napi_frags_finish net/core/dev.c:5769 [inline]
 napi_gro_frags+0xade/0xd10 net/core/dev.c:5843
 tun_get_user+0x2f24/0x3fb0 drivers/net/tun.c:1981
 tun_chr_write_iter+0xbd/0x156 drivers/net/tun.c:2027
 call_write_iter include/linux/fs.h:1866 [inline]
 do_iter_readv_writev+0x5e1/0x8e0 fs/read_write.c:681
 do_iter_write fs/read_write.c:957 [inline]
 do_iter_write+0x184/0x610 fs/read_write.c:938
 vfs_writev+0x1b3/0x2f0 fs/read_write.c:1002
 do_writev+0x15e/0x370 fs/read_write.c:1037
 __do_sys_writev fs/read_write.c:1110 [inline]
 __se_sys_writev fs/read_write.c:1107 [inline]
 __x64_sys_writev+0x75/0xb0 fs/read_write.c:1107
 do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Fixes: 5271953 ("rxrpc: Use the UDP encap_rcv hook")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
When ddc-i2c-bus property is used, a NULL pointer dereference is reported:

[   31.041669] Unable to handle kernel NULL pointer dereference at virtual address 00000008
[   31.041671] pgd = 4d3c16f6
[   31.041673] [00000008] *pgd=00000000
[   31.041678] Internal error: Oops: 5 [#1] SMP ARM

[   31.041711] Hardware name: Rockchip (Device Tree)
[   31.041718] PC is at i2c_transfer+0x8/0xe4
[   31.041721] LR is at drm_scdc_read+0x54/0x84
[   31.041723] pc : [<c073273c>]    lr : [<c05926c4>]    psr: 280f0013
[   31.041725] sp : edffdad0  ip : 5ccb5511  fp : 00000058
[   31.041727] r10: 00000780  r9 : edf91608  r8 : c11b0f48
[   31.041728] r7 : 00000438  r6 : 00000000  r5 : 00000000  r4 : 00000000
[   31.041730] r3 : edffdae7  r2 : 00000002  r1 : edffdaec  r0 : 00000000

[   31.041908] [<c073273c>] (i2c_transfer) from [<c05926c4>] (drm_scdc_read+0x54/0x84)
[   31.041913] [<c05926c4>] (drm_scdc_read) from [<c0592858>] (drm_scdc_set_scrambling+0x30/0xbc)
[   31.041919] [<c0592858>] (drm_scdc_set_scrambling) from [<c05cc0f4>] (dw_hdmi_update_power+0x1440/0x1610)
[   31.041926] [<c05cc0f4>] (dw_hdmi_update_power) from [<c05cc574>] (dw_hdmi_bridge_enable+0x2c/0x70)
[   31.041932] [<c05cc574>] (dw_hdmi_bridge_enable) from [<c05aed48>] (drm_bridge_enable+0x24/0x34)
[   31.041938] [<c05aed48>] (drm_bridge_enable) from [<c0591060>] (drm_atomic_helper_commit_modeset_enables+0x114/0x220)
[   31.041943] [<c0591060>] (drm_atomic_helper_commit_modeset_enables) from [<c05c3fe0>] (rockchip_atomic_helper_commit_tail_rpm+0x28/0x64)

hdmi->i2c may not be set when ddc-i2c-bus property is used in device tree.
Fix this by using hdmi->ddc as the i2c adapter when calling drm_scdc_*().
Also report that SCDC is not supported when there is no DDC bus.
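
A hedged sketch of both parts of the fix (the sink_is_hdmi check stands
in for the remaining capability tests):

  static bool dw_hdmi_support_scdc(struct dw_hdmi *hdmi)
  {
          /* Without a DDC bus there is no way to reach the sink's
           * SCDC registers, so report SCDC as unsupported. */
          if (!hdmi->ddc)
                  return false;

          return hdmi->sink_is_hdmi;      /* remaining checks assumed */
  }

  /* ...and at enable time, always pass the DDC adapter, which is set
   * for both the internal I2C master and an external ddc-i2c-bus: */
  if (dw_hdmi_support_scdc(hdmi))
          drm_scdc_set_scrambling(hdmi->ddc, true);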

Fixes: 264fce6 ("drm/bridge: dw-hdmi: Add SCDC and TMDS Scrambling support")
Signed-off-by: Jonas Karlman <jonas@kwiboo.se>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Reviewed-by: Neil Armstrong <narmstrong@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/VE1PR03MB59031814B5BCAB2152923BDAAC210@VE1PR03MB5903.eurprd03.prod.outlook.com
juno-kim pushed a commit that referenced this issue Jun 4, 2019
…esult

During the development of commit 5e1f0f0 ("mm, compaction: capture
a page under direct compaction"), a paranoid check was added to ensure
that if a captured page was available after compaction that it was
consistent with the final state of compaction.  The intent was to catch
serious programming bugs such as using a stale page pointer and causing
corruption problems.

However, it is possible to get a captured page even if compaction was
unsuccessful if an interrupt triggered and happened to free pages in
interrupt context that got merged into a suitable high-order page.  It's
highly unlikely but Li Wang did report the following warning on s390
occurring when testing OOM handling.  Note that the warning is slightly
edited for clarity.

  WARNING: CPU: 0 PID: 9783 at mm/page_alloc.c:3777 __alloc_pages_direct_compact+0x182/0x190
  Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs
    lockd grace fscache sunrpc pkey ghash_s390 prng xts aes_s390
    des_s390 des_generic sha512_s390 zcrypt_cex4 zcrypt vmur binfmt_misc
    ip_tables xfs libcrc32c dasd_fba_mod qeth_l2 dasd_eckd_mod dasd_mod
    qeth qdio lcs ctcm ccwgroup fsm dm_mirror dm_region_hash dm_log
    dm_mod
  CPU: 0 PID: 9783 Comm: copy.sh Kdump: loaded Not tainted 5.1.0-rc 5 #1

This patch simply removes the check entirely instead of trying to be
clever about pages freed from interrupt context.  If a serious
programming error was introduced, it is highly likely to be caught by
prep_new_page() instead.

Link: http://lkml.kernel.org/r/20190419085133.GH18914@techsingularity.net
Fixes: 5e1f0f0 ("mm, compaction: capture a page under direct compaction")
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reported-by: Li Wang <liwang@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
Syzkaller reports this:

  sysctl could not get directory: /net//bridge -12
  kasan: CONFIG_KASAN_INLINE enabled
  kasan: GPF could be caused by NULL-ptr deref or user memory access
  general protection fault: 0000 [#1] SMP KASAN PTI
  CPU: 1 PID: 7027 Comm: syz-executor.0 Tainted: G         C        5.1.0-rc3+ #8
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
  RIP: 0010:__write_once_size include/linux/compiler.h:220 [inline]
  RIP: 0010:__rb_change_child include/linux/rbtree_augmented.h:144 [inline]
  RIP: 0010:__rb_erase_augmented include/linux/rbtree_augmented.h:186 [inline]
  RIP: 0010:rb_erase+0x5f4/0x19f0 lib/rbtree.c:459
  Code: 00 0f 85 60 13 00 00 48 89 1a 48 83 c4 18 5b 5d 41 5c 41 5d 41 5e 41 5f c3 48 89 f2 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 <80> 3c 02 00 0f 85 75 0c 00 00 4d 85 ed 4c 89 2e 74 ce 4c 89 ea 48
  RSP: 0018:ffff8881bb507778 EFLAGS: 00010206
  RAX: dffffc0000000000 RBX: ffff8881f224b5b8 RCX: ffffffff818f3f6a
  RDX: 000000000000000a RSI: 0000000000000050 RDI: ffff8881f224b568
  RBP: 0000000000000000 R08: ffffed10376a0ef4 R09: ffffed10376a0ef4
  R10: 0000000000000001 R11: ffffed10376a0ef4 R12: ffff8881f224b558
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
  FS:  00007f3e7ce13700(0000) GS:ffff8881f7300000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007fd60fbe9398 CR3: 00000001cb55c001 CR4: 00000000007606e0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  PKRU: 55555554
  Call Trace:
   erase_entry fs/proc/proc_sysctl.c:178 [inline]
   erase_header+0xe3/0x160 fs/proc/proc_sysctl.c:207
   start_unregistering fs/proc/proc_sysctl.c:331 [inline]
   drop_sysctl_table+0x558/0x880 fs/proc/proc_sysctl.c:1631
   get_subdir fs/proc/proc_sysctl.c:1022 [inline]
   __register_sysctl_table+0xd65/0x1090 fs/proc/proc_sysctl.c:1335
   br_netfilter_init+0x68/0x1000 [br_netfilter]
   do_one_initcall+0xbc/0x47d init/main.c:901
   do_init_module+0x1b5/0x547 kernel/module.c:3456
   load_module+0x6405/0x8c10 kernel/module.c:3804
   __do_sys_finit_module+0x162/0x190 kernel/module.c:3898
   do_syscall_64+0x9f/0x450 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe
  Modules linked in: br_netfilter(+) backlight comedi(C) hid_sensor_hub max3100 ti_ads8688 udc_core fddi snd_mona leds_gpio rc_streamzap mtd pata_netcell nf_log_common rc_winfast udp_tunnel snd_usbmidi_lib snd_usb_toneport snd_usb_line6 snd_rawmidi snd_seq_device snd_hwdep videobuf2_v4l2 videobuf2_common videodev media videobuf2_vmalloc videobuf2_memops rc_gadmei_rm008z 8250_of smm665 hid_tmff hid_saitek hwmon_vid rc_ati_tv_wonder_hd_600 rc_core pata_pdc202xx_old dn_rtmsg as3722 ad714x_i2c ad714x snd_soc_cs4265 hid_kensington panel_ilitek_ili9322 drm drm_panel_orientation_quirks ipack cdc_phonet usbcore phonet hid_jabra hid extcon_arizona can_dev industrialio_triggered_buffer kfifo_buf industrialio adm1031 i2c_mux_ltc4306 i2c_mux ipmi_msghandler mlxsw_core snd_soc_cs35l34 snd_soc_core snd_pcm_dmaengine snd_pcm snd_timer ac97_bus snd_compress snd soundcore gpio_da9055 uio ecdh_generic mdio_thunder of_mdio fixed_phy libphy mdio_cavium iptable_security iptable_raw iptable_mangle
   iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_filter bpfilter ip6_vti ip_vti ip_gre ipip sit tunnel4 ip_tunnel hsr veth netdevsim vxcan batman_adv cfg80211 rfkill chnl_net caif nlmon dummy team bonding vcan bridge stp llc ip6_gre gre ip6_tunnel tunnel6 tun joydev mousedev ppdev tpm kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel ide_pci_generic piix aes_x86_64 crypto_simd cryptd ide_core glue_helper input_leds psmouse intel_agp intel_gtt serio_raw ata_generic i2c_piix4 agpgart pata_acpi parport_pc parport floppy rtc_cmos sch_fq_codel ip_tables x_tables sha1_ssse3 sha1_generic ipv6 [last unloaded: br_netfilter]
  Dumping ftrace buffer:
     (ftrace buffer empty)
  ---[ end trace 68741688d5fbfe85 ]---

commit 23da958 ("fs/proc/proc_sysctl.c: fix NULL pointer
dereference in put_links") forgot to handle the start_unregistering()
case: when header->parent is NULL, it still calls erase_header(), and,
as seen in the syzkaller call trace above, accessing
&header->parent->root triggers a NULL pointer dereference.

As that commit explained, there is also no need to call
start_unregistering() if header->parent is NULL, as sketched below.
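
A hedged sketch of the guard in drop_sysctl_table() (shape follows the
upstream function):

  static void drop_sysctl_table(struct ctl_table_header *header)
  {
          struct ctl_dir *parent = header->parent;

          if (--header->nreg)
                  return;

          /* Root entries have no parent; skip the link teardown that
           * would chase header->parent->root into a NULL pointer. */
          if (parent) {
                  put_links(header);
                  start_unregistering(header);
          }

          if (!--header->count)
                  kfree_rcu(header, rcu);

          if (parent)
                  drop_sysctl_table(&parent->header);
  }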

Link: http://lkml.kernel.org/r/20190409153622.28112-1-yuehaibing@huawei.com
Fixes: 23da958 ("fs/proc/proc_sysctl.c: fix NULL pointer dereference in put_links")
Fixes: 0e47c99 ("sysctl: Replace root_list with links between sysctl_table_sets")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
We don't check for the validity of the lengths in the packet received
from the firmware.  If the MPDU length received in the rx descriptor
is too short to contain the header length and the crypt length
together, we may end up trying to copy a negative number of bytes
(headlen - hdrlen < 0) which will underflow and cause us to try to
copy a huge amount of data.  This causes oopses such as this one:

BUG: unable to handle kernel paging request at ffff896be2970000
PGD 5e201067 P4D 5e201067 PUD 5e205067 PMD 16110d063 PTE 8000000162970161
Oops: 0003 [#1] PREEMPT SMP NOPTI
CPU: 2 PID: 1824 Comm: irq/134-iwlwifi Not tainted 4.19.33-04308-geea41cf4930f #1
Hardware name: [...]
RIP: 0010:memcpy_erms+0x6/0x10
Code: 90 90 90 90 eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3
 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
RSP: 0018:ffffa4630196fc60 EFLAGS: 00010287
RAX: ffff896be2924618 RBX: ffff896bc8ecc600 RCX: 00000000fffb4610
RDX: 00000000fffffff8 RSI: ffff896a835e2a38 RDI: ffff896be2970000
RBP: ffffa4630196fd30 R08: ffff896bc8ecc600 R09: ffff896a83597000
R10: ffff896bd6998400 R11: 000000000200407f R12: ffff896a83597050
R13: 00000000fffffff8 R14: 0000000000000010 R15: ffff896a83597038
FS:  0000000000000000(0000) GS:ffff896be8280000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff896be2970000 CR3: 000000005dc12002 CR4: 00000000003606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 iwl_mvm_rx_mpdu_mq+0xb51/0x121b [iwlmvm]
 iwl_pcie_rx_handle+0x58c/0xa89 [iwlwifi]
 iwl_pcie_irq_rx_msix_handler+0xd9/0x12a [iwlwifi]
 irq_thread_fn+0x24/0x49
 irq_thread+0xb0/0x122
 kthread+0x138/0x140
 ret_from_fork+0x1f/0x40

Fix that by checking the lengths for correctness and triggering a
warning to show that we have received wrong data, as sketched below.
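
A hedged sketch of the check, using the variable names from the text
above:

  /* Bail out before headlen - hdrlen can underflow and turn into a
   * huge unsigned memcpy length. */
  if (WARN_ONCE(headlen < hdrlen,
                "invalid packet lengths (hdrlen=%u, headlen=%u)\n",
                hdrlen, headlen)) {
          kfree_skb(skb);
          return;
  }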

Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
juno-kim pushed a commit that referenced this issue Jun 4, 2019
If io_allocate_scq_urings() fails to allocate an sq_* region, it will
call io_mem_free() for any previously allocated regions, but leave
dangling pointers to these regions in the ctx. Any regions which have
not yet been allocated are left NULL. Note that when returning
-EOVERFLOW, the previously allocated sq_ring is not freed, which appears
to be an unintentional leak.

When io_allocate_scq_urings() fails, io_uring_create() will call
io_ring_ctx_wait_and_kill(), which calls io_mem_free() on all the sq_*
regions, assuming the pointers are valid and not NULL.

This can result in pages being freed multiple times, which has been
observed to corrupt the page state, leading to subsequent fun. This can
also result in virt_to_page() on NULL, resulting in the use of bogus
page addresses, and yet more subsequent fun. The latter can be detected
with CONFIG_DEBUG_VIRTUAL on arm64.

Adding a cleanup path to io_allocate_scq_urings() complicates the logic,
so let's leave it to io_ring_ctx_free() to consistently free these
pointers, and simplify the io_allocate_scq_urings() error paths.
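
A hedged sketch of the teardown side: a NULL-safe io_mem_free() lets
io_ring_ctx_free() unconditionally free whatever was allocated:

  static void io_mem_free(void *ptr)
  {
          struct page *page;

          /* Regions that were never allocated stay NULL in the ctx. */
          if (!ptr)
                  return;

          page = virt_to_head_page(ptr);
          if (put_page_testzero(page))
                  free_compound_page(page);
  }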

Full splats from before this patch below. Note that the pointer logged
by the DEBUG_VIRTUAL "non-linear address" warning has been hashed, and
is actually NULL.

[   26.098129] page:ffff80000e949a00 count:0 mapcount:-128 mapping:0000000000000000 index:0x0
[   26.102976] flags: 0x63fffc000000()
[   26.104373] raw: 000063fffc000000 ffff80000e86c188 ffff80000ea3df08 0000000000000000
[   26.108917] raw: 0000000000000000 0000000000000001 00000000ffffff7f 0000000000000000
[   26.137235] page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
[   26.143960] ------------[ cut here ]------------
[   26.146020] kernel BUG at include/linux/mm.h:547!
[   26.147586] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[   26.149163] Modules linked in:
[   26.150287] Process syz-executor.21 (pid: 20204, stack limit = 0x000000000e9cefeb)
[   26.153307] CPU: 2 PID: 20204 Comm: syz-executor.21 Not tainted 5.1.0-rc7-00004-g7d30b2ea43d6 #18
[   26.156566] Hardware name: linux,dummy-virt (DT)
[   26.158089] pstate: 40400005 (nZcv daif +PAN -UAO)
[   26.159869] pc : io_mem_free+0x9c/0xa8
[   26.161436] lr : io_mem_free+0x9c/0xa8
[   26.162720] sp : ffff000013003d60
[   26.164048] x29: ffff000013003d60 x28: ffff800025048040
[   26.165804] x27: 0000000000000000 x26: ffff800025048040
[   26.167352] x25: 00000000000000c0 x24: ffff0000112c2820
[   26.169682] x23: 0000000000000000 x22: 0000000020000080
[   26.171899] x21: ffff80002143b418 x20: ffff80002143b400
[   26.174236] x19: ffff80002143b280 x18: 0000000000000000
[   26.176607] x17: 0000000000000000 x16: 0000000000000000
[   26.178997] x15: 0000000000000000 x14: 0000000000000000
[   26.181508] x13: 00009178a5e077b2 x12: 0000000000000001
[   26.183863] x11: 0000000000000000 x10: 0000000000000980
[   26.186437] x9 : ffff000013003a80 x8 : ffff800025048a20
[   26.189006] x7 : ffff8000250481c0 x6 : ffff80002ffe9118
[   26.191359] x5 : ffff80002ffe9118 x4 : 0000000000000000
[   26.193863] x3 : ffff80002ffefe98 x2 : 44c06ddd107d1f00
[   26.196642] x1 : 0000000000000000 x0 : 000000000000003e
[   26.198892] Call trace:
[   26.199893]  io_mem_free+0x9c/0xa8
[   26.201155]  io_ring_ctx_wait_and_kill+0xec/0x180
[   26.202688]  io_uring_setup+0x6c4/0x6f0
[   26.204091]  __arm64_sys_io_uring_setup+0x18/0x20
[   26.205576]  el0_svc_common.constprop.0+0x7c/0xe8
[   26.207186]  el0_svc_handler+0x28/0x78
[   26.208389]  el0_svc+0x8/0xc
[   26.209408] Code: aa0203e0 d0006861 9133a021 97fcdc3c (d4210000)
[   26.211995] ---[ end trace bdb81cd43a21e50d ]---

[   81.770626] ------------[ cut here ]------------
[   81.825015] virt_to_phys used for non-linear address: 000000000d42f2c7 (          (null))
[   81.827860] WARNING: CPU: 1 PID: 30171 at arch/arm64/mm/physaddr.c:15 __virt_to_phys+0x48/0x68
[   81.831202] Modules linked in:
[   81.832212] CPU: 1 PID: 30171 Comm: syz-executor.20 Not tainted 5.1.0-rc7-00004-g7d30b2ea43d6 #19
[   81.835616] Hardware name: linux,dummy-virt (DT)
[   81.836863] pstate: 60400005 (nZCv daif +PAN -UAO)
[   81.838727] pc : __virt_to_phys+0x48/0x68
[   81.840572] lr : __virt_to_phys+0x48/0x68
[   81.842264] sp : ffff80002cf67c70
[   81.843858] x29: ffff80002cf67c70 x28: ffff800014358e18
[   81.846463] x27: 0000000000000000 x26: 0000000020000080
[   81.849148] x25: 0000000000000000 x24: ffff80001bb01f40
[   81.851986] x23: ffff200011db06c8 x22: ffff2000127e3c60
[   81.854351] x21: ffff800014358cc0 x20: ffff800014358d98
[   81.856711] x19: 0000000000000000 x18: 0000000000000000
[   81.859132] x17: 0000000000000000 x16: 0000000000000000
[   81.861586] x15: 0000000000000000 x14: 0000000000000000
[   81.863905] x13: 0000000000000000 x12: ffff1000037603e9
[   81.866226] x11: 1ffff000037603e8 x10: 0000000000000980
[   81.868776] x9 : ffff80002cf67840 x8 : ffff80001bb02920
[   81.873272] x7 : ffff1000037603e9 x6 : ffff80001bb01f47
[   81.875266] x5 : ffff1000037603e9 x4 : dfff200000000000
[   81.876875] x3 : ffff200010087528 x2 : ffff1000059ecf58
[   81.878751] x1 : 44c06ddd107d1f00 x0 : 0000000000000000
[   81.880453] Call trace:
[   81.881164]  __virt_to_phys+0x48/0x68
[   81.882919]  io_mem_free+0x18/0x110
[   81.886585]  io_ring_ctx_wait_and_kill+0x13c/0x1f0
[   81.891212]  io_uring_setup+0xa60/0xad0
[   81.892881]  __arm64_sys_io_uring_setup+0x2c/0x38
[   81.894398]  el0_svc_common.constprop.0+0xac/0x150
[   81.896306]  el0_svc_handler+0x34/0x88
[   81.897744]  el0_svc+0x8/0xc
[   81.898715] ---[ end trace b4a703802243cbba ]---

Fixes: 2b188cc ("Add io_uring IO interface")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-block@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
mhoseinzadeh pushed a commit that referenced this issue Dec 12, 2019
mhoseinzadeh pushed a commit that referenced this issue Dec 12, 2019