spdk2401x version nvmf_tgt coredump #3336

Open
heyuncong opened this issue Apr 9, 2024 · 2 comments

@heyuncong

Sighting report

```
#0 0x00007f97cd0b9e80 in main_arena () from /lib64/libc.so.6
#1 0x00007f97d32632da in spdk_nvmf_qpair_disconnect (qpair=0x2037f80, cb_fn=0x0, ctx=0x0) at nvmf.c:1445
#2 0x00007f97d32634c4 in _nvmf_qpair_disconnect_msg (ctx=0x2031f10) at nvmf.c:1373
#3 0x00007f97d071ed27 in msg_queue_run_batch (max_msgs=, thread=0x203aaf0) at thread.c:1075
#4 thread_poll (thread=thread@entry=0x203aaf0, max_msgs=max_msgs@entry=0, now=now@entry=21323247041670336) at thread.c:1356
#5 0x00007f97d071f65f in spdk_thread_poll (thread=thread@entry=0x203aaf0, max_msgs=max_msgs@entry=0, now=21323247041670336) at thread.c:1449
#6 0x00007f97d27dcef9 in _reactor_run (reactor=0x2016a00) at reactor.c:914
#7 reactor_run (arg=0x2016a00) at reactor.c:952
#8 0x00007f97cf1bb236 in eal_thread_loop () from /lib64/librte_eal.so.24
#9 0x00007f97cf1cf249 in eal_worker_thread_loop () from /lib64/librte_eal.so.24
#10 0x00007f97cd0c69b4 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f97ccded35f in clone () from /lib64/libc.so.6
(gdb) p *qpair_ctx->qpair->transport->tgt
$6 = {
name = "\360\247\233\002\000\000\000\000\360\247\233\002\000\000\000\000\300\235\v͗\177\000\000\300\235\v͗\177\000\000p\177\003\002\000\000\000\000p\177\003\002\000\000\000\000\340\235\v͗\177\000\000\340\235\v͗\177\000\000\360\235\v͗\177\000\000\360\235\v͗\177\000\000\353t\002\000\000\000\000\353t\002\000\000\000\000\000B\003\002\000\000\000\000p\244\003\002\000\000\000\000\020O\003\002\000\000\000\000\020O\003\002\000\000\000\000\060\236\v͗\177\000\000\060\236\v͗\177\000\000@\236\v͗\177\000\000@\236\v͗\177\000\000P\236\v͗\177\000\000P\236\v͗\177\000\000\260\355l\002\000\000\000\000\260\355l\002\000\000\000\000p\236\v͗\177\000\000p\236\v͗\177\000\000\200\236\v͗\177\000\000"..., mutex = {__data = {__lock = -854876496, __count = 32663, __owner = -854876496,
__nusers = 32663, __kind = 40686896, __spins = 0, __elision = 0, __list = {__prev = 0x27420b0, __next = 0x2034860}},
__size = "\260\236\v͗\177\000\000\260\236\v͗\177\000\000\060\325l\002\000\000\000\000\260 t\002\000\000\000\000`H\003\002\000\000\000", __align = 140289956880048},
discovery_genctr = 33769568, max_subsystems = 40686112, discovery_filter = SPDK_NVMF_TGT_DISCOVERY_MATCH_ANY, state = 40686112, subsystem_ids = 0x7f97cd0b9ef0 <main_arena+848>, subsystems = {
rbh_root = 0x7f97cd0b9ef0 <main_arena+848>}, transports = {tqh_first = 0x7f97cd0b9f00 <main_arena+864>, tqh_last = 0x7f97cd0b9f00 <main_arena+864>}, poll_groups = {
tqh_first = 0x7f97cd0b9f10 <main_arena+880>, tqh_last = 0x7f97cd0b9f10 <main_arena+880>}, referrals = {tqh_first = 0x7f97cd0b9f20 <main_arena+896>, tqh_last = 0x7f97cd0b9f20 <main_arena+896>},
next_poll_group = 0x26c97e0, destroy_cb_fn = 0x2036770, destroy_cb_arg = 0x7f97cd0b9f40 <main_arena+928>, crdt = {40768, 52491, 32663}, num_poll_groups = 0, link = {tqe_next = 0x26cea30,
tqe_prev = 0x26cea30}}
```

Segmentation fault.
The memory behind qpair_ctx->qpair->transport->tgt looks bogus: its fields are filled with pointers into glibc's main_arena (malloc's internal bookkeeping) rather than valid spdk_nvmf_tgt contents, which suggests the pointer chain leads to freed or otherwise invalid memory by the time spdk_nvmf_qpair_disconnect() dereferences it.

Expected Behavior

No segmentation fault.

Current Behavior

Possible Solution

Steps to Reproduce

1. Just bring the network interface down with ifconfig (a minimal sketch follows below).
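A minimal sketch of that step, assuming the NVMe-oF traffic runs over an interface named eth0 (the real interface name is setup-specific):

```sh
# Assumed NIC name; use the interface that carries the NVMe-oF traffic.
IFACE=eth0

# Drop the link while I/O is in flight so every qpair disconnects at once.
ifconfig "$IFACE" down

# Bring it back up afterwards if needed.
# ifconfig "$IFACE" up
```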

Context (Environment including OS version, SPDK version, etc.)

Same as #3284.

eugene-kobyak self-assigned this Apr 10, 2024

@eugene-kobyak (Contributor)

Hi @heyuncong!
Can you provide more details about this issue (steps to reproduce, configs, etc.)?
Thank you in advance!

@heyuncong (Author)

nvmf_tgt.json
bdeperf.json
There are three nvmf_tgt instances, so there are three nvmf_tgt.json files; they differ only in IP address, so I am attaching just one. All three bdevperf instances use the same bdeperf.json config file.
1. Run bdevperf with I/O depth 256 and I/O size 1 MB.
2. kill -2 nvmf_tgt, then start nvmf_tgt again.
3. kill -2 bdevperf, then start bdevperf again.
4. nvmf_tgt coredumps (a rough sketch of the sequence is shown below).
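A rough, hypothetical sketch of the sequence above. It assumes default SPDK build paths (build/bin/nvmf_tgt, build/examples/bdevperf) and bdevperf's standard -q (queue depth) and -o (I/O size in bytes) options; the workload type, run time, and exact paths are placeholders rather than values taken from the report:

```sh
# Start the target with its JSON config (one of the three nvmf_tgt.json files).
./build/bin/nvmf_tgt -c nvmf_tgt.json &

# Step 1: drive I/O with bdevperf, queue depth 256, 1 MiB I/Os (workload/time are placeholders).
./build/examples/bdevperf -c bdeperf.json -q 256 -o 1048576 -w randwrite -t 300 &

# Step 2: SIGINT the target, then start it again while bdevperf keeps reconnecting.
kill -2 "$(pidof nvmf_tgt)"
./build/bin/nvmf_tgt -c nvmf_tgt.json &

# Step 3: SIGINT bdevperf, then start it again.
kill -2 "$(pidof bdevperf)"
./build/examples/bdevperf -c bdeperf.json -q 256 -o 1048576 -w randwrite -t 300 &

# Step 4: nvmf_tgt hits the segfault shown in the backtrace above.
```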
