NVMeofTCP SPDK host abort test #2048
Hi @gp2017, Can you clarify "my NVMeofTCP target"? Is this something different than the Linux kernel NVMe/TCP or SPDK NVMe/TCP target? Thanks, Jim
Hello Jim,
I am using the SPDK abort test with my hardware NVMe/TCP target, which is different from the Linux kernel or SPDK NVMe/TCP target.
Thanks,
GP
I am suspicious that this fails at QD=32 and that the default admin queue size is also 32. We certainly should make sure the admin queue is big enough to handle all of the aborts that the test app is sending.
Could you try this patch? https://review.spdk.io/gerrit/c/spdk/spdk/+/8812 The patch has only been compile tested; I've run no actual tests with it. If the patch works for the QD=32 case, please try higher queue depths as well (64 and 128).
If the patch does not work, please try reproducing this with the Linux kernel or SPDK target and confirm the issue happens there as well. That will make it easier for someone else to reproduce and root cause.
Thanks, Jim
Sure, I will try this patch and let you know.
Thanks,
GP
[Bug scrub] @mikeBashStuff Will take a look at adding a functional test case covering this issue.
I am afraid that I am not able to reproduce the very same issue for the SPDK or kernel target. I see some other issues though. With or without the patch I randomly see something like this for the SPDK target regardless of the qd (though most often it happens when it's >= 64):
With the patch the issue above still can be seen for qd < 128, popping up in a random fashion, but for qd=128 I get this:
For the kernel target (configured locally) it seems like everything works as expected, but the
So it succeeds, but the
I added these tests https://review.spdk.io/gerrit/c/spdk/spdk/+/8962 on top of https://review.spdk.io/gerrit/c/spdk/spdk/+/8812 to see how they behave under CI (considering the above I expect them to fail). Not sure if they properly cover the actual use-case here, so any comments or suggestions on gerrit would be appreciated. :)
Here's one build run: https://ci.spdk.io/results/autotest-per-patch/builds/54449/archive/nvmf-tcp-phy-autotest/build.log Surprisingly successful; however, for the SPDK target with qd=128 the following can be seen (as described above):
Thanks @mikeBashStuff!
This error would go away if we specify
I just realized you're using -c 0xF, while the original submitter passed no -c option (meaning just a single core). Do you have any luck reproducing the submitter's issue if you omit the -c option? I think the -c option is causing the get_cc errors, since my patch is just calculating queue depth based on a single worker. Also @gp2017, could you try the patch provided above? -Jim
@jimharris Initially I did use
Sounds good @mikeBashStuff. Thanks!
@jimharris I ran the very same tests but with io size bumped up to 40960. Still can't reproduce the main issue; however, with or without the patch, qd=64 is now hit by
From time to time (but quite rarely) I also see this for any potential
This doesn't force
Thanks @mikeBashStuff. I'll take a look at this some more on my system. I was hoping this would be a relatively straightforward fix, but you've clearly found some other issues in the abort path with this testing. Let's still wait to see if my patch fixes the original issue for @gp2017. If so, we can merge it, and then open a new issue for some of the other issues you've found.
@mikeBashStuff - can you describe in more detail how you hit these most recent issues? I've been using the abort.sh test script with a few modifications.
@jimharris I've been using this instead https://review.spdk.io/gerrit/c/spdk/spdk/+/8962 - my idea was to use an actual nvme controller for the bdev since I reckoned such a setup would be a bit closer to what was reported. So with this patch you could simply run
That is a really good test @mikeBashStuff. I think what I'd like to do is move this new testing approach, and the other failures/dumps you've found, to a new issue. The stuff you are finding seems unrelated to what @gp2017 reported, but is still something we need to continue working on.
@jimharris Moved what I found to #2063.
This abort test app will send a lot of abort commands on the admin queue. The default admin queue size is relatively small (32), so increase it if necessary to account for the expected number of outstanding abort commands as well as any extra admin commands that may be sent during test execution, such as Keep Alive.

Fixes issue #2048.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5f64b7fc72a028299b860f09e30d430a64c95d2a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8812
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
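The sizing logic described in the commit message can be sketched with shell arithmetic. The default of 32 matches the message; the worker count and the Keep Alive headroom value here are illustrative assumptions, not SPDK constants:

```shell
# Sketch of the admin queue sizing described above. Assumptions: one worker
# per core running the test, and HEADROOM entries of slack for Keep Alive
# and other admin commands; neither value is an SPDK constant.
DEFAULT_ADMIN_QSIZE=32   # default admin queue size noted in the commit message
HEADROOM=8               # illustrative slack for Keep Alive etc.

qd=${1:-32}              # -q value passed to the abort app
workers=${2:-1}          # number of workers submitting aborts

want=$(( qd * workers + HEADROOM ))
if [ "$want" -gt "$DEFAULT_ADMIN_QSIZE" ]; then
    admin_qsize=$want
else
    admin_qsize=$DEFAULT_ADMIN_QSIZE
fi
echo "admin_queue_size=$admin_qsize"
```

With the reported failing case (qd=32, one worker) this yields 40, larger than the 32-entry default, which is consistent with the hang first appearing at -q 32.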
I am using the NVMeofTCP SPDK host and running the IO abort test (build/examples folder) to test my NVMeofTCP target. I am running the abort test with different queue depths: 4, 8, 16, 32.
Test completes fine for queue depths 4, 8, and 16, but it hangs and seems to be stuck in some kind of continuous loop for queue depth 32.
Expected Behavior
Abort test should not hang with "-q 32"; it should complete in the same manner as with "-q 4", "-q 8", and "-q 16".
Current Behavior
Abort test hangs with "-q 32", while it completes normally with "-q 4", "-q 8", and "-q 16".
Possible Solution
From the trace, it looks like the host sent a write command, the target responded with R2T, and the host then never sends the requested data in response to the R2T. The host needs to send the requested data and then send the abort request as part of the abort test.
Steps to Reproduce
./abort -q 32 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
./abort -q 4 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
./abort -q 8 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
./abort -q 16 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
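The four invocations above differ only in -q, so the sweep can be scripted. The path to the abort binary is an assumption (adjust it to your build); the -r connection string is copied from the commands above. DRY_RUN just prints the commands so the sketch can be inspected before pointing it at a live target:

```shell
# Sweep the abort test over the queue depths from the report.
# ABORT path is an assumption; the -r string is copied from the report.
ABORT=${ABORT:-./build/examples/abort}
CONN='trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
DRY_RUN=${DRY_RUN:-1}    # set DRY_RUN=0 to run against a live target

for qd in 4 8 16 32; do
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$ABORT -q $qd -s 4096 -w rw -M 50 -o 40960 -r '$CONN'"
    else
        "$ABORT" -q "$qd" -s 4096 -w rw -M 50 -o 40960 -r "$CONN" \
            || echo "qd=$qd failed (rc=$?)"
    fi
done
```

Running with DRY_RUN=0 against a target exhibiting the bug should show the first three depths completing and -q 32 hanging, per the report.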
Console output with q depth=4 (passed)
./abort -q 4 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
[2021-07-16 09:55:04.024076] Starting SPDK v21.07-pre git sha1 b73d3e6 / DPDK 21.02.0 initialization...
[2021-07-16 09:55:04.024156] [ DPDK EAL parameters: [2021-07-16 09:55:04.024169] abort [2021-07-16 09:55:04.024180] --no-shconf [2021-07-16 09:55:04.024190] -c 0x1 [2021-07-16 09:55:04.024198] -m 4096 [2021-07-16 09:55:04.024207] --no-pci [2021-07-16 09:55:04.024217] --log-level=lib.eal:6 [2021-07-16 09:55:04.024226] --log-level=lib.cryptodev:5 [2021-07-16 09:55:04.024235] --log-level=user1:6 [2021-07-16 09:55:04.024247] --iova-mode=pa [2021-07-16 09:55:04.024257] --base-virtaddr=0x200000000000 [2021-07-16 09:55:04.024268] --match-allocations [2021-07-16 09:55:04.024279] --file-prefix=spdk_pid126833 [2021-07-16 09:55:04.024289] ]
EAL: No available 1048576 kB hugepages reported
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.10.10.167:4420: nqn.2015-09.com.cdw:nvme.1
Associating TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 with lcore 0
Initialization complete. Launching workers.
NS: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 I/O completed: 30102, failed: 15
CTRLR: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) abort submitted 45, failed to submit 30072
success 15, unsuccess 30, failed 0
Console output with q depth= 16 (passed)
./abort -q 16 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.c:nvme.1'
[2021-07-16 09:55:32.400777] Starting SPDK v21.07-pre git sha1 b73d3e6 / DPDK 21.02.0 initialization...
[2021-07-16 09:55:32.400855] [ DPDK EAL parameters: [2021-07-16 09:55:32.400868] abort [2021-07-16 09:55:32.400876] --no-shconf [2021-07-16 09:55:32.400886] -c 0x1 [2021-07-16 09:55:32.400896] -m 4096 [2021-07-16 09:55:32.400905] --no-pci [2021-07-16 09:55:32.400915] --log-level=lib.eal:6 [2021-07-16 09:55:32.400924] --log-level=lib.cryptodev:5 [2021-07-16 09:55:32.400934] --log-level=user1:6 [2021-07-16 09:55:32.400944] --iova-mode=pa [2021-07-16 09:55:32.400953] --base-virtaddr=0x200000000000 [2021-07-16 09:55:32.400963] --match-allocations [2021-07-16 09:55:32.400971] --file-prefix=spdk_pid126846 [2021-07-16 09:55:32.400980] ]
EAL: No available 1048576 kB hugepages reported
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.10.10.167:4420: nqn.2015-09.com.cdw:nvme.1
controller IO queue size 16 less than required
Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
Associating TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 with lcore 0
Initialization complete. Launching workers.
NS: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 I/O completed: 49920, failed: 16
CTRLR: TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) abort submitted 43, failed to submit 49893
success 16, unsuccess 27, failed 0
Console output with q depth = 32 (failed)
./abort -q 32 -s 4096 -w rw -M 50 -o 40960 -r 'trtype:tcp adrfam:IPv4 traddr:10.10.10.167 trsvcid:4420 subnqn:nqn.2015-09.com.cdw:nvme.1'
[2021-07-16 09:55:49.649538] Starting SPDK v21.07-pre git sha1 b73d3e6 / DPDK 21.02.0 initialization...
[2021-07-16 09:55:49.649617] [ DPDK EAL parameters: [2021-07-16 09:55:49.649630] abort [2021-07-16 09:55:49.649641] --no-shconf [2021-07-16 09:55:49.649652] -c 0x1 [2021-07-16 09:55:49.649660] -m 4096 [2021-07-16 09:55:49.649669] --no-pci [2021-07-16 09:55:49.649680] --log-level=lib.eal:6 [2021-07-16 09:55:49.649691] --log-level=lib.cryptodev:5 [2021-07-16 09:55:49.649702] --log-level=user1:6 [2021-07-16 09:55:49.649713] --iova-mode=pa [2021-07-16 09:55:49.649724] --base-virtaddr=0x200000000000 [2021-07-16 09:55:49.649736] --match-allocations [2021-07-16 09:55:49.649746] --file-prefix=spdk_pid126855 [2021-07-16 09:55:49.649758] ]
EAL: No available 1048576 kB hugepages reported
EAL: No legacy callbacks, legacy socket not created
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.10.10.167:4420: nqn.2015-09.com.cdw:nvme.1
controller IO queue size 16 less than required
Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
Associating TCP (addr:10.10.10.167 subnqn:nqn.2015-09.com.cdw:nvme.1) NSID 1 with lcore 0
Initialization complete. Launching workers.
[2021-07-16 09:55:52.997426] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997463] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997474] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22110 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997483] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997490] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997497] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997503] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22120 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997510] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997515] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997523] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997529] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22130 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997534] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997542] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997549] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997557] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22140 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997565] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997571] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997577] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997583] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22150 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997590] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997595] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997600] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997611] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22160 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997619] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997627] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997634] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997642] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22170 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997650] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997658] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997666] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997674] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22180 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997681] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997689] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997697] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997704] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22190 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997711] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997719] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997726] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997733] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22200 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997740] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997747] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997754] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997762] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22210 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997768] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997775] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997782] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997789] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22220 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997796] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997803] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997810] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997817] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22230 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997826] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997834] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997840] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997848] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22240 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997855] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997862] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997870] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997877] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22250 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997885] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997892] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997898] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997905] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22260 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997912] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997919] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997926] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997937] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22270 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997945] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997953] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997959] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997967] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22280 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.997974] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.997981] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.997989] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.997996] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22290 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998003] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.998010] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.998017] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.998024] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: READ sqid:1 cid:0 nsid:1 lba:22300 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998032] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.998040] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.998047] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.998055] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22310 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998062] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.998070] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.998076] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.998083] nvme_qpair.c: 272:nvme_io_qpair_print_command: NOTICE: WRITE sqid:1 cid:0 nsid:1 lba:22320 len:10 PRP1 0x0 PRP2 0x0
[2021-07-16 09:55:52.998090] nvme_qpair.c: 455:spdk_nvme_print_completion: NOTICE: ABORTED - BY REQUEST (00/07) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:1
[2021-07-16 09:55:52.998097] nvme_qpair.c: 594:nvme_qpair_abort_queued_reqs: ERROR: aborting queued i/o
[2021-07-16 09:55:52.998103] nvme_qpair.c: 536:nvme_qpair_manual_complete_request: NOTICE: Command completed manually:
[2021-07-16 09:55:52.998110] nvme_qpair.c: 272:nvme_io_qpair_print_command: *N
Context (Environment including OS version, SPDK version, etc.)
Ubuntu 18.04:
cat /etc/issue
Ubuntu 18.04.5 LTS \n \l
Kernel:
uname -sr
Linux 5.12.17-051217-generic
SPDK Version:
SPDK v21.07-pre