
spdk_nvme_ns_cmd_write returns -ENOMEM #3532

@m-kru

Description


I use both DPDK and SPDK in my application. I reserve 16 GB of hugepages at boot by adding GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=1G hugepagesz=1G hugepages=16" to /etc/default/grub.
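After rebooting, the reservation can be verified from /proc/meminfo (a quick check; on this configuration HugePages_Total should report 16 pages of 1048576 kB each):

```shell
# Confirm the boot-time hugepage reservation actually took effect
grep -i huge /proc/meminfo
```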

Calling spdk_nvme_ns_cmd_write returns -ENOMEM. When I execute ./scripts/setup.sh I get the following message:

0000:82:00.0 (144d a80a): Already using the vfio-pci driver
0000:06:00.0 (144d a80a): Already using the vfio-pci driver
INFO: Requested 2 hugepages but 4 already allocated on node0
"mkru" user memlock limit: 8036 MB

This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as user "mkru".
To change this, please adjust limits.conf memlock limit for user "mkru".
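Raising the memlock limit persistently is typically done in /etc/security/limits.conf; a sketch of the entries (the "mkru" user name is taken from the setup.sh output above; a re-login is required for the new limit to apply):

```
# /etc/security/limits.conf
mkru    soft    memlock    unlimited
mkru    hard    memlock    unlimited
```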

It looks like my memlock limit was set to 8 GB by default. I have changed it to unlimited. This time when I run ./scripts/setup.sh I get:

0000:82:00.0 (144d a80a): Already using the vfio-pci driver
0000:06:00.0 (144d a80a): Already using the vfio-pci driver
INFO: Requested 2 hugepages but 4 already allocated on node0

No errors this time.

However, spdk_nvme_ns_cmd_write still returns -ENOMEM, and I do not understand why. What is more, the buffer I pass to spdk_nvme_ns_cmd_write is already allocated in an rte mempool by the DPDK code, so it is even less clear to me why spdk_nvme_ns_cmd_write would need to allocate any memory at all.

Linux host 6.8.0-45-generic #45-Ubuntu SMP PREEMPT_DYNAMIC x86_64 x86_64 x86_64 GNU/Linux
