
Allow TX/RX descriptor ring size to be configurable #280

Closed

vincentmli
Contributor

The VMXNET3 PMD's default minimal TX descriptor ring size is 512; this change allows the user to configure the TX ring size so that mTCP can run in an ESXi VM (see the sketch after the run log below). For example,

in epwget.conf:

num_tx = 512
num_rx = 128

Then run epwget in the ESXi VM with the VMXNET3 PMD:

(Hardware checksum also needs to be disabled when compiling mTCP for the ESXi VM:)
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --disable-hwcsum

/usr/src/mtcp#  ./apps/example/epwget 10.1.72.68 160000000 -f /etc/mtcp/config/epwget.conf -N 1 -c 160
Configuration updated by mtcp_setconf().
Application configuration:
URL: /
Concurrency: 160
---------------------------------------------------------------------------------
Loading mtcp configuration from : /etc/mtcp/config/epwget.conf
Loading interface setting
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
Total number of attached devices: 1
Interface name: dpdk0
EAL: Auto-detected process type: PRIMARY
Configurations:
Number of CPU cores available: 1
Number of CPU cores to use: 1
Number of TX ring descriptor: 512
Number of RX ring descriptor: 128
Number of source ip to use: 8
Maximum number of concurrency per core: 1000000
Maximum number of preallocated buffers per core: 1000000
Receive buffer size: 1024
Send buffer size: 1024
TCP timeout seconds: 30
TCP timewait seconds: 0
NICs to print statistics: dpdk0
---------------------------------------------------------------------------------
Interfaces:
name: dpdk0, ifindex: 0, hwaddr: 00:50:56:86:10:76, ipaddr: 10.1.72.28, netmask: 255.255.0.0
Number of NIC queues: 1
---------------------------------------------------------------------------------
Loading routing configurations from : config/route.conf
Routes:
Destination: 10.1.0.0/16, Mask: 255.255.0.0, Masked: 10.1.0.0, Route: ifdx-0
---------------------------------------------------------------------------------
Loading ARP table from : config/arp.conf
ARP Table:
IP addr: 10.1.72.68, dst_hwaddr: 00:50:56:86:22:BA
---------------------------------------------------------------------------------
Initializing port 0... Ethdev port_id=0 tx_queue_id=0, new added offloads 0x8011 must be within pre-queue offload capabilities 0x0 in rte_eth_tx_queue_setup()

done:
rte_eth_dev_flow_ctrl_get: Function not supported
[dpdk_load_module: 765] Failed to get flow control info!
rte_eth_dev_flow_ctrl_set: Function not supported
[dpdk_load_module: 772] Failed to set flow control info!: errno: -95

Checking link statusdone
Port 0 Link Up - speed 10000 Mbps - full-duplex
Configuration updated by mtcp_setconf().
CPU 0: initialization finished.
[mtcp_create_context:1359] CPU 0 is now the master thread.
[CPU 0] dpdk0 flows:      0, RX:      26(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
[ ALL ] dpdk0 flows:      0, RX:      26(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
[CPU 0] dpdk0 flows:      0, RX:      12(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
[ ALL ] dpdk0 flows:      0, RX:      12(pps) (err:     0),  0.00(Gbps), TX:       0(pps),  0.00(Gbps)
Thread 0 handles 160000000 flows. connecting to 10.1.72.68:80
rte_eth_stats_reset: Function not supported
Response size set to 86
[ ALL ] connect:    8334, read:    0 MB, write:    0 MB, completes:    8174 (resp_time avg: 2890, max:   6412 us)
[CPU 0] dpdk0 flows:    247, RX:   95984(pps) (err:     0),  0.09(Gbps), TX:  120431(pps),  0.11(Gbps)
[ ALL ] dpdk0 flows:    247, RX:   95984(pps) (err:     0),  0.09(Gbps), TX:  120431(pps),  0.11(Gbps)
rte_eth_stats_reset: Function not supported
[ ALL ] connect:   33905, read:    2 MB, write:    3 MB, completes:   33905 (resp_time avg: 2512, max:  12273 us)
[CPU 0] dpdk0 flows:    233, RX:  137128(pps) (err:     0),  0.12(Gbps), TX:  172068(pps),  0.15(Gbps)
[ ALL ] dpdk0 flows:    233, RX:  137128(pps) (err:     0),  0.12(Gbps), TX:  172068(pps),  0.15(Gbps)
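
Not the actual diff, but a minimal sketch of the idea behind the change: the num_tx / num_rx values parsed from the .conf file are passed through to the DPDK queue-setup calls instead of a hard-coded default (the struct and field names below are illustrative, not mTCP's real ones):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative config holder; mTCP's real config struct differs. */
struct ring_cfg {
    uint16_t num_tx_desc;   /* from "num_tx = 512" */
    uint16_t num_rx_desc;   /* from "num_rx = 128" */
};

static int
setup_queues(uint16_t port_id, const struct ring_cfg *cfg,
             struct rte_mempool *mbuf_pool)
{
    int ret;

    /* TX ring sized from the config; per the PR description the
     * VMXNET3 PMD wants at least 512 TX descriptors. */
    ret = rte_eth_tx_queue_setup(port_id, 0, cfg->num_tx_desc,
                                 rte_eth_dev_socket_id(port_id), NULL);
    if (ret < 0)
        return ret;

    /* RX ring sized from the config. */
    return rte_eth_rx_queue_setup(port_id, 0, cfg->num_rx_desc,
                                  rte_eth_dev_socket_id(port_id), NULL,
                                  mbuf_pool);
}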
Member

@ajamshed ajamshed left a comment

Thanks for your commit.

(1) Can you please re-submit the PR on the devel branch?
(2) This is a strictly DPDK-specific update. It would be a good idea to add some commentary lines and update the *.conf files as well, i.e. apps/example/epserver.conf, apps/example/epwget.conf, and config/sample_mtcp.conf.
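
For instance (suggested wording only, not the committed text), the added lines in apps/example/epwget.conf could look like:

# Number of TX/RX ring descriptors per queue (DPDK only).
# e.g. the VMXNET3 PMD needs a TX ring of at least 512 descriptors.
num_tx = 512
num_rx = 128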

@vincentmli
Contributor Author

OK, I will submit through the devel branch and add the comments in the *.conf files.

@vincentmli vincentmli closed this Jan 6, 2020
@vincentmli vincentmli deleted the vli-esxi-pr branch January 6, 2020 17:58
@vincentmli
Contributor Author

Hi Asim, could you please give me tips on the git commands for creating a PR against the upstream mTCP "devel" branch? The GitHub workflow I learned from other projects is to fork the repo from the upstream GitHub repo, clone the fork to a local workstation, check out a development branch, make and commit changes, push to GitHub, and then create the PR from that development branch.
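
For reference, that workflow would translate to roughly the following commands (the repo URL, fork user, and branch name are placeholders, assuming the upstream repo is mtcp-stack/mtcp):

git clone https://github.com/<your-user>/mtcp.git
cd mtcp
git remote add upstream https://github.com/mtcp-stack/mtcp.git
git fetch upstream
git checkout -b <topic-branch> upstream/devel   # branch off the upstream devel branch
# ...edit files, then commit...
git add -A && git commit -m "Allow TX/RX descriptor ring size to be configurable"
git push origin <topic-branch>
# finally, open the PR on GitHub with the base branch set to devel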

@ajamshed
Member

Sorry, I got around to this very, very late. But I think you figured it out yourself! :)
