Failed to ping after running opendp #9
Comments
Which opendp version are you running? Please download the latest version.
Hi @bluenet13, here is the original redis result: and here's the dpdk-redis result: One more thing: when I configure the request size to be 1024 bytes, the LRANGE test fails to run and hangs there. I also tried ping with a message size of 1473 bytes (1500 bytes sent), and it also fails. Is this due to the MTU limitation (1500 bytes)? If so, where can I tune it? Thanks
For the ping with 1473 bytes, please tcpdump the packets. opendp supports IP fragmentation; the max packet size is about 5k.
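A minimal way to run that check (the capture interface name eth1 and the target address 2.2.2.2, taken from the interface setup shown later in this thread, are assumptions) would be, from the client host:

ping -c 3 -s 1473 2.2.2.2
tcpdump -n -i eth1 'icmp or (ip[6:2] & 0x3fff != 0)'

Run both on the client side, since the opendp port itself is owned by DPDK and the kernel's tcpdump can't see it. The second filter also matches IP fragments, so you can confirm whether the 1473-byte payload is actually being fragmented and whether the fragments come back.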
Hi @bluenet13, here's the updated performance for dpdk-redis: it's better than before, but still slower than the original redis. I use isolcpus=7 to isolate core 7 from the kernel and dedicate it to opendp. Thanks
Hi @bluenet13, I think the performance issue may be caused by the smaller MTU (1500 bytes) in the dpdk case. Is there a way to change it to 9000 bytes? Thanks
Hi,
Yes, I will change the code to support 9000 bytes.
Hi,
Hi,
If you want to enable jumbo frames, you should update odp_main.c to set the eth0 interface MTU to 9000 with the API below.
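The exact call isn't quoted in the thread; as a rough sketch of the DPDK side of it (rte_eth_dev_set_mtu() and the rxmode jumbo-frame fields are standard DPDK APIs of that era, but where exactly odp_main.c applies them, and whether netdp's eth0 interface needs a matching setting, are assumptions):

#include <stdio.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* Sketch only: jumbo-frame settings for the DPDK port behind eth0. */
static struct rte_eth_conf port_conf = {
    .rxmode = {
        .jumbo_frame    = 1,      /* accept frames larger than the 1518-byte default */
        .max_rx_pkt_len = 9000,   /* desired jumbo frame size */
    },
};

static void enable_jumbo(uint8_t port_id)
{
    /* port_conf is passed to rte_eth_dev_configure() during port init;
     * the MTU itself is raised afterwards, before rte_eth_dev_start(). */
    if (rte_eth_dev_set_mtu(port_id, 9000) != 0)
        printf("failed to set MTU 9000 on port %u\n", port_id);
}

Note that the mbuf pool's data room also has to be large enough for 9000-byte frames unless scatter RX is enabled.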
Hi, Thanks
Hi,
Hi, you can use "-d" to specify the request size when you run redis-benchmark. I will try your link after work today.
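For example (the server address 2.2.2.2 and the default port 6379 are assumptions based on the setup in this thread), a run with 1024-byte values would look like:

redis-benchmark -h 2.2.2.2 -p 6379 -d 1024 -n 100000

-d sets the data size of the values used by the tests, which is also what makes the LRANGE replies grow; -n is just the total request count.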
I know the issue with larger request sizes: because netdp doesn't support TSO, the transfer speed is slower. netdp will support it soon.
How do I sign in to the Slack? I guess I need your invite?
Yes, please send your email. I will invite you.
I have sent the invite.
Hi,
I successfully started opendp with the command: sudo ./build/opendp -c 0x1 -n 1 -- -P -p 0x1 --config="(0,0,0)"
The run info:
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x5400000 bytes
EAL: Virtual area found at 0x7f9130200000 (size = 0x5400000)
EAL: Ask a virtual area of 0x12400000 bytes
EAL: Virtual area found at 0x7f911dc00000 (size = 0x12400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911d600000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911d000000 (size = 0x400000)
EAL: Ask a virtual area of 0x1400000 bytes
EAL: Virtual area found at 0x7f911ba00000 (size = 0x1400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911b400000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f911b000000 (size = 0x200000)
EAL: Ask a virtual area of 0x65c00000 bytes
EAL: Virtual area found at 0x7f90b5200000 (size = 0x65c00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f90b4e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f90b4800000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f90b4200000 (size = 0x400000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~3690749 KHz
EAL: Master lcore 0 is ready (tid=37be1900;cpuset=[0])
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f9135600000
EAL: PCI memory mapped at 0x7f9135680000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: Not managed by a supported kernel driver, skipped
Promiscuous mode selected
param nb 1 ports 1
port id 0
Start to Init port
port 0:
port name rte_ixgbe_pmd:
max_rx_queues 128: max_tx_queues:128
rx_offload_capa 31: tx_offload_capa:63
Creating queues: rx queue number=1 tx queue number=1...
MAC Address:00:1B:21:BB:7C:24
Deault-- tx pthresh:32, tx hthresh:0, tx wthresh:0, txq_flags:0xf01
lcore id:0, tx queue id:0, socket id:0
Conf-- tx pthresh:36, tx hthresh:0, tx wthresh:0, txq_flags:0xfffff1ff
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f90b4328dc0 hw_ring=0x7f90b432ae00 dma_addr=0xbc892ae00
PMD: ixgbe_set_tx_function(): Using full-featured tx code path
PMD: ixgbe_set_tx_function(): - txq_flags = fffff1ff [IXGBE_SIMPLE_FLAGS=f01]
PMD: ixgbe_set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
Allocated mbuf pool on socket 0, mbuf number: 16384
Initializing rx queues on lcore 0 ...
Default-- rx pthresh:8, rx hthresh:8, rx wthresh:0
port id:0, rx queue id: 0, socket id:0
Conf-- rx pthresh:8, rx hthresh:8, rx wthresh:4
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f90b42d82c0 sw_sc_ring=0x7f90b42d7d80 hw_ring=0x7f90b42d8800 dma_addr=0xbc88d8800
core mask: 1, sockets number:1, lcore number:1
start to init netdp
USER8: LCORE[0] lcore mask 0x1
USER8: LCORE[0] lcore id 0 is enable
USER8: LCORE[0] lcore number 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER8: LCORE[0] UDP layer init successfully, Use memory:4194304 bytes
USER8: LCORE[0] TCP hash table init successfully, tcp pcb size 448 total size 27525120
USER8: LCORE[0] so shm memory 17039360 bytes, so number 133120, sock shm size 128 bytes
USER8: LCORE[0] Sock init successfully, allocated of 42598400 bytes
add eth0 device
add IP 2020202 on device eth0
Show interface
eth0 HWaddr 00:1b:21:bb:7c:24
inet addr:2.2.2.2
inet addr:255.255.255.0
add static route
Destination Gateway Netmask Flags Iface
2.2.2.0 * 255.255.255.0 U C 0
2.2.2.5 * 255.255.255.255 U H L 0
3.3.3.0 2.2.2.5 255.255.255.0 U G 0
USER8: LCORE[-1] NETDP mgmt thread startup
PMD: ixgbe_set_rx_function(): Port[0] doesn't meet Vector Rx preconditions or RTE_IXGBE_INC_VECTOR is not enabled
PMD: ixgbe_set_rx_function(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0.
Checking link status .done
Port 0 Link Up - speed 10000 Mbps - full-duplex
USER8: main loop on lcore 0
USER8: -- lcoreid=0 portid=0 rxqueueid=0
nb ports 1 hz: 3690749763
I assume that after this, I should be able to ping 2.2.2.2 on the NIC. However, it fails.
Can you give any suggestions?
Thanks