failed to ping after run opendp #9

Closed
ghost opened this issue Mar 31, 2016 · 16 comments

Comments

@ghost

ghost commented Mar 31, 2016

Hi,

I successfully started opendp with the command: sudo ./build/opendp -c 0x1 -n 1 -- -P -p 0x1 --config="(0,0,0)"
The run output:
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x5400000 bytes
EAL: Virtual area found at 0x7f9130200000 (size = 0x5400000)
EAL: Ask a virtual area of 0x12400000 bytes
EAL: Virtual area found at 0x7f911dc00000 (size = 0x12400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911d600000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911d000000 (size = 0x400000)
EAL: Ask a virtual area of 0x1400000 bytes
EAL: Virtual area found at 0x7f911ba00000 (size = 0x1400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911b400000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f911b000000 (size = 0x200000)
EAL: Ask a virtual area of 0x65c00000 bytes
EAL: Virtual area found at 0x7f90b5200000 (size = 0x65c00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f90b4e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f90b4800000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f90b4200000 (size = 0x400000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~3690749 KHz
EAL: Master lcore 0 is ready (tid=37be1900;cpuset=[0])
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f9135600000
EAL: PCI memory mapped at 0x7f9135680000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: Not managed by a supported kernel driver, skipped
Promiscuous mode selected
param nb 1 ports 1
port id 0

Start to Init port
port 0:
port name rte_ixgbe_pmd:
max_rx_queues 128: max_tx_queues:128
rx_offload_capa 31: tx_offload_capa:63
Creating queues: rx queue number=1 tx queue number=1...
MAC Address:00:1B:21:BB:7C:24
Deault-- tx pthresh:32, tx hthresh:0, tx wthresh:0, txq_flags:0xf01
lcore id:0, tx queue id:0, socket id:0
Conf-- tx pthresh:36, tx hthresh:0, tx wthresh:0, txq_flags:0xfffff1ff
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f90b4328dc0 hw_ring=0x7f90b432ae00 dma_addr=0xbc892ae00
PMD: ixgbe_set_tx_function(): Using full-featured tx code path
PMD: ixgbe_set_tx_function(): - txq_flags = fffff1ff [IXGBE_SIMPLE_FLAGS=f01]
PMD: ixgbe_set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]

Allocated mbuf pool on socket 0, mbuf number: 16384

Initializing rx queues on lcore 0 ...
Default-- rx pthresh:8, rx hthresh:8, rx wthresh:0
port id:0, rx queue id: 0, socket id:0
Conf-- rx pthresh:8, rx hthresh:8, rx wthresh:4
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f90b42d82c0 sw_sc_ring=0x7f90b42d7d80 hw_ring=0x7f90b42d8800 dma_addr=0xbc88d8800
core mask: 1, sockets number:1, lcore number:1
start to init netdp
USER8: LCORE[0] lcore mask 0x1
USER8: LCORE[0] lcore id 0 is enable
USER8: LCORE[0] lcore number 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER8: LCORE[0] UDP layer init successfully, Use memory:4194304 bytes
USER8: LCORE[0] TCP hash table init successfully, tcp pcb size 448 total size 27525120
USER8: LCORE[0] so shm memory 17039360 bytes, so number 133120, sock shm size 128 bytes
USER8: LCORE[0] Sock init successfully, allocated of 42598400 bytes
add eth0 device
add IP 2020202 on device eth0
Show interface

eth0 HWaddr 00:1b:21:bb:7c:24
inet addr:2.2.2.2
inet addr:255.255.255.0
add static route

Destination Gateway Netmask Flags Iface
2.2.2.0 * 255.255.255.0 U C 0
2.2.2.5 * 255.255.255.255 U H L 0
3.3.3.0 2.2.2.5 255.255.255.0 U G 0

USER8: LCORE[-1] NETDP mgmt thread startup
PMD: ixgbe_set_rx_function(): Port[0] doesn't meet Vector Rx preconditions or RTE_IXGBE_INC_VECTOR is not enabled
PMD: ixgbe_set_rx_function(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0.

Checking link status .done
Port 0 Link Up - speed 10000 Mbps - full-duplex
USER8: main loop on lcore 0
USER8: -- lcoreid=0 portid=0 rxqueueid=0
nb ports 1 hz: 3690749763

I assume that after this I should be able to ping 2.2.2.2 on the NIC. However, it fails.

Can you give any suggestions?

Thanks

@bluenet13
Member

Which opendp version are you running? Please download the latest version.
Please tcpdump the ICMP packets on the PC that runs the ping command, and also share your network topology and IP configuration. Send this information to zimeiw@163.com.
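For example, a capture like the following could be run on the client (the interface name eth0 is an assumption; use whichever interface carries the ping):
sudo tcpdump -i eth0 -n icmp -w ping-icmp.pcap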

@ghost
Author

ghost commented Mar 31, 2016

Hi @bluenet13,
I have got ping working now, but it seems the opendp performance with redis is not that good.
The redis server machine:
Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz, 48GB DDR3 RAM, NIC Intel Corporation 82599ES 10-Gigabit.
The benchmark client machine:
Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz, 6GB DDR3 RAM, NIC Intel Corporation 82599ES 10-Gigabit.
The two machines' 10Gb NICs are connected directly with a cable. All the setup is the same as described on the dpdk-redis page.

Here is the original redis result:
./src/redis-benchmark -h 10.0.0.1 -p 6379 -n 100000 -c 50 -q
PING_INLINE: 125470.52 requests per second
PING_BULK: 139664.80 requests per second
SET: 145560.41 requests per second
GET: 150150.14 requests per second
INCR: 139664.80 requests per second
LPUSH: 136054.42 requests per second
RPUSH: 139275.77 requests per second
LPOP: 150150.14 requests per second
RPOP: 153374.23 requests per second
SADD: 141242.94 requests per second
SPOP: 157977.88 requests per second
LPUSH (needed to benchmark LRANGE): 132626.00 requests per second
LRANGE_100 (first 100 elements): 50709.94 requests per second
LRANGE_300 (first 300 elements): 30637.26 requests per second
LRANGE_500 (first 450 elements): 19018.64 requests per second
LRANGE_600 (first 600 elements): 16614.05 requests per second
MSET (10 keys): 98039.22 requests per second

Here is the dpdk-redis result:
./src/redis-benchmark -h 2.2.2.2 -p 6379 -n 100000 -c 50 -q
PING_INLINE: 78926.60 requests per second
PING_BULK: 84175.09 requests per second
SET: 74294.21 requests per second
GET: 77579.52 requests per second
INCR: 77101.00 requests per second
LPUSH: 74571.22 requests per second
LPOP: 76335.88 requests per second
SADD: 74850.30 requests per second
SPOP: 77459.34 requests per second
LPUSH (needed to benchmark LRANGE): 74571.22 requests per second
LRANGE_100 (first 100 elements): 47236.65 requests per second
LRANGE_300 (first 300 elements): 27292.58 requests per second
LRANGE_500 (first 450 elements): 20842.02 requests per second
LRANGE_600 (first 600 elements): 16220.60 requests per second
MSET (10 keys): 57077.62 requests per second

One more thing: when I configure the request size to be 1024 bytes, the LRANGE test fails to run and hangs there. I also tried ping with a message size of 1473 bytes (1500 bytes sent), and it also fails. Is this due to the MTU limitation (1500 bytes)? If so, where can I tune it?

Thanks

@bluenet13
Member

For the ping with 1473 bytes, please tcpdump the packets. opendp supports IP fragmentation; the maximum packet size is about 5k.
For the performance issue, please isolate the dpdk core from the kernel with "isolcpus", and do the same for the interrupts.
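A minimal sketch of that isolation (the core number, file paths, and IRQ numbers below are assumptions; adjust them for your system):
Add isolcpus=1 to GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub and reboot, so the kernel scheduler leaves core 1 to opendp.
For each NIC interrupt listed in /proc/interrupts, pin it away from the isolated core, e.g. echo 1 > /proc/irq/<irq>/smp_affinity to keep it on core 0.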

@ghost
Author

ghost commented Apr 1, 2016

Hi @bluenet13,

Here is the updated performance for dpdk-redis:
./src/redis-benchmark -h 2.2.2.2 -p 6379 -n 100000 -c 50 -q
PING_INLINE: 104493.20 requests per second
PING_BULK: 106837.61 requests per second
SET: 107066.38 requests per second
GET: 106951.88 requests per second
INCR: 102459.02 requests per second
LPUSH: 100502.52 requests per second
LPOP: 108577.63 requests per second
SADD: 108108.11 requests per second
SPOP: 106723.59 requests per second
LPUSH (needed to benchmark LRANGE): 101936.80 requests per second
LRANGE_100 (first 100 elements): 72939.46 requests per second
LRANGE_300 (first 300 elements): 28082.00 requests per second
LRANGE_500 (first 450 elements): 19677.29 requests per second
LRANGE_600 (first 600 elements): 15666.61 requests per second
MSET (10 keys): 86956.52 requests per second

It is better than before, but still slower than the original redis. I use isolcpus=7 to detach core 7 from the kernel and dedicate opendp to core 7.
Regarding interrupts, there are only the local TIMER and RCU on core 7.
Is this enough, or what else do I need?

Thanks

@ghost
Author

ghost commented Apr 1, 2016

Hi @bluenet13,

I think the performance issue may be caused by the smaller MTU (1500 bytes) in the dpdk case. Is there a way to change it to 9000 bytes?

Thanks

@bluenet13
Member

Hi,
In my environment, I can ping successfully with 5000 bytes. Why did ping fail with 1473 bytes in your environment? Please share the tcpdump file with me. Have you enabled jumbo frames on your client PC?

ping 2.2.2.2 -s 5000
PING 2.2.2.2 (2.2.2.2) 5000(5028) bytes of data.
5008 bytes from 2.2.2.2: icmp_seq=1 ttl=64 time=0.183 ms
5008 bytes from 2.2.2.2: icmp_seq=2 ttl=64 time=0.198 ms
5008 bytes from 2.2.2.2: icmp_seq=3 ttl=64 time=0.259 ms
5008 bytes from 2.2.2.2: icmp_seq=4 ttl=64 time=0.254 ms

Yes, I will change the code to support 9000 bytes.

@ghost
Author

ghost commented Apr 2, 2016

Hi,
My client side is fine; the MTU is set to 9000. The tcpdump shows no reply from the server.
I think the problem is the server-side dpdk configuration. I am not sure whether jumbo frames are enabled on the server side. I tried to change the dpdk MTU, but failed. Did you change any dpdk driver configuration?

@bluenet13
Member

Hi,
Currently opendp does not support jumbo frames and cannot handle such large packets, so please change your client MTU back to 1500.
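For example, on the client (the interface name eth0 is an assumption): sudo ip link set eth0 mtu 1500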
When opendp supports jumbo frames, I will notify you.
Thanks.

@bluenet13
Member

If you want to enable jumbo frames, you should update odp_main.c to set the eth0 interface MTU to 9000 with the API below.
int netdp_intf_set_mtu(caddr_t name, int *mtu);
Then start opendp with the command below.
./build/opendp -c 0x2 -n 1 --base-virtaddr=0x2aaa2aa0000 -- -p 0x1 --config="(0,0,1)" --enable-jumbo --max-pkt-len 9000
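A minimal sketch of that MTU call in odp_main.c (exactly where it goes and the error handling are assumptions; the API prototype is the one quoted above):

/* Sketch only: run after the eth0 interface has been added in odp_main.c. */
int mtu = 9000;                                    /* desired MTU in bytes */
if (netdp_intf_set_mtu((caddr_t)"eth0", &mtu) != 0)
    printf("failed to set MTU %d on eth0\n", mtu); /* assumes stdio.h is already included */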

@ghost
Author

ghost commented Apr 4, 2016

Hi,
I tried the changes you suggested. dpdk-redis works fine with the default request size, but if I specify -d with a larger request size, e.g. 128, 512 or 1024, it may fail during the LRANGE tests. I am not sure whether you have tried different request sizes.
Basically, I am trying to integrate DPDK with MongoDB; are you interested in this?

Thanks

@bluenet13
Member

Hi,
What is the command for a larger request size? I will try it.
Yes, I am very interested in integrating DPDK with MongoDB. Maybe we can discuss it via an IM tool (https://dpdk-odp.slack.com).

@ghost
Author

ghost commented Apr 5, 2016

Hi,

You can use "-d" to specify the request size when you run redis-benchmark; an example follows.
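For example (the host, request count, and client count are just the ones from the earlier runs):
./src/redis-benchmark -h 2.2.2.2 -p 6379 -n 100000 -c 50 -d 1024 -q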

I will try your link after work today.

@bluenet13
Member

I know the issue with larger request sizes: because netdp does not support TSO, the transfer speed is slower. netdp will support it soon.

@ghost
Author

ghost commented Apr 6, 2016

How do I sign in to the Slack? I guess I need an invite from you?

@bluenet13
Member

Yes, please send me your email address, and I will invite you.

@bluenet13
Member

I have sent the invite.

This issue was closed.