Performance: Throughput with OVS drops from 9.9Gbps to 6Gbps with VXLAN tunneling #4
Comments
This should be fixed by my last commit; can you try it and verify? In my setup, VXLAN now actually performs even better than GRE.
Thanks! I'll check and let you know. Was there a specific issue that caused the throughput drop? I'm just curious, and I understand if the problem is too involved to explain or was the result of multiple small things.
Hi Kyle,

Thanks for the update. I tried the latest branch, but I don't see an improvement. Perhaps there is some config parameter that I am missing? I'd love some insight into which knobs I could use to change the performance I am seeing.

Here are my testbed details:
- Physical links (both ends have 9000 MTU): 15.0.113.3 and 15.0.101.3
- RX baseline (on physical link): 9.9 Gbps
- TX baseline (on physical link): 8.32 Gbps
- Output of ovs-ofctl show ovsbr, ovs-dpctl show, and ovs-vsctl show for both servers, plus modinfo openvswitch:

Please let me know if there is any other debug info I can provide. Thanks
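For anyone reproducing this report, the state dumps referenced above can be collected with commands along these lines (standard OVS tools; the bridge name ovsbr is the one used in this setup):

```shell
# Collect OVS state for the bug report (run on each server).
ovs-ofctl show ovsbr      # OpenFlow view of the bridge: ports, MACs, config
ovs-dpctl show            # kernel datapath: ports and flow stats
ovs-vsctl show            # database view: bridges, ports, tunnel options
modinfo openvswitch       # kernel module version and parameters
```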
Radhika, can you do me a favor and baseline this test against GRE tunnels as well? In my testing I was comparing performance against GRE tunnels, and in fact the VXLAN tunnels seem to run slightly faster than GRE. I just compared against non-tunneled traffic, and the drop-off with tunneling is pretty significant; I'll keep digging into that now. But if you could verify that GRE tunnels perform the same as VXLAN, at least that particular performance drop-off can be marked as addressed. Thanks,
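One knob that often dominates tunneled throughput is whether the NIC offloads segmentation and checksumming for the encapsulated stream; if encapsulated packets fall back to software processing, a drop from line rate is expected. A quick check (a diagnostic sketch; p1p1 is the physical NIC from the dumps below, and exact feature names vary by driver):

```shell
# Inspect offload features relevant to tunneled TCP performance.
ethtool -k p1p1 | grep -E 'tcp-segmentation|generic-receive|udp_tnl|checksum'
```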
Thanks Kyle. I'll get back to you with the tests, but I wanted to confirm that in all my previous testing of GRE vs. VXLAN I have always found GRE tunnels to have poorer throughput than VXLAN tunnels. HTH.
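The GRE baseline Kyle asked for can be run by swapping the tunnel port type. A minimal sketch, assuming the bridge, port names, and addresses from the configs below (gre1 is a hypothetical port name):

```shell
# On host 1: replace the VXLAN port with a GRE port to the same peer.
ovs-vsctl del-port ovsbr vx1
ovs-vsctl add-port ovsbr gre1 -- set interface gre1 type=gre \
    options:remote_ip=15.0.101.3
# Mirror this on host 2 with remote_ip=15.0.85.3, then rerun the same iperf test.
```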
The throughput with VXLAN tunneling drops to 6 Gbps:
Client connecting to 10.0.85.3, TCP port 5001
TCP window size: 96.7 KByte (default)
[ 3] local 10.0.101.3 port 51781 connected with 10.0.85.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 7.46 GBytes 6.41 Gbits/sec
I am using an MTU of 9000. Without VXLAN, the same OVS bridge gives 9.89 Gbps.
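One thing worth checking with a 9000-byte MTU everywhere: VXLAN encapsulation adds its own headers, so if the bridge interface also sits at MTU 9000 (as the ifconfig output below shows), encapsulated frames exceed the physical MTU and get fragmented, which alone can cost several Gbps. A rough overhead calculation (assuming IPv4 transport and no VLAN tag):

```shell
# VXLAN-over-IPv4 overhead per packet:
# outer Ethernet + outer IPv4 + outer UDP + VXLAN header.
OUTER_ETH=14; OUTER_IP=20; OUTER_UDP=8; VXLAN_HDR=8
OVERHEAD=$((OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR))
PHYS_MTU=9000
echo "overhead=${OVERHEAD} bytes, max inner MTU=$((PHYS_MTU - OVERHEAD))"
```

So with the physical links at 9000, the ovsbr interface would need an MTU of at most 8950 for tunneled traffic to avoid fragmentation.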
Here are other setup details:
My setup is as follows: OVS (10.0.85.3) <-----VXLAN-----PHYLINK (15.0.85.3) ===============PHYLINK (15.0.101.3)-----VXLAN-----> OVS (10.0.101.3)
PHYLINK is my 10Gbps physical link.
Here is the ovs configuration
Host 1:
ovs-vsctl show
9399da11-e2ae-49d5-b5c8-08c6864ad7ab
Bridge ovsbr
Port "vx1"
Interface "vx1"
type: vxlan
options: {remote_ip="15.0.101.3"}
Port ovsbr
Interface ovsbr
type: internal
Host 2:
ovs-vsctl show
94389297-4998-4960-b120-a83f4f2cc4d1
Bridge ovsbr
Port "vx1"
Interface "vx1"
type: vxlan
options: {remote_ip="15.0.85.3"}
Port ovsbr
Interface ovsbr
type: internal
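For reference, a configuration like the above can be produced with commands along these lines (a sketch; bridge and port names, addresses, and the /16 netmask are taken from the outputs in this report):

```shell
# Host 1: bridge with a VXLAN tunnel port to host 2's physical address.
ovs-vsctl add-br ovsbr
ovs-vsctl add-port ovsbr vx1 -- set interface vx1 type=vxlan \
    options:remote_ip=15.0.101.3
ip addr add 10.0.85.3/16 dev ovsbr
ip link set ovsbr up
# Host 2 mirrors this with remote_ip=15.0.85.3 and address 10.0.101.3/16.
```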
I have a lot of interfaces, so I am showing only the relevant ones:
Host 1:
ovsbr: flags=4163 mtu 9000
inet 10.0.85.3 netmask 255.255.0.0 broadcast 10.0.255.255
inet6 fe80::f0f4:67ff:fe89:2348 prefixlen 64 scopeid 0x20
ether f2:f4:67:89:23:48 txqueuelen 0 (Ethernet)
RX packets 6076890 bytes 51004101244 (47.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1941275 bytes 6326190860 (5.8 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p1p1: flags=4163 mtu 9000
inet 15.0.85.3 netmask 255.255.0.0 broadcast 15.0.255.255
inet6 fe80::92e2:baff:fe26:82d4 prefixlen 64 scopeid 0x20
ether 90:e2:ba:26:82:d4 txqueuelen 1000 (Ethernet)
RX packets 5584777 bytes 49719271982 (46.3 GiB)
RX errors 0 dropped 906 overruns 0 frame 0
TX packets 1729674 bytes 200678713 (191.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Host 2:
ovsbr Link encap:Ethernet HWaddr 16:03:b5:06:56:40
inet addr:10.0.101.3 Bcast:10.0.255.255 Mask:255.255.0.0
inet6 addr: fe80::260:ddff:fe46:515f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:8727829 errors:0 dropped:12 overruns:0 frame:0
TX packets:7412586 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:14258252105 (14.2 GB) TX bytes:156935144728 (156.9 GB)
eth5 Link encap:Ethernet HWaddr 00:60:dd:46:51:5f
inet addr:15.0.101.3 Bcast:15.0.255.255 Mask:255.255.0.0
inet6 addr: fe80::260:ddff:fe46:515f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:56992363 errors:0 dropped:0 overruns:0 frame:0
TX packets:69611348 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:17908270611 (17.9 GB) TX bytes:526649690661 (526.6 GB)
Interrupt:77
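The p1p1 dump above shows 906 RX drops; it may be worth sampling the drop counters before and after an iperf run to see whether they grow during the transfer (a diagnostic sketch; interface name from the dumps above):

```shell
# Sample NIC drop counters on host 1; run once before and once after the test.
for f in rx_dropped tx_dropped; do
    echo "$f=$(cat /sys/class/net/p1p1/statistics/$f)"
done
```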