
Performance: Throughput with OVS drops from 9.9Gbps to 6Gbps with VXLAN tunneling #4

Closed
radhikaniranjan opened this issue Oct 26, 2012 · 5 comments

@radhikaniranjan

The throughput with VXLAN tunneling drops to 6 Gbps:

Client connecting to 10.0.85.3, TCP port 5001

TCP window size: 96.7 KByte (default)
[ 3] local 10.0.101.3 port 51781 connected with 10.0.85.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 7.46 GBytes 6.41 Gbits/sec
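
For reference, these numbers come from iperf; a run along these lines would produce them (a sketch; flags are assumptions apart from the 10-second duration visible above):

    iperf -s                  # on the server, 10.0.85.3
    iperf -c 10.0.85.3 -t 10  # on the client, 10.0.101.3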

I am using an MTU of 9000. Without VXLAN, the same OVS bridge gives 9.89 Gbps.
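
One thing worth double-checking: VXLAN encapsulation adds roughly 50 bytes (14-byte outer Ethernet + 20-byte outer IP + 8-byte UDP + 8-byte VXLAN header), so with the bridge and the physical link both at MTU 9000, full-size tunneled frames come out at about 9050 bytes and have to be fragmented on the 9000-byte physical link. Lowering the bridge MTU by at least the overhead rules this out (a sketch; ovsbr is the bridge interface shown below):

    ip link set dev ovsbr mtu 8950   # 9000 minus ~50 bytes of VXLAN overhead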

Here are other setup details. My setup is as follows:

OVS (10.0.85.3) <-----VXLAN----- PHYLINK (15.0.85.3) =============== PHYLINK (15.0.101.2) -----VXLAN-----> OVS (10.0.101.2)

PHYLINK is my 10 Gbps physical link.
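
For completeness, a configuration like the one shown below can be built with ovs-vsctl along these lines (a sketch for host 1; the bridge, port name, and peer address come from the output below, everything else is assumed):

    ovs-vsctl add-br ovsbr
    ovs-vsctl add-port ovsbr vx1 -- set interface vx1 type=vxlan options:remote_ip=15.0.101.3
    ip addr add 10.0.85.3/16 dev ovsbr
    ip link set dev ovsbr up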

Here is the ovs configuration

Host 1:
ovs-vsctl show
9399da11-e2ae-49d5-b5c8-08c6864ad7ab
    Bridge ovsbr
        Port "vx1"
            Interface "vx1"
                type: vxlan
                options: {remote_ip="15.0.101.3"}
        Port ovsbr
            Interface ovsbr
                type: internal
Host 2:
ovs-vsctl show
94389297-4998-4960-b120-a83f4f2cc4d1
    Bridge ovsbr
        Port "vx1"
            Interface "vx1"
                type: vxlan
                options: {remote_ip="15.0.85.3"}
        Port ovsbr
            Interface ovsbr
                type: internal
I have a lot of interfaces, so I am showing only the relevant ones:

Host 1:

ovsbr: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 10.0.85.3  netmask 255.255.0.0  broadcast 10.0.255.255
        inet6 fe80::f0f4:67ff:fe89:2348  prefixlen 64  scopeid 0x20<link>
        ether f2:f4:67:89:23:48  txqueuelen 0  (Ethernet)
        RX packets 6076890  bytes 51004101244 (47.5 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1941275  bytes 6326190860 (5.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

p1p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000
        inet 15.0.85.3  netmask 255.255.0.0  broadcast 15.0.255.255
        inet6 fe80::92e2:baff:fe26:82d4  prefixlen 64  scopeid 0x20<link>
        ether 90:e2:ba:26:82:d4  txqueuelen 1000  (Ethernet)
        RX packets 5584777  bytes 49719271982 (46.3 GiB)
        RX errors 0  dropped 906  overruns 0  frame 0
        TX packets 1729674  bytes 200678713 (191.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Host 2:
ovsbr     Link encap:Ethernet  HWaddr 16:03:b5:06:56:40
          inet addr:10.0.101.3  Bcast:10.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::260:ddff:fe46:515f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:8727829 errors:0 dropped:12 overruns:0 frame:0
          TX packets:7412586 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14258252105 (14.2 GB)  TX bytes:156935144728 (156.9 GB)

eth5      Link encap:Ethernet  HWaddr 00:60:dd:46:51:5f
          inet addr:15.0.101.3  Bcast:15.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::260:ddff:fe46:515f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:56992363 errors:0 dropped:0 overruns:0 frame:0
          TX packets:69611348 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:17908270611 (17.9 GB)  TX bytes:526649690661 (526.6 GB)
          Interrupt:77


mestery commented Nov 16, 2012

This should be fixed with my last commit; can you try it and verify? In my setup, VXLAN actually performs even better than GRE.
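
For anyone reproducing this, picking up the fix from the tree would look roughly like the standard out-of-tree OVS build (a sketch; the configure flags and module reload step are assumptions):

    git pull
    ./boot.sh && ./configure --with-linux=/lib/modules/$(uname -r)/build
    make && sudo make install && sudo make modules_install
    sudo rmmod openvswitch && sudo modprobe openvswitch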

mestery closed this as completed Nov 16, 2012
@radhikaniranjan (Author)

Thanks! I'll check and let you know. Out of curiosity, was there a specific issue that caused the throughput drop? I understand if the problem is too involved to explain or was the result of multiple small things.

@radhikaniranjan (Author)

Hi Kyle,

Thanks for the update. I tried the latest branch, but I don't see an improvement. Perhaps there is some config parameter that I am missing? I'd love any insight into which knobs I could tune to change the performance I am seeing.

Here are my testbed details:
Physical links (both ends have 9000 MTU): 15.0.113.3 and 15.0.101.3
Open vSwitch with VXLAN (both ends have 9000 MTU): 11.0.113.3 and 11.0.101.3

RX, baseline (on physical link): 9.9 Gbps
RX, VXLAN (Open vSwitch): 5.42 Gbps

TX, baseline (on physical link): 8.32 Gbps
TX, VXLAN (Open vSwitch): 4.57 Gbps
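
One family of knobs worth checking here is NIC offloads: on kernels of this vintage, VXLAN-encapsulated packets generally cannot be segmented or checksummed by the NIC, so that work falls back to the CPU and tunnel throughput drops well below line rate. Something like the following shows and toggles the relevant settings (a sketch; the interface name is taken from the listings above):

    ethtool -k eth5 | egrep 'offload|segmentation'   # current offload state
    ethtool -K eth5 gro on gso on                    # toggle to compare runs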

Output of ovs-ofctl show, ovs-dpctl show, ovs-vsctl show:
Server 1) ovs-ofctl show ovsbr
OFPT_FEATURES_REPLY (xid=0x1): dpid:00009e713aab6145
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(vx1): addr:5a:30:46:d7:73:b1
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(ovsbr): addr:9e:71:3a:ab:61:45
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

ovs-dpctl show
system@ovs-system:
    lookups: hit:3044610 missed:30 lost:0
    flows: 0
    port 0: ovs-system (internal)
    port 1: ovsbr (internal)
    port 2: vx1 (vxlan: remote_ip=15.0.101.3)

ovs-vsctl show
b7081fed-9be3-4243-893e-94b4b50211d8
    Bridge ovsbr
        Port ovsbr
            Interface ovsbr
                type: internal
        Port "vx1"
            Interface "vx1"
                type: vxlan
                options: {remote_ip="15.0.101.3"}

Server 2) ovs-ofctl show ovsbr
OFPT_FEATURES_REPLY (xid=0x1): dpid:00001eb955408244
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(vx1): addr:5a:b1:02:dd:b7:15
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(ovsbr): addr:1e:b9:55:40:82:44
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

ovs-dpctl show
system@ovs-system:
    lookups: hit:2794001 missed:30 lost:0
    flows: 0
    port 0: ovs-system (internal)
    port 1: ovsbr (internal)
    port 2: vx1 (vxlan: remote_ip=15.0.113.3)

ovs-vsctl show
6a1f48dd-30d8-4d1b-b52a-4698354e5c26
    Bridge ovsbr
        Port "vx1"
            Interface "vx1"
                type: vxlan
                options: {remote_ip="15.0.113.3"}
        Port ovsbr
            Interface ovsbr
                type: internal
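
As an aside, the lookup counters above (hits in the millions, missed:30, lost:0) suggest that virtually all packets hit the kernel flow cache, so flow setup is not the bottleneck. Dumping the datapath flows during an iperf run would confirm the traffic rides a single cached flow (a sketch):

    ovs-dpctl dump-flows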

Here’s the modinfo for the openvswitch module:
Server 1) modinfo openvswitch
filename: /lib/modules/3.5.0-030500-generic/kernel/net/openvswitch/openvswitch.ko
version: 1.9.90
license: GPL
description: Open vSwitch switching datapath
srcversion: 726FDD22BBD59C95CB6769A
depends:
vermagic: 3.5.0-030500-generic SMP mod_unload modversions

Server 2) modinfo openvswitch
filename: /lib/modules/3.5.0-17-generic/kernel/net/openvswitch/openvswitch.ko
version: 1.9.90
license: GPL
description: Open vSwitch switching datapath
srcversion: 726FDD22BBD59C95CB6769A
depends:
vermagic: 3.5.0-17-generic SMP mod_unload modversions
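
Given these 3.5-era kernel modules, per-packet encapsulation cost on the CPU is a plausible suspect; watching whether a single core saturates during the iperf run would narrow it down (a sketch, assuming the sysstat package is installed):

    mpstat -P ALL 1   # per-CPU utilization, sampled every second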

Please let me know if there is any other debug info I can provide.

Thanks
Radhika


mestery commented Nov 19, 2012

Radhika:

Can you do me a favor and baseline this test against GRE tunnels as well? In my testing, I was comparing performance against GRE tunnels, and in fact the VXLAN tunnels seem to run slightly faster than the GRE tunnels. I have just compared against non-tunneled traffic, and the drop-off with tunneling is pretty significant. I'll keep digging into that now. But if you could verify that GRE tunnels perform the same as VXLAN, at least that particular performance drop-off can be marked as addressed.

Thanks,
Kyle
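
For the GRE baseline, the VXLAN port can be swapped for a GRE one on each host with something like this (a sketch; the port name gre1 is an example, and remote_ip is the peer's physical address):

    ovs-vsctl del-port ovsbr vx1
    ovs-vsctl add-port ovsbr gre1 -- set interface gre1 type=gre options:remote_ip=15.0.101.3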

@radhikaniranjan (Author)

Thanks Kyle, I'll get back to you with the tests. I did want to confirm, though, that in all my previous testing of GRE vs. VXLAN, I have always found GRE tunnels to have poorer throughput than VXLAN tunnels. HTH.
