
What is the real performance of flannel #738

Closed
jhaohai opened this issue May 31, 2017 · 2 comments

jhaohai commented May 31, 2017

Hi all,

We are deploying an OpenShift platform and I did some simple investigation of flannel's performance.
Here is our environment:
AWS VPC cn-north-1
CentOS 7.3 with enhanced networking enabled
kernel 3.10.0-514.21.1.el7.x86_64
flannel 0.7.0 with vxlan backend

Below is my procedure:

  1. start an iperf3 server on host01
    iperf3 -s

  2. iperf test against the host IP from another host
    iperf3 -c hostip -t 60

  3. iperf test against the overlay IP from another host
    iperf3 -c overlayip -t 60
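A consolidated version of the steps above (a sketch using iperf3 throughout; the host and overlay addresses are placeholders to replace with your own):

```shell
# Run on host01 (server side):
#   iperf3 -s
# Run on the second host (client side), once against the host address and
# once against the flannel overlay address:
HOST_IP=10.0.0.2        # placeholder: host01's VPC address
OVERLAY_IP=10.244.1.2   # placeholder: host01's flannel overlay address
iperf3 -c "$HOST_IP"    -t 60   # baseline over the native host network
iperf3 -c "$OVERLAY_IP" -t 60   # same transfer over the vxlan overlay
```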

From our test results I saw 99.5% of native host network performance.
I ran this test for both c4 and m4 instance types, and with instances in different subnets and availability zones.

However, I searched Google and most people say the performance is approximately 50%.

Did I do something wrong in this test? Or does flannel automatically leverage the AWS network infrastructure to accelerate vxlan?


tomdee commented Jun 9, 2017

I think your test is fine. vxlan can do many Gbits/sec, so you will only see a performance drop relative to no encapsulation if you have a fast enough network.
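For context on the fixed cost of the encapsulation itself (a rough sketch, assuming a standard 1500-byte MTU and the usual vxlan-over-IPv4 framing): the added headers account for only a few percent of a full-size frame, so large drops like 50% come from per-packet CPU work, not from the header overhead.

```shell
# Fixed per-packet overhead of vxlan encapsulation: outer Ethernet (14) +
# outer IPv4 (20) + UDP (8) + VXLAN header (8) = 50 bytes, which is why
# flannel's vxlan interface typically runs with an MTU 50 bytes below the
# underlying NIC's.
awk 'BEGIN {
  overhead = 14 + 20 + 8 + 8   # bytes of outer headers per packet
  mtu = 1500                   # typical host-interface MTU
  printf "inner MTU %d, header tax %.1f%%\n", mtu - overhead, overhead/mtu*100
}'
# Prints: inner MTU 1450, header tax 3.3%
```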

tomdee closed this as completed Jun 9, 2017

FarhadF commented Jan 14, 2018

Just to confirm @tomdee, I performed this benchmark:
On a virtualized host (VMware ESXi) I have two VMs running Debian Stretch. On both, flannel (flanneld v0.9.1) is configured to use the vxlan backend. The VMs are on the same host and communicate over a virtual network, which is why the numbers are so high.

Test Results:

  1. Using the VM's network interface (VM-to-VM traffic on the same host):
iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.161.75, port 44782
[  5] local 192.168.161.219 port 5201 connected to 192.168.161.75 port 44784
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   765 MBytes  6.41 Gbits/sec                  
[  5]   1.00-2.00   sec   809 MBytes  6.79 Gbits/sec                  
[  5]   2.00-3.00   sec   828 MBytes  6.94 Gbits/sec                  
[  5]   3.00-4.00   sec  1022 MBytes  8.57 Gbits/sec                  
[  5]   4.00-5.00   sec   686 MBytes  5.75 Gbits/sec                  
[  5]   5.00-6.00   sec   671 MBytes  5.63 Gbits/sec                  
[  5]   6.00-7.00   sec   823 MBytes  6.90 Gbits/sec                  
[  5]   7.00-8.00   sec   798 MBytes  6.70 Gbits/sec                  
[  5]   8.00-9.00   sec   747 MBytes  6.26 Gbits/sec                  
[  5]   9.00-10.00  sec   796 MBytes  6.67 Gbits/sec                  
[  5]  10.00-11.00  sec   879 MBytes  7.37 Gbits/sec                  
[  5]  11.00-12.00  sec   792 MBytes  6.65 Gbits/sec                  
[  5]  12.00-13.00  sec   767 MBytes  6.43 Gbits/sec                  
[  5]  13.00-14.00  sec   934 MBytes  7.84 Gbits/sec                  
[  5]  14.00-15.00  sec   798 MBytes  6.70 Gbits/sec                  
[  5]  15.00-16.00  sec   708 MBytes  5.94 Gbits/sec                  
[  5]  16.00-17.00  sec   776 MBytes  6.51 Gbits/sec                  
[  5]  17.00-18.00  sec   731 MBytes  6.13 Gbits/sec                  
[  5]  18.00-19.00  sec   833 MBytes  6.99 Gbits/sec                  
[  5]  19.00-20.00  sec   788 MBytes  6.61 Gbits/sec                  
[  5]  20.00-21.00  sec   759 MBytes  6.37 Gbits/sec                  
[  5]  21.00-22.00  sec   820 MBytes  6.88 Gbits/sec                  
[  5]  22.00-23.00  sec   774 MBytes  6.49 Gbits/sec                  
[  5]  23.00-24.00  sec   771 MBytes  6.47 Gbits/sec                  
[  5]  24.00-25.00  sec   817 MBytes  6.85 Gbits/sec                  
[  5]  25.00-26.00  sec   813 MBytes  6.82 Gbits/sec                  
[  5]  26.00-27.00  sec   753 MBytes  6.32 Gbits/sec                  
[  5]  27.00-28.00  sec   838 MBytes  7.03 Gbits/sec                  
[  5]  28.00-29.00  sec   798 MBytes  6.70 Gbits/sec                  
[  5]  29.00-30.00  sec   821 MBytes  6.88 Gbits/sec                  
[  5]  30.00-30.04  sec  25.9 MBytes  5.50 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-30.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-30.04  sec  23.4 GBytes  6.68 Gbits/sec                  receiver

So 23.4 GBytes of data transferred in 30 seconds, with an average bandwidth of 6.68 Gbits/sec.

  2. Using the flannel vxlan IP:
Accepted connection from 10.201.51.0, port 41426
[  5] local 10.201.8.1 port 5201 connected to 10.201.51.0 port 41428
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   543 MBytes  4.55 Gbits/sec                  
[  5]   1.00-2.00   sec   374 MBytes  3.14 Gbits/sec                  
[  5]   2.00-3.00   sec   411 MBytes  3.45 Gbits/sec                  
[  5]   3.00-4.00   sec   367 MBytes  3.08 Gbits/sec                  
[  5]   4.00-5.00   sec   383 MBytes  3.22 Gbits/sec                  
[  5]   5.00-6.00   sec   339 MBytes  2.84 Gbits/sec                  
[  5]   6.00-7.00   sec   350 MBytes  2.93 Gbits/sec                  
[  5]   7.00-8.00   sec   361 MBytes  3.03 Gbits/sec                  
[  5]   8.00-9.00   sec   402 MBytes  3.37 Gbits/sec                  
[  5]   9.00-10.00  sec   364 MBytes  3.06 Gbits/sec                  
[  5]  10.00-11.00  sec   406 MBytes  3.41 Gbits/sec                  
[  5]  11.00-12.00  sec   349 MBytes  2.92 Gbits/sec                  
[  5]  12.00-13.00  sec   322 MBytes  2.70 Gbits/sec                  
[  5]  13.00-14.00  sec   371 MBytes  3.11 Gbits/sec                  
[  5]  14.00-15.00  sec   367 MBytes  3.08 Gbits/sec                  
[  5]  15.00-16.00  sec   367 MBytes  3.08 Gbits/sec                  
[  5]  16.00-17.00  sec   398 MBytes  3.34 Gbits/sec                  
[  5]  17.00-18.00  sec   322 MBytes  2.70 Gbits/sec                  
[  5]  18.00-19.00  sec   367 MBytes  3.07 Gbits/sec                  
[  5]  19.00-20.00  sec   339 MBytes  2.84 Gbits/sec                  
[  5]  20.00-21.00  sec   360 MBytes  3.02 Gbits/sec                  
[  5]  21.00-22.00  sec   356 MBytes  2.99 Gbits/sec                  
[  5]  22.00-23.00  sec   359 MBytes  3.01 Gbits/sec                  
[  5]  23.00-24.00  sec   375 MBytes  3.15 Gbits/sec                  
[  5]  24.00-25.00  sec   336 MBytes  2.81 Gbits/sec                  
[  5]  25.00-26.00  sec   338 MBytes  2.83 Gbits/sec                  
[  5]  26.00-27.00  sec   483 MBytes  4.05 Gbits/sec                  
[  5]  27.00-28.00  sec   496 MBytes  4.16 Gbits/sec                  
[  5]  28.00-29.00  sec   359 MBytes  3.01 Gbits/sec                  
[  5]  29.00-30.00  sec   355 MBytes  2.98 Gbits/sec                  
[  5]  30.00-30.04  sec  23.7 MBytes  4.70 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-30.04  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-30.04  sec  11.1 GBytes  3.17 Gbits/sec                  receiver

So 11.1 GBytes of data transferred in 30 seconds, with an average bandwidth of 3.17 Gbits/sec.
I'm seeing about a 53% degradation in performance.
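A quick check of that arithmetic, using the 30-second averages from the two iperf3 summaries above:

```shell
# Relative throughput of the overlay run vs. the native run.
awk 'BEGIN {
  native  = 6.68   # Gbits/sec over the VM network interface
  overlay = 3.17   # Gbits/sec over the flannel vxlan interface
  printf "retained %.1f%% of native, lost %.1f%%\n",
         overlay/native*100, (1 - overlay/native)*100
}'
# Prints: retained 47.5% of native, lost 52.5%
```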
