perf (throughput, latency) results compared to other k8s networking solutions #102
Deployed a 4-node cluster with kube-router on AWS, with 2 nodes each in the us-west-2a and us-west-2b zones, on t2.medium instances.
For pod-to-pod connectivity of pods on the same node, below are the iperf3 results.
For pod-to-pod connectivity of pods across nodes in the same zone, below are the iperf3 results.
For pod-to-pod connectivity of pods across nodes in different zones, below are the iperf3 results.
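For reference, a minimal sketch of how these pod-to-pod iperf3 runs could be reproduced is below. The pod names, the `networkstatic/iperf3` image, and the node names are assumptions for illustration, not details from the original setup.

```bash
# A sketch only: pod names, the image, and the node names are assumptions.

# Start an iperf3 server pod pinned to a specific node (node name is a placeholder).
kubectl run iperf3-server --image=networkstatic/iperf3 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-a"}}' -- -s

# Wait for it to come up and grab its pod IP.
kubectl wait --for=condition=Ready pod/iperf3-server
SERVER_IP=$(kubectl get pod iperf3-server -o jsonpath='{.status.podIP}')

# Run the client on the same node (same-node case), another node in the same AZ,
# or a node in the other AZ (cross-AZ case) by changing "nodeName".
kubectl run iperf3-client --image=networkstatic/iperf3 --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-a"}}' \
  --attach --rm -- -c "$SERVER_IP" -t 30
```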
Results with Calico
Across pods on the same node
Across pods on different nodes in the same zone
Across pods on different nodes in different zones
Results for Flannel
Pods on the same node
Pods across the nodes in the same AZ
Pods across the nodes in different AZs
Results for Weave
Pods on the same node
Pods across the nodes in the same AZ
Pods across the nodes in different AZs
@murali-reddy have you guys looked into why Weave, which uses VXLAN tunneling (fast datapath is still VXLAN in the kernel with OVS), is significantly faster and more stable (esp. for cross-AZ traffic) than kube-router and Calico, which should always use in-kernel L3 datapaths? I expected kube-router and Calico to always be faster than VXLAN-based solutions... I'm also more interested in perf results with NetworkPolicy turned on with DefaultDeny.
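For the NetworkPolicy/DefaultDeny case, one way to express that setup with the NetworkPolicy API is sketched below; the namespace, pod labels, and the explicit allow rule for the iperf3 pods are assumptions, not part of the original comment.

```bash
# Placeholder namespace and labels; a namespace-wide deny-all policy plus an
# explicit allow for the iperf3 traffic is one way to run the "DefaultDeny" test.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: perf-test
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
  - Ingress                # no ingress rules listed -> all inbound traffic denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-iperf3
  namespace: perf-test
spec:
  podSelector:
    matchLabels:
      app: iperf3-server   # the server pod would need this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: iperf3-client
    ports:
    - protocol: TCP
      port: 5201           # iperf3 default port
EOF
```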
I would be curious to see what the results are with: a) jumbo frames
We already have a cluster set up with 10, 40 and 100G nodes and PerfSonar measurements between all hosts, and so far the results are not great because of the lack of jumbo frame support. Once it's done, I can provide some info.
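As a side note, whether jumbo frames actually survive the path end to end can be checked with standard Linux tools; a small sketch is below (the interface name and peer address are placeholders).

```bash
# Interface name and peer address below are placeholders.

# MTU configured on the node NIC (9000/9001 would indicate jumbo frames).
ip link show eth0 | grep -o 'mtu [0-9]*'

# Path MTU actually usable toward another node; tracepath reports the PMTU.
tracepath -n 10.0.1.20

# Force a ~9000-byte packet with fragmentation prohibited:
# 8972 payload + 20 (IP) + 8 (ICMP) = 9000. If this fails, jumbo frames
# are not usable somewhere on the path.
ping -M do -s 8972 -c 3 10.0.1.20
```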
Closing this as old.
It's good to get comparative performance numbers like in this Kubernetes Network Performance Testing document.
Throughput results for ClusterIP/NodePort services should be a lot better with kube-router due to the use of IPVS. But instead of hand-waving, we need to get real, reproducible metrics to prove that.
Using iperf to get the kube-router results should be straightforward.
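A hedged sketch of what such a ClusterIP-service measurement could look like is below; the pod/service names, port, and image are assumptions for illustration only.

```bash
# Pod/service names, image, and port are placeholders.

# Server pod plus a ClusterIP Service in front of it.
kubectl run iperf3-server --image=networkstatic/iperf3 --restart=Never \
  --labels=app=iperf3-server -- -s
kubectl expose pod iperf3-server --name=iperf3-svc --port=5201 --target-port=5201

# The client measures throughput through the ClusterIP rather than the pod IP,
# so the IPVS-based service path is what gets exercised.
SVC_IP=$(kubectl get svc iperf3-svc -o jsonpath='{.spec.clusterIP}')
kubectl run iperf3-client --image=networkstatic/iperf3 --restart=Never \
  --attach --rm -- -c "$SVC_IP" -t 30
```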