Add [experimental] option for using IPVS proxy mode #1074
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #1074      +/-   ##
==========================================
+ Coverage   35.13%   35.28%   +0.15%
==========================================
  Files          59       59
  Lines        3393     3401       +8
==========================================
+ Hits         1192     1200       +8
  Misses       2039     2039
  Partials      162      162
==========================================
```

Continue to review the full report at Codecov.
Hi @ivanilves, did you create a cluster using these changes? From what I'm seeing, on 1.8.4 kube-dns and flannel fail to start. I didn't have time to investigate further.
Thanks a lot for your contribution - I'm really looking forward to trying it myself!
Mostly questions/nits. Would you mind addressing these?
```yaml
        - mountPath: /etc/kubernetes/kubeconfig
          name: kubeconfig
          readOnly: true
        - mountPath: /etc/kubernetes/kube-proxy
          name: kube-proxy-config
          readOnly: true
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
```
Can we enclose this volume and the corresponding volumeMount inside `{{if .Experimental.IPVSProxy.Enabled}}`, if it is necessary when and only when IPVSProxy is enabled?
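For illustration, such a guard in the kube-proxy manifest template might look roughly like the sketch below. The `.Experimental.IPVSProxy.Enabled` field follows this PR's naming; the surrounding context and indentation are assumptions:

```yaml
{{ if .Experimental.IPVSProxy.Enabled }}
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
{{ end }}
      volumes:
{{ if .Experimental.IPVSProxy.Enabled }}
        - name: lib-modules
          hostPath:
            path: /lib/modules
{{ end }}
```

This way clusters that keep the default iptables mode don't mount `/lib/modules` into the kube-proxy pod at all.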
test/integration/maincluster_test.go (outdated)
```yaml
ipvsProxy:
  enabled: true
  scheduler: lc
  syncPeriod: 900s
```
Any reference for properly configuring this setting?
At first glance, a sync period of 15 min seems a bit long - does it mean that a newly added pod/svc IP becomes accessible from the entire cluster only after at most 15 min?
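For what it's worth: if the IPVS proxier follows the same convention as the iptables proxier, `syncPeriod` is the maximum interval between *full* rule resyncs, while watch-driven updates from the apiserver are applied as they arrive - so a new pod/svc IP should not have to wait the full 15 min. A more conservative value for the test is easy to sketch anyway (values below are illustrative, not recommendations):

```yaml
ipvsProxy:
  enabled: true
  scheduler: rr    # round-robin; lc (least connection), as used in the test above, is also valid
  syncPeriod: 60s  # shorter full-resync interval than the 900s used in the test
```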
```diff
@@ -1261,6 +1261,14 @@ experimental:
   kube2IamSupport:
     enabled: false
 
+  # Use IPVS kube-proxy mode instead of [default] iptables one (requires Kubernetes 1.8.3+)
+  # This is intended to address performance issues of iptables mode for clusters with big number of nodes and services
+  ipvsProxy:
```
Could you move this to `kubeProxy.ipvsMode`?
Sorry for taking your time, but we're migrating away from the `experimental` settings, because not only `experimental` but everything could change while we're in pre-v1.0 😉
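If the setting moved as suggested, the cluster.yaml section might look like the sketch below. This is a hypothetical layout: `kubeProxy.ipvsMode` and its sub-fields simply mirror the existing `ipvsProxy` fields and are illustrative only:

```yaml
kubeProxy:
  ipvsMode:
    enabled: true
    scheduler: rr
    syncPeriod: 300s
```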
Hi @ivanilves, I managed to fix it with:

```yaml
env:
  - name: KUBEPROXY_MODE
    value: ipvs
command:
  - /hyperkube
  - proxy
  - --config=/etc/kubernetes/kube-proxy/kube-proxy-config.yaml
  - --feature-gates=SupportIPVSProxyMode=true
```

They are mentioned here: However, it started to crash again after rebooting the nodes:

```
I1218 12:01:17.148395       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E1218 12:01:17.148904       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.3.0.1:443: i/o timeout
E1218 12:01:17.149031       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.3.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.3.0.1:443: i/o timeout
I1218 12:01:17.648421       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
```

Also:
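For context, the file passed via `--config` above is a kube-proxy componentconfig. A rough sketch is below; the `apiVersion` differs across releases (`componentconfig/v1alpha1` around 1.8, `kubeproxy.config.k8s.io/v1alpha1` later), so treat the exact fields as assumptions:

```yaml
apiVersion: componentconfig/v1alpha1  # assumption; varies by Kubernetes release
kind: KubeProxyConfiguration
mode: ipvs           # needs --feature-gates=SupportIPVSProxyMode=true on 1.8/1.9
ipvs:
  scheduler: rr      # illustrative; other IPVS schedulers (lc, ...) should also work
  syncPeriod: 60s
```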
OK, at least kube-proxy starts well with the settings from the PR (Using
FYI @camilb I've run a decent amount of cluster creations and upgrades. And I may state this:

and it does not load the DNS service (which has IP 172.16.0.10 in my cluster and is UDP). Still, I can connect to other cluster services (which are TCP) by

I think it's something to do with the way kube-proxy in IPVS mode gets initialized. Meanwhile I'm working towards:

Thank you again for your input!
Hey @camilb I was running the IPVS thing for the last few days, and what I've found:

How could I know it is decent? I've made my own version of hyperkube
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@mumoshu could you take a look at this PR now? (I've changed it.) Due to some concerns about the state of the IPVS integration, I would like to not merge it now, but maintain it until
BTW Heptio tests ran well: https://scanner.heptio.com/d30af2b8fc3d94de31f341036da6165e/diagnostics/
@ivanilves Hi, thanks a lot for your efforts! After seeing your great explanations in cluster.yaml about various gotchas to get it running, I'm willing to merge this now to reduce the hassle of resolving conflicts again and again. Would it be ok for you, too? 😃
@mumoshu YES!!! Please!!! 🙏
@ivanilves Thanks again for your contribution 👍
@ivanilves @mumoshu Tested Google's hyperkube 1.9.1 image with IPVS in #1104.
Great!
👏 👏 👏
These docs mention that IPVS falls back to iptables when kube-proxy is started with the
Add [experimental] option to use IPVS kube-proxy mode instead of the [default] iptables one.
Hope to address performance issues of iptables mode for clusters with a big number of nodes and services!
(>20 nodes, >5k services is a big cluster to me)
We also hope IPVS will handle UDP traffic better (this needs to be validated; a hope-based guess only).
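Based on the fields appearing in this PR's diff and tests, enabling the option from cluster.yaml might look like the following (values are illustrative):

```yaml
experimental:
  # requires Kubernetes 1.8.3+ and the ip_vs kernel modules available on the nodes
  ipvsProxy:
    enabled: true
    scheduler: rr    # IPVS scheduler, e.g. rr (round-robin) or lc (least connection)
    syncPeriod: 300s
```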