
KubeOne improve support for MetalLB in L2 mode (Kubeproxy IPVS strict ARP) #1409

Closed · toschneck opened this issue Jul 5, 2021 · 4 comments · Fixed by #1420
Labels
priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

@toschneck
Member

Similar issue as for KKP: kubermatic/kubermatic#7309

KubeOne needs an option to set the strict ARP mode in a way that survives multiple KubeOne release cycles. Otherwise MetalLB could run into problems in customer environments.

Two options (see the sketch after this list):

  • set strict ARP as the default
  • provide a KubeOne option to configure strict ARP
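
For illustration, here is a hedged sketch of what such a KubeOne option could look like in the cluster manifest. The API version, the kubeProxy section, and the strictArp field name are assumptions made for this sketch only, not the shape that was eventually merged in #1420:

apiVersion: kubeone.io/v1beta1   # assumed API version, for illustration only
kind: KubeOneCluster
name: example-cluster
versions:
  kubernetes: "1.21.2"
clusterNetwork:
  kubeProxy:                     # hypothetical section
    ipvs:
      strictArp: true            # would be rendered as strictARP: true in the kube-proxy config

If the kube-proxy mode itself also becomes configurable, a sibling iptables block next to the ipvs one would be the natural place for iptables-specific settings.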
@xmudrii
Member

xmudrii commented Jul 5, 2021

@toschneck Looking at the following references:

It seems like the strict ARP mode is available only if kube-proxy is running in IPVS mode. KubeOne currently doesn't support IPVS mode; instead, it runs kube-proxy in iptables mode (which is the default).

Does this mean that we also need to add support for running kube-proxy in IPVS mode, or can this issue be ignored in the case of KubeOne?
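
As a minimal illustration of that coupling (this is the standard upstream KubeProxyConfiguration, nothing KubeOne-specific), strictARP sits under the ipvs section and only takes effect when kube-proxy runs with mode: ipvs:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs          # strictARP below is ignored in the default iptables mode
ipvs:
  strictARP: true   # what MetalLB in L2 mode needs when kube-proxy uses IPVS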

@toschneck
Member Author

@xmudrii I think we should be consistent here; in KKP we use IPVS as the default. I also had the impression that IPVS was chosen as the default kube-proxy setting in our 1.20.x seed clusters, but I could not verify that 100%. Anyway, the documentation says the default could change at some point (which could cause instability on customer systems), so I would prefer to pin it explicitly to iptables or IPVS, with the option to configure it (both modes should be usable for customers).

In the long term I get the feeling IPVS will become the default anyway, as it has better performance. @rastislavs also recommended it. Maybe you could also check how other vendors set the defaults.
https://kubernetes.io/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-ProxyMode

Regarding strictARP: the only important thing for me is that we can configure it in a long-term stable way, so it does not get overwritten, e.g. by a newer kube-proxy config.
Also, I don't think the setting affects iptables at all, as it sits under the IPVS settings:

apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: true   # the setting relevant for MetalLB in L2 mode
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: ipvs   # strictARP above only applies because kube-proxy runs in IPVS mode
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""

@toschneck
Member Author

@rastislavs
Contributor

IPVS mode would definitely be a nice feature for supporting large-scale environments, as iptables does not scale well beyond several thousand services.
