docs, gsg: minor edits to kpr guide and note on hybrid use #16169

Merged 1 commit on May 17, 2021
35 changes: 23 additions & 12 deletions Documentation/gettingstarted/kubeproxy-free.rst
@@ -40,17 +40,6 @@ installation of the ``kube-proxy`` add-on:

kubeadm init --skip-phases=addon/kube-proxy

For existing installations with ``kube-proxy`` running as a DaemonSet, remove it
by using the following commands:

.. code:: bash

kubectl -n kube-system delete ds kube-proxy
# Delete the configmap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer)
kubectl -n kube-system delete cm kube-proxy
# Run on each node:
iptables-restore <(iptables-save | grep -v KUBE)

Afterwards, join worker nodes by specifying the control-plane node IP address and
the token returned by ``kubeadm init``:

@@ -68,6 +57,19 @@ the token returned by ``kubeadm init``:
each node has an ``InternalIP`` which is assigned to a device with the same
name on each node.
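
For illustration, joining a worker node might look like the following sketch; the
control-plane address, token, and CA cert hash are placeholders for the values
printed by ``kubeadm init`` on your cluster:

.. code:: bash

   # Placeholder values: substitute the control-plane IP and the
   # token/hash printed by ``kubeadm init``.
   kubeadm join 192.168.0.10:6443 --token <token> \
       --discovery-token-ca-cert-hash sha256:<hash>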

For existing installations with ``kube-proxy`` running as a DaemonSet, remove it
by using the following commands. **Careful:** Be aware that this will break
existing service connections, and service-related traffic will be interrupted
until the Cilium replacement has been installed:

.. code:: bash

kubectl -n kube-system delete ds kube-proxy
# Delete the configmap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade (works only for K8s 1.19 and newer)
kubectl -n kube-system delete cm kube-proxy
# Run on each node:
iptables-restore <(iptables-save | grep -v KUBE)
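
As a sanity check (a sketch, assuming the standard kubeadm labels), you can
verify that no ``kube-proxy`` pods remain and that the ``KUBE-*`` iptables
chains were flushed on each node:

.. code:: bash

   # Should report "No resources found" once the DaemonSet is deleted
   kubectl -n kube-system get pods -l k8s-app=kube-proxy
   # On each node: should print no KUBE-* chains after the restore above
   iptables-save | grep KUBE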

.. include:: k8s-install-download-release.rst

Next, generate the required YAML files and deploy them. **Important:** Replace
@@ -1004,7 +1006,16 @@ Cilium's eBPF kube-proxy replacement can be configured in several modes, i.e. it
can replace kube-proxy entirely or it can co-exist with kube-proxy on the system if the
underlying Linux kernel requirements do not support a full kube-proxy replacement.

This section therefore elaborates on the various ``kubeProxyReplacement`` options:
**Careful:** When deploying the eBPF kube-proxy replacement to co-exist with
kube-proxy on the system, be aware that both mechanisms operate independently of
each other. This means that if the eBPF kube-proxy replacement is added to or
removed from an already *running* cluster in order to delegate operation from or
back to kube-proxy, existing connections must be expected to break since, for
example, the two NAT tables are not aware of each other. If deployed in
co-existence on a newly spawned node/cluster which does not yet serve user
traffic, this is not an issue.
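
For illustration, the mode is typically chosen at install time via the Helm
value ``kubeProxyReplacement`` (a sketch, assuming the usual Cilium chart name
and the ``kube-system`` namespace):

.. code:: bash

   # Sketch: select the kube-proxy replacement mode at install time.
   # ``strict`` assumes a kube-proxy-free cluster; the other modes
   # discussed below allow co-existence with kube-proxy.
   helm install cilium cilium/cilium --namespace kube-system \
       --set kubeProxyReplacement=strict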

This section elaborates on the various ``kubeProxyReplacement`` options:

- ``kubeProxyReplacement=strict``: This option expects a kube-proxy-free
Kubernetes setup where Cilium is expected to fully replace all kube-proxy