kubeadm cri installation instructions #10186
Conversation
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews @kubernetes/sig-node-pr-reviews
```shell
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -p
```
Should we be telling users to modify `/etc/sysctl.conf` instead of using `echo`? Otherwise `sysctl -p` doesn't really have any effect here, and the changes will not persist through a reboot.
Good point. Maybe we can add a file to `/etc/sysctl.d/`? Something like `99-kubernetes-cri.conf` should do the trick.
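A sketch of that suggested drop-in (filename from the comment above). It is written to a temp file here for illustration; on a real host it would go to `/etc/sysctl.d/99-kubernetes-cri.conf` and be applied with `sysctl --system`:

```shell
# Sketch: persist the three settings from the echo commands above so they
# survive a reboot. Temp file used here; real path: /etc/sysctl.d/99-kubernetes-cri.conf
sysctl_conf="$(mktemp)"
cat > "$sysctl_conf" <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
cat "$sysctl_conf"
# on a real host (as root): sysctl --system
```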
```shell
systemctl start containerd

# Setup kubelet to use containerd runtime endpoint and reload systemd.
cat > /etc/systemd/system/kubelet.service.d/20-cri.conf <<EOF
```
We really shouldn't be telling users to override the unit in this way, instead they should be passing this info in to the kubeadm config.
@detiber Something like

```yaml
nodeRegistration:
  kubeletExtraArgs:
    container-runtime-endpoint: unix:///var/run/containerd/containerd.sock
```

would work?
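Written out as a complete (hypothetical) kubeadm config file, that snippet might look like the following; the `apiVersion`/`kind` values are an assumption for the v1.12 era, and the file is normally passed via `kubeadm init --config <file>`:

```shell
# Hypothetical kubeadm configuration carrying the kubeletExtraArgs snippet
# above. apiVersion/kind are assumptions; written to a temp file for illustration.
kubeadm_cfg="$(mktemp)"
cat > "$kubeadm_cfg" <<EOF
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    container-runtime: remote
    container-runtime-endpoint: unix:///var/run/containerd/containerd.sock
EOF
cat "$kubeadm_cfg"
```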
@vincepri thanks a lot for the PR! Added some minor comments.
@@ -0,0 +1,144 @@
---
we should move this doc somewhere else. `reference/setup-tools/kubeadm/` is the folder for the kubeadm CLI flags docs, so move the `kubeadm init` content too. my initial suggestion would be `docs/setup/independent`, if this guide is kubeadm related. if not, it has to go somewhere else.
Agreed. The goal is to have this as a general CRI installation document and reference it as needed. Where would you suggest it reside?
the structure of the k8s.io documentation is a bit confusing. we have both `tasks` and `setup`, and the answer to the question is difficult. i would add this under a new folder under `docs/setup/` called `cri` (title: `Kubernetes CRI`). the folder needs an `_index.md` like this one to set the title: https://github.com/kubernetes/website/tree/master/content/en/docs/setup/independent. the name `cri-installation.md` as a file under that folder SGTM.
content/en/docs/reference/setup-tools/kubeadm/cri-installation.md
```yaml
---
reviewers:
- vincepri
title: CRI Installation
```
please lowercase "Installation". i think we should keep this consistent for other pages too.
thank you for the update @vincepri
@neolit123 @detiber Thank you for the feedback! I addressed all the comments. Let me know if there is anything else.
Signed-off-by: Vince Prignano <vince@vincepri.com>
Refer to the [official Docker installation guides](https://docs.docker.com/engine/installation/) for more information.
Refer to the [CRI installation](/docs/setup/cri/cri-installation/) guide for more information.
We will need to outline that Docker is still the default and, if another CRI is used, outline the command-line overrides for `kubeadm init` / `kubeadm join` below.
You might say something like:
By default, Kubernetes is configured to work with Docker (18.06). In order to enable other CRIs, please consult the https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/ and https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/ instructions.
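Concretely, the command-line overrides being referred to might look like this sketch (the socket path and the join placeholders are assumptions; the commands are only printed here, and on a real node they would be run as root):

```shell
# Sketch of pointing kubeadm init/join at a non-default CRI socket.
# Socket path is an assumption; commands are printed, not executed.
cri_socket="/run/containerd/containerd.sock"
echo "kubeadm init --cri-socket ${cri_socket}"
echo "kubeadm join <control-plane-host>:<port> --cri-socket ${cri_socket} --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
```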
Is the Docker version relevant? In the CRI docs instructions we're still suggesting installing 17.03; should we update it? This is especially relevant for Ubuntu 18.04.
so that's a good question. 17.03 is the last version verified by SIG Node and, as Tim mentioned yesterday, they are not going to verify a newer version. these docs seem out of date: https://docs.docker.com/release-notes/docker-ce/ i think this shows the truth: https://github.com/docker/docker-ce/releases (18.06.1-ce: latest edge; 18.03.1-ce: latest stable).
I think we should include the information you just shared in the CRI document, under the Docker section. The `install-kubeadm` one can be left version-less?
it really depends on what version is tested with k8s, because we cannot recommend even a stable docker version if it's broken with k8s latest. i will defer to others for more comments.
@timothysc thoughts/preferences on the above?
@vincepri PSA, we are soon going to merge a PR that updates the max validated version of Docker for kubeadm to 18.06: kubernetes/kubernetes#68495. we need to reflect this in the docs as the recommended version, or at least attempt to, given all the distro flavors that we have to support.
/assign @Bradamant3
Since v1.6.0, Kubernetes has enabled the use of CRI, the Container Runtime Interface, by default. The container runtime used by default is Docker, which is enabled through the built-in `dockershim` CRI implementation inside of the `kubelet`.
From v1.12.0 the suggested kubeadm CRI is containerd. For further information refer to the [CRI Installation](/docs/setup/cri/cri-installation/) instructions.
Can you elaborate on this a bit? Why is containerd the suggested kubeadm CRI? Why not CRI-O, or both?
I believe this was due to the lack of tests with docker, not totally sure about CRI-O. I'll defer to @timothysc.
@neolit123 once we have CI-Signal back we need to double check the versions.
my vote would be to not go with this statement for 1.12.0. docker is currently the CRI for most kubeadm users, and all of the kubeadm e2e tests here use docker too: https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-all
I agree with @neolit123 here. Let's not promote switching to other runtimes just yet. Let's provide more information on how to set up CRI runtimes and how to use them. That should be enough for 1.12 I believe.
we are pending a decision for this. i don't have strong opinions here. the state of the sig-node tests for containerd is yellow-ish: https://k8s-testgrid.appspot.com/sig-node-containerd we do not have any tests for kubeadm and containerd yet, but our docker tests are passing at least.
```shell
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=$RUNTIME_ENDPOINT"
EOF
systemctl daemon-reload
```
The instructions to update the systemd config were quite useful. I'd suggest not removing them.
Given that this is now covered by kubeadm directly, I think it's safe to remove. The kubelet systemd example should live under kubelet docs.
Current documentation uses both ways of configuring the kubelet: a configuration file and systemd drop-ins. In my opinion, removing this can create confusion among users who prefer the latter approach.
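For reference, the drop-in approach this comment defends, reassembled from the two diff hunks in this PR, would look roughly like the sketch below. The `RUNTIME_ENDPOINT` value is an assumption (the diff leaves it as a variable), and a temp file is used here instead of the real path `/etc/systemd/system/kubelet.service.d/20-cri.conf`:

```shell
# Reassembled sketch of the kubelet systemd drop-in from the diff hunks above.
# RUNTIME_ENDPOINT value is an assumption; temp file used for illustration.
RUNTIME_ENDPOINT="unix:///run/containerd/containerd.sock"
dropin="$(mktemp)"
cat > "$dropin" <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=$RUNTIME_ENDPOINT"
EOF
cat "$dropin"
# on a real host: systemctl daemon-reload && systemctl restart kubelet
```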
```diff
@@ -249,7 +249,7 @@ networking:
   podSubnet: ""
   serviceSubnet: 10.96.0.0/12
 nodeRegistration:
-  criSocket: /var/run/dockershim.sock
+  criSocket: /var/run/containerd/containerd.sock
```
I'd leave this as is, as Docker is still the default runtime.
Sounds good, thanks!
```diff
@@ -368,41 +368,22 @@ Here's a breakdown of what/why:
   certificates from the `kube-apiserver` when the certificate expiration approaches.
 * `--cert-dir` the directory where the TLS certs are located.
 
-### Use kubeadm with other CRI runtimes
+### Use kubeadm with containerd
```
I'd propose to leave this as it is and add setup instructions for at least containerd and CRI-O.
We could add instructions for other runtimes with kubeadm to this page later on, does that sound good? We should definitely track these.
I think it would be better to leave links to the other runtimes' documentation even if we don't have our own instructions for them. People would at least know that those runtimes exist and can potentially be used.
Let's get the CI signal back, double check versions and loop back to this once the mains are back online.
After installing containerd, you should set `--cri-socket` in `kubeadm init` and `kubeadm reset`. Alternatively, instead of command-line flags, supply the containerd socket in your kubeadm configuration as shown in the example below:

```yaml
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
```
I did a fresh pull of containerd and the socket path I get is `/run/containerd/containerd.sock`.
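A quick check (a sketch) for which socket path a given host actually has; on most distros `/var/run` is a symlink to `/run`, so both paths usually resolve to the same socket:

```shell
# Probe both commonly documented containerd socket paths; prints only the
# ones that exist as sockets on this host (may print nothing if containerd
# is not installed).
for s in /run/containerd/containerd.sock /var/run/containerd/containerd.sock; do
  if [ -S "$s" ]; then
    echo "found: $s"
  fi
done
```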
Closing in favor of #10299
Fixes kubernetes/kubeadm#1086
Fixes #9692