Containerd Support #94

Closed
de13 opened this issue Dec 3, 2017 · 39 comments


de13 commented Dec 3, 2017

As Docker is not my first choice of runtime engine for worker nodes, is there any chance of including support for containerd on the roadmap?

gz#6781

@superseb superseb self-assigned this Feb 1, 2018
@saphoooo

Bump. Containerd is now production-ready and looks like a better choice for K8s than docker-ce.


hanej commented Sep 19, 2018

Is this officially on the roadmap?


egernst commented Feb 21, 2019

Curious if there are any updates or visibility into how this fits into the roadmap. Thanks!

@deniseschannon deniseschannon added this to the Backlog milestone Apr 8, 2019
@sboulkour

I'd be interested in CRI-O as well, especially since it joined CNCF.

@althunibat

what is the status now?

@mitchellmaler

@superseb Since the kubelet supports using containerd out of the box (supported today with kubeadm), could there be a way to mount that socket into the kubelet container and set the 'container-runtime-endpoint' option?

There really should be a way to mount any supported kubelet container runtime endpoint. It's unfortunate that this has been put off for so long.


DerFetzer commented Oct 10, 2019

It could work with something like the following:

```yaml
services:
  kubelet:
    extra_binds:
      - "/run/containerd/containerd.sock:/run/containerd/containerd.sock"
    extra_args:
      container-runtime: remote
      container-runtime-endpoint: unix:///run/containerd/containerd.sock
```

But be aware that I have not tested this yet! I hope to find some time over the next couple of days.
Maybe someone else could give it a try?

Edit: Since /run is already bind-mounted into the kubelet container, you could probably omit extra_binds.
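
If anyone does test this, one quick sanity check (plain kubectl, nothing RKE-specific) is the runtime column of the node listing:

```bash
# The CONTAINER-RUNTIME column should report containerd://<version>
# instead of docker://<version> once the kubelet override takes effect.
kubectl get nodes -o wide
```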


mjpitz commented Oct 10, 2019

@DerFetzer : I tried something like that, but an issue comes up in RKE where it cross-checks the pods coming up within the cluster (such as the networking plugin). These checks are coded to look in Docker for the container, so when you configure the kubelet to use containerd to run the processes, cluster creation inevitably fails because RKE can't find the container in Docker.
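
To make the mismatch concrete, here is a hedged sketch of what you would see on a node with the override applied, assuming crictl is installed there:

```bash
# The pod containers are visible through containerd's CRI endpoint...
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps

# ...but not through Docker, which is where RKE's checks look, so they fail.
docker ps
```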

@DerFetzer

OK, thank you very much! I was afraid there would be some kind of catch.
I'll have a quick look at the code sometime.


ninja- commented Jan 4, 2020

having this would be cool

@StingRayZA

Hi!
Any updates on this? Having a runtime that works natively with CentOS/RHEL 8 would be great (CRI-O?)
-R

@immanuelfodor

I have a couple of CentOS 8 VMs ready to be transformed into k8s (RKE) nodes, and I've just run into the weirdness of the docker-ce installation.
There is no containerd package available for Docker 19 on CentOS 8, so you either go with Docker 18 or manually install a compatible containerd from an RPM. I went down that road, and while I could start Docker containers, they had no DNS resolution (and thus no networking) by default, as CentOS 8 also moved to nftables under firewalld. I had to add masquerading to the public zone; the alternative was to disable firewalld completely, which I didn't want. Either way it doesn't matter much, because Docker containers' exposed ports end up reachable as if there were no firewalling at all.
Now I'm considering putting all these VMs into a separate VLAN and disabling firewalld completely to get rid of the iptables errors produced by Docker. However, I'm not so sure that if I install RKE on these VMs, everything will just work with the underlying nftables/iptables change.
If you have a better alternative to docker-ce that can run on CentOS 8 with firewalld enabled, with a pinky promise that it will work as intended, I'm all ears :D This is how I found CRI-O as a possible alternative to docker-ce that might work with RKE out of the box, but looking through this thread is not so promising.
Is there anybody here who has managed to run an RKE cluster on CentOS 8 and can confirm that it works with no or only minimal workarounds, either on docker-ce or on CRI-O? If yes, how did you do it? :)
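
For anyone hitting the same DNS/networking problem, the masquerade workaround mentioned above looks roughly like this (assuming your interfaces sit in the default public zone):

```bash
# NAT container traffic leaving the host; works around Docker's iptables rules
# not lining up with nftables-backed firewalld on CentOS 8.
sudo firewall-cmd --zone=public --add-masquerade --permanent
sudo firewall-cmd --reload
```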


Sharma-Rajat commented Sep 30, 2020

I came across this today as well. We really need to have CRI-O support for RKE. Is anyone on the RKE side exploring this? @superseb


ninja- commented Oct 1, 2020

@Sharma-Rajat I moved on from Rancher to MicroK8s, which does have containerd. It's fine, if not better, for prod use.

@mitchellmaler

RKE2 will ship with integrated containerd just like k3s, with the option of bringing your own CRI as well. We'll just have to wait until it goes GA.

@Sharma-Rajat

@mitchellmaler that's really awesome. Do you know where I can find some projected dates for this?

@mitchellmaler

@Sharma-Rajat I do not, I have just been following the project as a community member.
You can check out the project here along with how to test it out: https://github.com/rancher/rke2

The Rancher 2.5 wiki does mention RKE2, so it could be "released" soon:
https://github.com/rancher/rancher/wiki/Rancher-2.5

@immanuelfodor

With Kubernetes deprecating dockershim, and with it Docker as a runtime, containerd and/or CRI-O support is even more of an issue.


irLinja commented Dec 6, 2020

@superseb
Can we expect to have CRI-O as the runtime, or as an optional runtime, in RKE?
I'm going to use MicroOS as the cluster node OS, where both CRI-O and Podman are installed out of the box. Also, k8s is deprecating dockershim starting from 1.20, with complete removal in 1.23 in late 2021.
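
Untested, but by analogy with the containerd override earlier in this thread, the CRI-O variant of the cluster.yml snippet would presumably look like this (CRI-O's default socket path assumed; RKE's Docker-based checks would likely still get in the way):

```yaml
services:
  kubelet:
    extra_binds:
      - "/var/run/crio/crio.sock:/var/run/crio/crio.sock"
    extra_args:
      container-runtime: remote
      container-runtime-endpoint: unix:///var/run/crio/crio.sock
```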


dirien commented Dec 10, 2020

+1 from my side. With the latest announcements about removing Docker support, we see a real threat to the future of RKE.


mscbpi commented Dec 11, 2020

It's RKE2. RKE1 depends on Docker, IMHO.


hanej commented Dec 11, 2020

RKE2 should have been mentioned by Rancher months ago. I didn't know this even existed.


mscbpi commented Dec 11, 2020

Importantly, RKE2 does not rely on Docker as RKE1 does. RKE1 leveraged Docker for deploying and managing the control plane components as well as the container runtime for Kubernetes. RKE2 launches control plane components as static pods, managed by the kubelet. The embedded container runtime is containerd.

It is known as RKE2 because it is the future of the RKE distribution. Right now, it is entirely independent from RKE1, but our next phase of development will focus on a seamless upgrade path and feature parity with RKE1 when integrated with the Rancher multi-cluster management platform.

Once we've completed the upgrade path and Rancher-integration feature parity work, RKE1 and RKE Government will converge into a single distribution.
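
You can see that design directly on an RKE2 server node: the control plane components are plain static pod manifests on disk (path current as of recent RKE2 releases, so treat it as an assumption):

```bash
# Control plane components run as static pods managed by the kubelet,
# not as Docker containers the way RKE1 runs them.
ls /var/lib/rancher/rke2/agent/pod-manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```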

@mdl-oerag

When will RKE2 be released or integrated into RKE? Is a roadmap available?

@immanuelfodor

You can follow the migration issue here: rancher/rke2#562

@tomerleib

@immanuelfodor so containerd will only be supported in RKE2?
If existing users of RKE don't plan to migrate to RKE2 and want to use Kubernetes 1.20, it won't work?

@immanuelfodor

Dockershim is only deprecated in 1.20; you can still use RKE until 1.23.

> Can I still use Docker in Kubernetes 1.20?
> Yes, the only thing changing in 1.20 is a single warning log printed at kubelet startup if using Docker as the runtime.
>
> When will dockershim be removed?
> Given the impact of this change, we are using an extended deprecation timeline. It will not be removed before Kubernetes 1.22, meaning the earliest release without dockershim would be 1.23 in late 2021. We will be working closely with vendors and other ecosystem groups to ensure a smooth transition and will evaluate things as the situation evolves.

https://kubernetes.io/blog/2020/12/02/dockershim-faq/

@tomerleib

And once 1.23 is reached, will it mean the end of RKE v1? I'm asking because I'm in the middle of migrating our clusters to RKE.

@immanuelfodor

I'd migrate to RKE2 at this point 😃

I'm not part of the Rancher team, so I can only assume they won't release a new hyperkube image or update the rke binary beyond hyperkube patch versions after 1.22.

@tomerleib

Yes, I figured that out from your profile :)
I too had thoughts of running RKE2 from the beginning; however, there are some issues in RKE2 that block such a migration, so this is something to consider before moving on.

@immanuelfodor

Yes, it's still in the works but looks very promising and much easier to set up. What is the blocker for you, if I may ask?

Note: the RKE->RKE2 migration issue rancher/rke2#562 is part of the Rancher v2.6 milestone, which is due April 21 (https://github.com/rancher/rke2/milestone/12), so we should have a migration option before 1.23 is released in late 2021.


@superseb (Contributor)

The dockershim deprecation is covered in https://rancher.zendesk.com/hc/en-us/articles/360053308831-Rancher-Operational-Advisory-Related-to-deprecation-of-dockershim-in-Kubernetes-v1-20. The part that is being removed is separately maintained by Mirantis and is already available (see #2565); it will continue to work after the deprecation.

For any other runtime support, you can look at the alternatives mentioned here, but there are currently no plans to support anything else in RKE.


rodnymolina commented Aug 25, 2021

@immanuelfodor, @tomerleib, @irLinja, @Sharma-Rajat or anyone interested in running RKE with CRI-O right now: you can use the Sysbox runtime installer to get both CRI-O and the Sysbox runtime installed on your k8s nodes. From that moment on you can deploy your pods the usual way; it's up to you whether to rely on the Sysbox low-level runtime or the traditional OCI runc one.

See here for more details.

@immanuelfodor

> Pods launched with the Sysbox Community Edition are limited to 16 pods per worker node.


rodnymolina commented Aug 26, 2021

> Pods launched with the Sysbox Community Edition are limited to 16 pods per worker node.

Not quite. That limit only applies to pods that rely on containers launched in different user-namespaces for extra isolation (rootless). You can still launch as many traditional pods as your hardware allows, so there's no limit in the typical scenario.

Also, we will be adjusting this logic soon to ensure that those 16 pods only account for Sysbox pods, so that users can launch as many oci-runc pods as they wish, regardless of their 'rootless' character.
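
For anyone evaluating this, the choice between the two runtimes is made per pod via its RuntimeClass. A minimal sketch, assuming the installer registered the usual sysbox-runc RuntimeClass (the annotation is illustrative; check the Sysbox docs for the exact form):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rootless-demo
  annotations:
    # Ask CRI-O to place the pod in its own user namespace (Sysbox-style isolation).
    io.kubernetes.cri-o.userns-mode: "auto:size=65536"
spec:
  runtimeClassName: sysbox-runc  # omit this line to fall back to regular runc
  containers:
    - name: app
      image: nginx
```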


Nowaker commented Oct 27, 2021

For visibility:


testn commented Oct 28, 2021

@Nowaker do you recommend going with RKE2 for new Rancher clusters?

@cloudcafetech

But what is the option for RKE1 to use containerd as the CRI?
