Changing `cri.name=containerd` to `cri=nil` still leaves `containerd` running on the node as systemd service #4254
Comments
I think what we're seeing here is two things coming together. WDYT?

/assign
As the …

After re-configuring the pool, a new node gets created and the old one gets reconfigured to also use …

So in contrast to my initial assumptions above, the only solution seems to be to include the `cri.name` in the name computation.

Note that I wasn't able to reproduce this the other way: when changing from `cri=nil` to `cri.name=containerd`. I'm not sure why this is different?
Our change didn't quite work as expected: #4390
We agreed that we're not going to fix this one, but rather wait until everyone is on k8s >= 1.22, where this issue goes away automatically (as there's no way to configure `docker` as container runtime anymore).
OK, thanks @voelzmo, then let's close this.
How to categorize this issue?
/kind bug
/priority 3
What happened:
When changing a worker pool from using `containerd` as a container runtime to `docker`, the new nodes still run `containerd` as a systemd service.
- The kubelet is correctly configured to not use `containerd` as external container runtime.
- The node is correctly reported to use the `docker` runtime.

What you expected to happen:
Only docker is started on the new nodes. There should be no signs of `containerd`.
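For reference, the worker pool change in question can be sketched as a Shoot manifest fragment (field names follow the Gardener Shoot API; the worker name and values are illustrative, not taken from this report):

```yaml
# Illustrative Shoot fragment: worker pool explicitly pinned to containerd
spec:
  provider:
    workers:
    - name: worker-a      # hypothetical pool name
      cri:
        name: containerd
```

Removing the whole `cri` section (i.e. `cri=nil`) is what currently makes the pool fall back to the `docker` default.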
How to reproduce it (as minimally and precisely as possible):
1. Configure a worker pool with `cri.name=containerd`. This gets you a node using `containerd` as container runtime.
2. Remove the `cri` and `cri.name` properties from your worker pool. This gets you a node using `docker` as container runtime, as this is the current default when `cri==nil`.
3. Use `kubectl get nodes -o wide` to verify you're getting a new node with `docker` as container runtime while the old node with `containerd` container runtime is drained and deleted.

Anything else we need to know?:
The `userData` for the new node seems to incorrectly contain `containerd` (thanks @prashanth26):
- the `operatingsystemconfig`-original is older than the `machineset` and has `spec.criconfig.name=containerd`; `.status.units` has `containerd` as a unit
- the `operatingsystemconfig` is re-used for the new `machineset`, due to how we compute the name: `gardener/pkg/operation/botanist/component/extensions/operatingsystemconfig/operatingsystemconfig.go`, line 330 in 2e0ba2a
If we want to allow changing `cri.name` on existing worker pools, we also need to consider this property in the name computation.

Environment:
- Kubernetes version (use `kubectl version`): 1.21.0