Mounts permission denied starting in 33.20210426.3.0 #818
The OS change in question had 45 package updates: https://getfedora.org/en/coreos?stream=stable. Some new observations narrow this to docker- or SELinux-related packages:
I ran into a similar issue; this was due to a change in podman no longer relabeling volumes (containers/podman#10209). Sadly we did not catch it in testing because the volumes of upgraded nodes were already relabeled by the previous version of podman. Only when (re)building a node did this become an issue. As a workaround, I've added manual relabeling in the systemd unit:
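A minimal sketch of such a relabeling drop-in, assuming a podman-managed unit with a host data directory (the unit name, file path, and volume path here are hypothetical, not taken from the original comment):

```ini
# /etc/systemd/system/etcd-member.service.d/10-relabel.conf (hypothetical)
[Service]
# Relabel the host volume so the container can access it again,
# doing manually what older podman versions did automatically.
ExecStartPre=/usr/bin/chcon -R -t container_file_t /var/lib/etcd
```

Podman's own `:z`/`:Z` volume-mount options are the other way to get the same relabeling at mount time.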
For all the …
Just for clarity: do you mean …?
Here's the set of packages that changed:
The container-related updates I see are:
I don't think this is podman-related, since I doubt k8s clusters that use docker-shim leverage podman at all. I don't think … So that would leave …
@dghubble can you try out …?
The policies for Podman and Docker do share access to the same … Looks like Typhoon is doing exactly this: https://github.com/poseidon/typhoon/blob/bc9644371017921fb484c1317026ca25a8734180/aws/fedora-coreos/kubernetes/workers/fcc/worker.yaml#L39-L43
Ahh, didn't realize they were using … That …
@dustymabe yeah, I meant that the docker-shim runtime is affected and containerd is not (testing that configuration is part of why I didn't notice this break). Thanks for mentioning the podman issue containers/podman#10209 @basvdlei. On-host units like etcd and the Kubelet use podman, and I'll check out the AMI mentioned.
Using testing-devel build …
You can also run … and get the same fix.
This is a …
or by executing a separate dummy …
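A hedged sketch of the podman-native variant the thread is pointing at, where a `:z` (shared) or `:Z` (private) volume suffix asks podman to relabel the mount itself; the unit excerpt, image, and paths here are illustrative, not from the thread:

```ini
# Hypothetical systemd unit excerpt
[Service]
# The ':z' suffix tells podman to relabel the volume for container
# access (shared between containers); ':Z' would make it private.
ExecStart=/usr/bin/podman run --rm \
  --volume /var/lib/etcd:/var/lib/etcd:z \
  quay.io/coreos/etcd:v3.4.16
```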
NOTE: as @basvdlei mentioned in #818 (comment), this is not as observable on upgraded nodes, since older versions of podman had already relabeled the volumes.
Thanks all!
Is this likely to make it into an OS image this week? I need to decide between adding a temporary chcon script to the Kubernetes distro or waiting, since right now users can't choose any channel.
Fixes podman selinux labelling regression. coreos/fedora-coreos-tracker#818
I tested FCOS testing …
The fix for this went into next stream release …
The fix for this went into testing stream release …
@dghubble - it will land in …
Yep, thanks for getting this in 👍 |
The fix for this went into stable stream release …
Describe the bug

Kubernetes Pod workloads on nodes starting in `33.20210426.3.0` now see `permission denied` errors accessing mounts.

From my own collection of clusters (a mix of clouds, FCOS/Flatcar, channels, and configurations), I observed this with new clusters created today having all nodes NotReady (flannel, Calico, and Cilium all need to write plugins to a host mount) and other workloads having denials accessing configmaps. Existing clusters on older releases, or new clusters that intentionally use an old release, were OK.
When

For the stable channel, the issue appears in `33.20210426.3.0` (released yesterday). Nodes with the earlier release `33.20210412.3.0` (both existing ones and new ones created today) work as expected.

Scope
Likely across platforms. I've checked AWS and DigitalOcean myself.
Workaround

You can set CNI pods to run privileged (acts as `spc_r`). This allows other pods to start, but any that need to read configmaps or other mounts will see `permission denied` errors. Effectively you cannot get a working cluster with the new OS image.

Reproduction steps
Steps to reproduce the behavior:
System details

`33.20210426.3.0`
Additional information
Examples of DaemonSet manifest host mounts.
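A representative sketch of such a host mount (the names and paths are hypothetical): a CNI plugin's DaemonSet writing into a hostPath volume, which fails when the host directory lacks a `container_file_t` label:

```yaml
# Hypothetical DaemonSet excerpt: CNI plugin writing into a host mount
spec:
  containers:
    - name: install-cni
      volumeMounts:
        - name: cni-bin
          mountPath: /host/opt/cni/bin
  volumes:
    - name: cni-bin
      hostPath:
        path: /opt/cni/bin
```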