
systemd-nspawn fails to mount /sys/fs/selinux in the container after dcff2fa #16032

Closed
amessina opened this issue Jun 1, 2020 · 3 comments · Fixed by #16194

amessina commented Jun 1, 2020

systemd version the issue has been seen with

systemd 245 (v245.6-1.fc32)
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified

Used distribution

Fedora 32 x86_64

Unexpected behaviour you saw

systemd-nspawn[312147]: Bind-mounting /sys/fs/selinux on /var/lib/machines/fedora/sys/fs/selinux (MS_BIND "")...
systemd-nspawn[312147]: Failed to mount /sys/fs/selinux (type n/a) on /var/lib/machines/fedora/sys/fs/selinux (MS_BIND ""): No such file or directory
systemd-nspawn[312147]: Remounting /var/lib/machines/fedora/sys/fs/selinux (MS_RDONLY|MS_NOSUID|MS_NODEV|MS_NOEXEC|MS_REMOUNT|MS_BIND "")...
systemd-nspawn[312147]: Failed to mount n/a (type n/a) on /var/lib/machines/fedora/sys/fs/selinux (MS_RDONLY|MS_NOSUID|MS_NODEV|MS_NOEXEC|MS_REMOUNT|MS_BIND ""): No such file or directory

Steps to reproduce the problem
After upgrading to systemd-245.6-1.fc32.x86_64, which includes the fix for #15475 (dcff2fa), the /sys/fs/selinux directory is no longer created in the container and the errors above are logged.
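
For context, the "No such file or directory" failures above match what mount(2) returns when the bind-mount target directory is missing. A minimal sketch (hypothetical target path, run with CAP_SYS_ADMIN) that reproduces this error class:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>

int main(void) {
        /* Hypothetical target path; it intentionally does not exist. */
        if (mount("/sys/fs/selinux", "/tmp/does-not-exist/selinux",
                  NULL, MS_BIND, NULL) < 0)
                fprintf(stderr, "mount failed: %s\n", strerror(errno));
        /* Expected when privileged: "mount failed: No such file or directory" */
        return 0;
}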

While this may be unrelated: the container still shows SELinux as disabled -- so if containers can't support SELinux inside, why bind-mount this directory at all? (Just a question -- I really don't know.)

Inside the container...

~]# getenforce
Disabled

~]# ll -a /sys/fs
total 0
drwxr-xr-x. 3 root root  60 Jun  1 10:28 .
dr-xr-xr-x. 9 root root 180 Jun  1 10:28 ..
drwxr-xr-x. 5 root root   0 Jun  1 10:28 cgroup
poettering added this to the v246 milestone Jun 2, 2020
poettering (Member) commented

While this may be unrelated: the container still shows SELinux as disabled -- so if containers can't support SELinux inside, why bind-mount this directory at all? (Just a question -- I really don't know.)

We mount it into the container in read-only fashion. That tells the payload that while SELinux is available, it shouldn't make use of it, i.e. the libselinux libraries explicitly check the R/O status of selinuxfs and understand that read-only means "existent, but not managed by us".

Hiding selinuxfs in the container entirely doesn't really work, since the fact that SELinux is enabled leaks all over the place in /proc. Moreover, PID 1 generally assumes that whatever isn't mounted already it needs to mount itself, and we prefer to keep the same code paths whether systemd runs in a container or on bare metal. Hence, instead of not mounting selinuxfs (which would make PID 1 try to mount it itself), we pre-mount it but mark it read-only, explicitly communicating that the container payload should neither mount it nor manage it.
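
To make the libselinux point above concrete: a minimal sketch (not the actual libselinux code) of how a container payload could tell the two cases apart by checking whether selinuxfs is mounted read-only, e.g. via statvfs(3):

#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
        struct statvfs sv;

        /* No selinuxfs mounted at all: SELinux looks absent. */
        if (statvfs("/sys/fs/selinux", &sv) < 0) {
                printf("selinuxfs not mounted\n");
                return 1;
        }

        if (sv.f_flag & ST_RDONLY)
                printf("selinuxfs is read-only: present, but managed by the host, not by us\n");
        else
                printf("selinuxfs is writable: SELinux may be managed from here\n");
        return 0;
}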

keszybz (Member) commented Jun 2, 2020

Hmm, I don't see this with the latest git (nor with systemd-245.5-2.fc33.x86_64). What command are you running exactly? What mode is SELinux in on the host?

poettering added a commit to poettering/systemd that referenced this issue Jun 16, 2020
Since systemd#15533 we no longer created the mount point for selinuxfs.

Before that, we created it twice, because we mount selinuxfs twice: once for the superblock, and once when we remount its bind mount read-only. The second mkdir would mean we'd chown() the host version of selinuxfs (since there's only one selinuxfs superblock kernel-wide).

The right time to create the mount point is once: before we mount selinuxfs, but not a second time for the remount.

Fixes: systemd#16032
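
A minimal sketch of the sequence the commit message describes, with simplified error handling rather than the actual systemd-nspawn helpers (the container root path below is just an example):

#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>

static int mount_selinuxfs_ro(const char *target) {
        /* Create the mount point exactly once, before the first mount.
         * (A real implementation would also create parent directories
         * and tolerate EEXIST.) */
        if (mkdir(target, 0755) < 0) {
                perror("mkdir");
                return -1;
        }

        /* Bind-mount the host selinuxfs onto it. */
        if (mount("/sys/fs/selinux", target, NULL, MS_BIND, NULL) < 0) {
                perror("bind mount");
                return -1;
        }

        /* Remount the bind mount read-only. No second mkdir/chown here:
         * the target now resolves inside selinuxfs itself, and there is
         * only one selinuxfs superblock kernel-wide. */
        if (mount(NULL, target, NULL,
                  MS_REMOUNT | MS_BIND | MS_RDONLY | MS_NOSUID | MS_NODEV | MS_NOEXEC,
                  NULL) < 0) {
                perror("remount read-only");
                return -1;
        }

        return 0;
}

int main(void) {
        return mount_selinuxfs_ro("/var/lib/machines/fedora/sys/fs/selinux") < 0;
}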
poettering (Member) commented

Fix is waiting in #16194

keszybz pushed a commit that referenced this issue Jun 23, 2020
vbatts pushed a commit to kinvolk/systemd that referenced this issue Nov 12, 2020
(cherry picked from commit 6fe01ce)