
"invalid entry point PID of container" when /var/lib/flatpak is mounted nosuid and/or nodev #1084

Closed
tonywalker1 opened this issue Aug 10, 2022 · 10 comments
Labels
1. Bug Something isn't working

Comments

@tonywalker1

Describe the bug
For security in-depth, I have /var/lib/flatpak mounted as a Btrfs subvolume with nosuid and nodev. When /var/lib/flatpak is mounted nosuid, nodev, and both, I can't enter a toolbox. Creating a toolbox succeeds in all cases (no mount options, nosuid only, nodev only, and both nosuid and nodev). Entering the toolbox does not depend on mount options at the time of creation: mount options only matter when entering. All toolboxes tested were default. I have never observed a problem with images created with podman or docker: only toolbox.

NOTE: The last line of the output of podman start --attach stuff included below seems significant ("Unknown error 5005").

Because toolbox is very important for Fedora Silverblue and defence in depth is also important, I would like to use toolbox on Silverblue with /var/lib/flatpak mounted nosuid and nodev.

Steps how to reproduce the behaviour

  1. Mount /var/lib/flatpak with nosuid and/or nodev.
  2. Run toolbox create stuff
  3. Run toolbox enter stuff

Expected behaviour
Enter the toolbox as expected.

Actual behaviour
toolbox enter stuff fails with "Error: invalid entry point PID of container stuff"

Screenshots
Probably not helpful here.

Output of toolbox --version (v0.0.90+)
toolbox version 0.0.99.3

Toolbox package info (rpm -q toolbox)
fedora:fedora/36/x86_64/silverblue 36.20220810.0 (2022-08-10T00:46:50Z)

Output of podman version
podman version 4.1.1

Podman package info (rpm -q podman)
fedora:fedora/36/x86_64/silverblue 36.20220810.0 (2022-08-10T00:46:50Z)

Info about your OS
fedora:fedora/36/x86_64/silverblue 36.20220810.0 (2022-08-10T00:46:50Z)

Additional context

podman start --attach stuff

```
level=debug msg="Running as real user ID 0"
level=debug msg="Resolved absolute path to the executable as /usr/bin/toolbox"
level=debug msg="TOOLBOX_PATH is /usr/bin/toolbox"
level=debug msg="Migrating to newer Podman"
level=debug msg="Setting up configuration"
level=debug msg="Setting up configuration: file /etc/containers/toolbox.conf not found"
level=debug msg="Setting up configuration: file /root/.config/containers/toolbox.conf not found"
level=debug msg="Resolving image name"
level=debug msg="Distribution (CLI): ''"
level=debug msg="Image (CLI): ''"
level=debug msg="Release (CLI): ''"
level=debug msg="Resolved image name"
level=debug msg="Image: 'fedora-toolbox:36'"
level=debug msg="Release: '36'"
level=debug msg="Resolving container name"
level=debug msg="Container: ''"
level=debug msg="Image: 'fedora-toolbox:36'"
level=debug msg="Release: '36'"
level=debug msg="Resolved container name"
level=debug msg="Container: 'fedora-toolbox-36'"
level=debug msg="Creating /run/.toolboxenv"
level=debug msg="Monitoring host"
level=debug msg="Path /run/host/etc exists"
level=debug msg="Resolved /etc/localtime to *<deleted>*"
level=debug msg="Creating regular file /etc/machine-id"
level=debug msg="Binding /etc/machine-id to /run/host/etc/machine-id"
level=debug msg="Creating directory /run/libvirt"
level=debug msg="Binding /run/libvirt to /run/host/run/libvirt"
level=debug msg="Creating directory /run/systemd/journal"
level=debug msg="Binding /run/systemd/journal to /run/host/run/systemd/journal"
level=debug msg="Creating directory /run/systemd/resolve"
level=debug msg="Binding /run/systemd/resolve to /run/host/run/systemd/resolve"
level=debug msg="Creating directory /run/udev/data"
level=debug msg="Binding /run/udev/data to /run/host/run/udev/data"
level=debug msg="Creating directory /tmp"
level=debug msg="Binding /tmp to /run/host/tmp"
level=debug msg="Creating directory /var/lib/flatpak"
level=debug msg="Binding /var/lib/flatpak to /run/host/var/lib/flatpak"
mount: /var/lib/flatpak: filesystem was mounted, but any subsequent operation failed: Unknown error 5005.
Error: failed to bind /var/lib/flatpak to /run/host/var/lib/flatpak
```
@tonywalker1 tonywalker1 added the 1. Bug Something isn't working label Aug 10, 2022
@iavael

iavael commented Sep 14, 2022

I hit exactly the same problem.

@py0xc3

py0xc3 commented Jun 30, 2023

The issue still occurs in 0.0.99.4. I haven't dug deeper so far, but once I added nodev,nosuid,noexec to /var, the issue occurred with the same journalctl error log* as shown above by @tonywalker1.

However, although the error log in journalctl is always the same, the error output of toolbox in the terminal differs, depending on how/when I try to enter the container:
Error: failed to initialize container abc
or
Error: invalid entry point PID of container abc


*

```
mount: /var/lib/flatpak: filesystem was mounted, but any subsequent operation failed: Unknown error 5005.
Error: failed to bind /var/lib/flatpak to /run/host/var/lib/flatpak
```

@py0xc3

py0xc3 commented Jun 30, 2023

I just tested it with all three of nodev,nosuid,noexec individually (so, each time with only one of the three set): each of the three on its own breaks toolbox. The error is always the same.

But on my system, it does not help to make /var/lib/flatpak a separate subvolume, because the error then simply moves to the next directory:

```
mount: /var/lib/systemd/coredump: filesystem was mounted, but any subsequent operation failed: Unknown error 5005.
Error: failed to bind /var/lib/systemd/coredump to /run/host/var/lib/systemd/coredump
```

@woolsgrs

woolsgrs commented Jul 7, 2023

I do not fully understand why, but if these bind mounts are not mounted "ro", this error does not happen for me. Currently:

```go
	{"/etc/machine-id", "/run/host/etc/machine-id", "ro"},
	{"/run/libvirt", "/run/host/run/libvirt", ""},
	{"/run/systemd/journal", "/run/host/run/systemd/journal", ""},
	{"/run/systemd/resolve", "/run/host/run/systemd/resolve", ""},
	{"/run/systemd/sessions", "/run/host/run/systemd/sessions", ""},
	{"/run/systemd/system", "/run/host/run/systemd/system", ""},
	{"/run/systemd/users", "/run/host/run/systemd/users", ""},
	{"/run/udev/data", "/run/host/run/udev/data", ""},
	{"/run/udev/tags", "/run/host/run/udev/tags", ""},
	{"/tmp", "/run/host/tmp", "rslave"},
	{"/var/lib/flatpak", "/run/host/var/lib/flatpak", "ro"},
	{"/var/lib/libvirt", "/run/host/var/lib/libvirt", ""},
	{"/var/lib/systemd/coredump", "/run/host/var/lib/systemd/coredump", "ro"},
	{"/var/log/journal", "/run/host/var/log/journal", "ro"},
	{"/var/mnt", "/run/host/var/mnt", "rslave"},
	}
)
```

If this is changed to:

```go
	{"/run/libvirt", "/run/host/run/libvirt", ""},
	{"/run/systemd/journal", "/run/host/run/systemd/journal", ""},
	{"/run/systemd/resolve", "/run/host/run/systemd/resolve", ""},
	{"/run/systemd/sessions", "/run/host/run/systemd/sessions", ""},
	{"/run/systemd/system", "/run/host/run/systemd/system", ""},
	{"/run/systemd/users", "/run/host/run/systemd/users", ""},
	{"/run/udev/data", "/run/host/run/udev/data", ""},
	{"/run/udev/tags", "/run/host/run/udev/tags", ""},
	{"/tmp", "/run/host/tmp", "rslave"},
	{"/var/lib/flatpak", "/run/host/var/lib/flatpak", ""},
	{"/var/lib/libvirt", "/run/host/var/lib/libvirt", ""},
	{"/var/lib/systemd/coredump", "/run/host/var/lib/systemd/coredump", ""},
	{"/var/log/journal", "/run/host/var/log/journal", ""},
	{"/var/mnt", "/run/host/var/mnt", "rslave"},
	}
)
```

I do not understand why these need to be "ro" in toolbox, as you would not have access unless you were truly root?

@debarshiray
Member

Duplicate of #911

@debarshiray debarshiray marked this as a duplicate of #911 Jul 12, 2023
@debarshiray
Member

> I do not fully understand why but if these bind mounts are not mounted "ro" this error does not happen for me, currently

Well spotted!

The command that fails looks like this:

```
mount --rbind -o ro /run/host/var/lib/flatpak /var/lib/flatpak
```

The way mount(8) and mount(2) work, leaving out -o ro means that the bind mount keeps the same mount options as the underlying mount. If -o ro is used, then mount(8) will try to clear all other options (like nodev, noexec and nosuid) from the underlying mount and apply only ro. In the case of a Toolbx container this runs into an EPERM, because the bind mount is being attempted in the container's mount and user namespaces while the underlying mount belongs to the parent host namespace.
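The inheritance half of this can be sketched without root using unprivileged user namespaces (the paths here are illustrative, not the ones toolbox uses):

```shell
# Sketch: a plain bind mount (no -o) inherits the source mount's options.
# Runs inside an unprivileged user+mount namespace, so no root is needed.
unshare --user --map-root-user --mount sh -euc '
  mkdir -p /tmp/demo-src /tmp/demo-dst
  mount -t tmpfs -o nosuid,nodev tmpfs /tmp/demo-src
  # No options given: the bind mount keeps nosuid,nodev from the source.
  mount --rbind /tmp/demo-src /tmp/demo-dst
  grep " /tmp/demo-dst " /proc/self/mounts
'
```

Adding -o ro to the same bind mount instead asks for the inherited flags to be cleared; when the source mount was created by a more privileged namespace (as with the host's /var/lib/flatpak seen from inside the container), those flags are locked by the kernel and the attempt is refused.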

> I do not understand why these need to be "ro" in toolbox as you would not have access unless truly root?

We had originally thrown in ro here and there just to be safe because a user isn't expected to write to these locations from inside the container. There was no thorough testing done with restricted mount options. I agree that we should take them out.

@debarshiray
Member

@woolsgrs I have a rough patch ready for this and I would like to acknowledge you in the commit message. What name should I use? Your GitHub profile says Si. Is that a placeholder or your legal name?

@debarshiray
Member

Closing in favour of the older duplicate: #911

@woolsgrs

Thanks much, Si is good

@debarshiray
Member

> Thanks much Si is good

Okay!
