
waiting very long for "x11docker=ready" | Xorg in rootless podman | bubblewrap setups #466

Open
jonleivent opened this issue Aug 20, 2022 · 72 comments


@jonleivent

In Fedora CoreOS (in a VirtualBox VM), with the latest images of docker.io/x11docker/xserver and docker.io/x11docker/fluxbox, using version 7.4.2 of the x11docker script:
x11docker -D -V --backend=podman --desktop x11docker/fluxbox
loops apparently forever with:

...
DEBUGNOTE[16:27:10,027]: waitforlogentry(): tailstdout: Waiting since 703s for log entry "x11docker=ready" in store.info
DEBUGNOTE[16:27:10,027]: waitforlogentry(): tailstderr: Waiting since 703s for log entry "x11docker=ready" in store.info
...

I'm assuming docker.io/x11docker/{xserver,fluxbox} are your images. Am I wrong? Also, doesn't the x11docker script test such things?

@jonleivent
Author

Also tried x11docker/openbox and x11docker/kde-plasma, and I get the same looping behavior.

@jonleivent
Author

The above turned out to be a python issue (I was trying to use a containerized python, and it wasn't working right). After I installed python3 (uncontainerized), I get a different problem: (EE) xf86OpenConsole: Cannot open virtual console 8 (Permission denied). So I sudo chowned /dev/tty8, but that resulted in (EE) xf86OpenConsole: Switching VT failed.

@mviereck
Owner

Thank you for the report!

The above turned out to be a python issue (I was trying to use a containerized python, and it wasn't working right).

Was this a custom setup of yours or something more common that x11docker should check and be able to handle?

I get a different problem: (EE) xf86OpenConsole: Cannot open virtual console 8 (Permission denied). So I sudo chowned /dev/tty8, but that resulted in (EE) xf86OpenConsole: Switching VT failed.

Likely x11docker wants to run Xorg on a different tty, but your system is not configured to allow this.
You can either run with sudo x11docker [...] or configure Xorg to allow the start. Compare https://github.com/mviereck/x11docker/wiki/Setup-for-option---xorg
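For the second route, a sketch of /etc/X11/Xwrapper.config on the host (these are the same two lines the x11docker/xserver image sets, quoted later in this thread):

allowed_users=anybody
needs_root_rights=yes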

@jonleivent
Author

jonleivent commented Aug 21, 2022

The setup is very pure: Start with Fedora CoreOS (which comes with podman), install absolutely nothing else on it (although I needed python for your script - more on that later...), and use the x11docker script with the xserver container and one or more window manager or desktop environment containers to get a choice of desktop environments running on it. I can use either VirtualBox or Qemu/KVM to house the CoreOS install for experimentation, but eventually my goal is a bare metal install of CoreOS (with no additional installs on it) with a fully podman-containerized, single-user, no-remote-access desktop environment.

I will try sudo x11docker and report back, but I want to run x11docker completely rootless. If I wanted to configure Xorg, but am using the x11docker/xserver container, would I need to rebuild the xserver container to do so, or is there a path into its configuration from some x11docker script arg? Note: I may want to build my own xserver container anyway as I don't need nxagent or xpra (or xfishtank!), also probably not the hacked MIT-SHM (because nothing will be remote), but would benefit from virtualbox-guest-x11. Do you have advice on doing so? I see that the x11docker script is checking for config and labeling of the xserver container.

BTW: about x11docker script requirement for python. It seems the requirement is very light. Possibly the script would work with just using podman inspect --format, or by using jq (which is available probably wherever podman or docker are, and comes installed on CoreOS). Of course, my case is extreme, as CoreOS does not have any version of python installed, and I don't want to install one (although I did so due to x11docker's requirement).
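For illustration, the kind of JSON lookup involved could be done either way; both lines below use real podman/jq syntax, with the image labels as an arbitrary example field:

podman inspect --format '{{.Config.Labels}}' x11docker/xserver
podman inspect x11docker/xserver | jq '.[0].Config.Labels'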

@mviereck
Owner

I will try sudo x11docker and report back, but I want to run x11docker completely rootless. If I wanted to configure Xorg, but am using the x11docker/xserver container, would I need to rebuild the xserver container to do so, or is there a path into its configuration from some x11docker script arg?

Oh, right, you are already using x11docker/xserver. The configuration of Xwrapper.config or running as root for --xorg should only be needed if using Xorg from host.
Here on Debian I don't have a configured Xwrapper.config on the host, only in the image x11docker/xserver.
The lines in the Dockerfile are:

# configure Xorg wrapper
RUN echo 'allowed_users=anybody' >/etc/X11/Xwrapper.config && \
    echo 'needs_root_rights=yes' >>/etc/X11/Xwrapper.config

TTY switching works fine without root.
IIRC this succeeded in a fedora desktop VM, too.
I might set up a fedora CoreOS VM to reproduce your issue.

Note: I may want to build my own xserver container anyway as I don't need nxagent or xpra (or xfishtank!), also probably not the hacked MIT-SHM (because nothing will be remote), but would benefit from virtualbox-guest-x11. Do you have advice on doing so? I see that the x11docker script is checking for config and labeling of the xserver container.

You can reduce the Dockerfile of x11docker/xserver to your needs. Below is a proposal for a Dockerfile reduced to Xorg. I've removed some of the tools, too (including the cute xfishtank); the LABEL list of available tools might be wrong now and would need a closer check:

FROM debian:bullseye

# cleanup script for use after apt-get
RUN echo '#! /bin/sh\n\
env DEBIAN_FRONTEND=noninteractive apt-get autoremove --purge -y\n\
apt-get clean\n\
find /var/lib/apt/lists -type f -delete\n\
find /var/cache -type f -delete\n\
find /var/log -type f -delete\n\
exit 0\n\
' > /apt_cleanup && chmod +x /apt_cleanup

# X servers
RUN apt-get update && \
    env DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        xserver-xorg \
        xserver-xorg-legacy && \
    /apt_cleanup

# Window manager openbox with disabled context menu
RUN apt-get update && \
    env DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        openbox && \
    sed -i /ShowMenu/d         /etc/xdg/openbox/rc.xml && \
    sed -i s/NLIMC/NLMC/       /etc/xdg/openbox/rc.xml && \
    /apt_cleanup

# tools
RUN apt-get update && \
    env DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        catatonit \
        procps \
        psmisc \
        psutils \
        x11-utils \
        x11-xkb-utils \
        x11-xserver-utils \
        xauth \
        xinit && \
    /apt_cleanup

# configure Xorg wrapper
RUN echo 'allowed_users=anybody' >/etc/X11/Xwrapper.config && \
    echo 'needs_root_rights=yes' >>/etc/X11/Xwrapper.config

# HOME
RUN mkdir -p /home/container && chmod 777 /home/container
ENV HOME=/home/container

LABEL options='--xorg'
LABEL tools='catatonit cvt glxinfo setxkbmap \
             xauth xdpyinfo xdriinfo xev \
             xhost xinit xkbcomp xkill xlsclients xmessage \
             xmodmap xprop xrandr xrefresh xset xsetroot xvinfo xwininfo'
LABEL options_console='--xorg'
LABEL windowmanager='openbox'

ENTRYPOINT ["/usr/bin/catatonit", "--"]

BTW: about x11docker script requirement for python. It seems the requirement is very light. Possibly the script would work with just using podman inspect --format, or by using jq (which is available probably wherever podman or docker are, and comes installed on CoreOS). Of course, my case is extreme, as CoreOS does not have any version of python installed, and I don't want to install one (although I did so due to x11docker's requirement).

I am not entirely happy about the python dependency. x11docker also supports nerdctl, which does not support nerdctl inspect --format yet. I've also tried jq, but found that it is not installed by default everywhere, whereas python seemed to be a widespread standard that can be expected.
I am considering checking for jq if python is not installed and using it in that case.
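A minimal sketch of such a fallback, with a hypothetical helper json_labels() that reads podman inspect output on stdin:

if command -v python3 >/dev/null 2>&1; then
    # read the first inspect entry's labels via the stdlib json module
    json_labels() { python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["Config"]["Labels"])'; }
elif command -v jq >/dev/null 2>&1; then
    json_labels() { jq -r '.[0].Config.Labels'; }
fi
podman inspect x11docker/xserver | json_labels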

@jonleivent
Author

Running sudo x11docker... does not work as it won't find the user's pulled images when running as root. In other words, sudo podman images returns nothing even though the user running podman images has x11docker/xserver, x11docker/fluxbox among others.

@mviereck
Owner

Running sudo x11docker... does not work as it won't find the user's pulled images when running as root. In other words, sudo podman images returns nothing even though the user running podman images has x11docker/xserver, x11docker/fluxbox among others.

Ok, right; I got a bit confused here, sorry.

Now I remember: Running Xorg within a container works only with rootful podman.
So you would need at least x11docker/xserver in rootful podman.
To run containers in rootless podman, but with root for the Xorg container:

sudo x11docker --backend=podman --rootless --xorg [...]

To avoid the need for sudo, you would have to install Xorg on the host.
I failed to get Xorg running in rootless podman and I am not sure if it is possible at all.

@mviereck mviereck changed the title waiting very long for "x11docker=ready" waiting very long for "x11docker=ready" & Xorg in rootless podman Aug 21, 2022
@mviereck mviereck added the bug label Aug 21, 2022
@jonleivent
Author

Isn't the --xorg going to bypass my --xc=podman requirement to use the container version of xserver? It seems to do that when I try it, as it gives me the x11docker ERROR: Did not find a possibility to provide a display. error message. Dropping --xorg and keeping --xc=podman, I get back to my earlier (EE) xf86OpenConsole: Cannot open virtual console 8 (Permission denied) error in the log, even though now using sudo. Could this be a selinux issue about consoles?

Note that a wayland-based x11docker desktop such as x11docker/kde-plasma does no better. I have seen that Fedora Silverblue and Kinoite (the KDE variety of the same family) run wayland as the user, not as root. So that suggests the possibility of a container doing so on CoreOS. But, using x11docker/kde-plasma as the target desktop container did not change matters - I again get the same Cannot open virtual console 8... error in the log file.

@jonleivent
Author

BTW: if you want to get CoreOS up and running in VirtualBox quickly, I can help with that. It's not your typical install-from-ISO-as-CD distro.

@mviereck
Owner

Isn't the --xorg going to bypass my --xc=podman requirement to use the container version of xserver?

You don't need to specify option --xc if image x11docker/xserver is available. x11docker uses it automatically (and says so in a message).
Regardless of whether the image is available or --xc is specified, you can choose a desired X server, here with --xorg.

It seems to do that when I try it, as it gives me the x11docker ERROR: Did not find a possibility to provide a display. error message. Dropping --xorg and keeping --xc=podman, I get back to my earlier (EE) xf86OpenConsole: Cannot open virtual console 8 (Permission denied) error in the log, even though now using sudo.

I am not sure yet if I understand right.
Do you have x11docker/xserver now if you check sudo podman images? If not, and if Xorg is not installed on host, x11docker will find no Xorg that it could start with sudo x11docker [...].
The special example sudo x11docker --backend=podman --rootless --xorg [...] runs a rootful podman for x11docker/xserver, but a rootless podman for the desired container.

Or maybe I misunderstand you? Please show me your commands and the resulting error messages.

Note that a wayland-based x11docker desktop such as x11docker/kde-plasma does no better. I have seen that Fedora Silverblue and Kinoite (the KDE variety of the same family) run wayland as the user, not as root. So that suggests the possibility of a container doing so on CoreOS. But, using x11docker/kde-plasma as the target desktop container did not change matters - I again get the same Cannot open virtual console 8... error in the log file.

It doesn't matter which image you use because x11docker sets up Wayland or X before the container is started.
Running X or Wayland from the console always needs some sort of privileges. Aside from the obvious sudo, there are possibilities with suid X or some obscure logind privilege setups. So even if your process list shows X or Wayland running as an unprivileged user, something in the background gave some privileges to the process.
I admit that I don't understand all the ways privileges can be granted.

@mviereck
Owner

mviereck commented Aug 21, 2022

BTW: if you want to get CoreOS up and running in VirtualBox quickly, I can help with that. It's not your typical install-from-ISO-as-CD distro.

Thank you for the offer! I'll try to reproduce in my regular fedora VM first.
But very likely it is just the issue that Xorg in container does not run with rootless podman. x11docker should catch that case and print a message instead of producing an error.

@jonleivent
Author

Running podman images as user, I have: x11docker/xserver, x11docker/fluxbox, x11docker/openbox, and x11docker/kde-plasma. But, sudo podman images sees no images, as they're all in the user's container storage, not root's.

Running in the x11docker git clone directory:

sudo ./x11docker --backend=podman --rootless --xorg --desktop x11docker/fluxbox

This produces

x11docker ERROR: Did not find a possibility to provide a display.
...

error message. I can't copy-n-paste it or transfer it out of the VirtualBox VM easily with CoreOS, because no VirtualBox guest extensions are present. If you need the whole output and/or log file, I will work on a transfer ability.

If I instead run

sudo ./x11docker --backend=podman --rootless --xc=podman --desktop x11/fluxbox

that's when I get

(EE) xf86OpenConsole: Cannot open virtual console 8 (Permission denied)

in the ~/.cache/x11docker/x11docker.log file

If I need other privileges to run rootless, I can worry about getting them later.

@mviereck
Owner

mviereck commented Aug 21, 2022

I can't copy-n-paste it or transfer it out of the VirtualBox VM easily with CoreOS, because no VirtualBox guest extensions are present. If you need the whole output and/or log file, I will work on a transfer ability.

Thank you! I don't need the full text, I just needed to sort the commands and their error messages.

The first error is correct:

Running in the x11docker git clone directory:

sudo ./x11docker --backend=podman --rootless --xorg --desktop x11docker/fluxbox

This produces

x11docker ERROR: Did not find a possibility to provide a display.

Please provide x11docker/xserver in rootful podman. Then this should work. So run sudo podman pull x11docker/xserver or build the reduced Dockerfile example above with sudo podman build -t x11docker/xserver [...].

Your second command sudo ./x11docker --backend=podman --rootless --xc=podman --desktop x11/fluxbox should have produced the same error, but it did not. This is likely an x11docker bug. It seems that it does not try to use rootful podman for the Xorg container although it should. I'll check this.

@jonleivent
Author

I did a podman pull x11docker/xserver as root, and now things are much closer to working. I am getting an X server with a gray background and mouse tracking, but no fluxbox desktop in it (which would have a root menu and a toolbar, by default). Also, no way to exit, except by ACPI shutdown. This with the sudo ./x11docker --backend=podman --rootless --xorg --desktop x11docker/fluxbox command. Same thing for x11docker/kde-plasma.

@jonleivent
Author

I'll bet it isn't finding x11docker/fluxbox or any other desktop the user has pulled. Maybe I should run two x11docker instances, one as root for the server the other as user for the desktop? Is there a way to do that?

@jonleivent
Author

Surprisingly, the fluxbox desktop appeared after a very long delay. I didn't need to start a separate x11docker instance as I thought, just waiting. Hopefully this long delay is a one-time issue.

@mviereck
Owner

I've tried to reproduce the issue, but now have the unrelated problem that my root partition does not have enough space left to pull image x11docker/xserver in rootful podman.

If you can somehow provide me the log file ~/.cache/x11docker/x11docker.log, I might find a hint about what is blocking the process.

Also, no way to exit, except by ACPI shutdown.

At least it should be possible to switch to another console with CTRL+ALT+F(n).

Maybe I should run two x11docker instances, one as root for the server the other as user for the desktop? Is there a way to do that?

That is basically possible, but should not be needed.

Surprisingly, the fluxbox desktop appeared after a very long delay. I didn't need to start a separate x11docker instance as I thought, just waiting. Hopefully this long delay is a one-time issue.

You could specify another tty with option --vt, e.g. --vt=8, and switch back to your current tty (check it with tty) with CTRL+ALT+F(n). Then you can read the x11docker terminal output. If you add --debug, this might give some useful hints.
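For example, a sketch combining the options from this thread:

sudo x11docker --backend=podman --rootless --xorg --desktop --vt=8 --debug x11docker/fluxbox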


It is late here, and I am tired. I'll look at this tomorrow again.

@jonleivent
Author

I have both fluxbox and openbox working, but not kde-plasma. It looks like the x11docker/fluxbox and x11docker/openbox containers have no X client apps, such as xterm, in them. Of course they are designed to be used as base layers for other containers, so I will do that.

I noticed that on distros where I can start X as non-root (via the startx -> xinit route), they have Xorg.wrap, a setuid wrapper for doing just that. But a more secure scheme with just the necessary capabilities is probably possible: some setcap'ed way of launching the x11docker script as the user instead of root.

@mviereck
Owner

mviereck commented Aug 22, 2022

I have both fluxbox and openbox working

Do you still have the long startup delay? It can as well be a podman issue that I once had, too. It can be solved temporarily with podman system prune to clean up the podman storage.

but not kde-plasma

kde-plasma needs --init=systemd. Did you set this option?

Also add option --desktop for desktop environments, otherwise x11docker will run a window manager that might cause issues.

But a more secure scheme with just the necessary capabilities is probably possible: some setcap'ed way of launching the x11docker script as the user instead of root.

The ideal way would be to be able to run Xorg with rootless podman. We might give it a try again and ask the podman developers for help. x11docker already gives the x11docker/xserver container the bare minimum of capabilities needed to run Xorg.
One guess: Xorg in rootless podman might fail because podman's root in container is not root on host so Xorg.wrap fails.

@jonleivent
Author

I tried podman system prune, and it did prune some things. I also pulled the x11docker/mate image to try a mid-sized X11 desktop instead of the large wayland kde-plasma desktop, to see if I can get anything else beyond fluxbox and openbox. However, I'm getting mysterious errors with mate that suggest a podman bug - Go backtraces, but only after a very long delay, even after having done podman system prune. From the looks of things, it is something called 'exe' running as the user, either within the desktop container or launched by podman for the desktop container. The behavior is strange: it isn't doing much but very slowly reading and writing disk - despite the disk being a fast SSD, the rates are 500K read/sec and 2M write/sec, with very low cpu usage. So, WTF is that?

I will keep --init=systemd in mind with kde-plasma, but all indications are that I never got close to that problem due to podman issues prior to that. I always have the --desktop option on.

I don't think Xorg.wrap would fail to run just because it is in a rootless container. But it would fail to deliver the necessary capabilities, which the rootless container didn't inherit from the user. I think that the necessary capabilities have to be delivered from the outside-in starting with the x11docker script itself. However, I know only enough about capabilities to know I don't know enough about capabilities :( But I do think this can be fixed externally to podman, assuming podman doesn't intentionally drop excess capabilities it has inherited when run rootless.
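For concreteness, the setcap'ed route would look something like this (purely a hypothetical illustration, untested; the Xorg path varies by distro, and the capability names are taken from the set x11docker grants the Xorg container, listed later in this thread):

sudo setcap cap_sys_tty_config,cap_dac_override,cap_kill+ep /usr/lib/xorg/Xorg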

@jonleivent
Author

And mate is finally running! After over an hour of that exe process doing whatever.

@mviereck
Owner

mviereck commented Aug 22, 2022

Do you have a way to send me ~/.cache/x11docker/x11docker.log?
A useful logfile would be one captured after you have terminated a container that had such a long startup delay.
I would like to find out what causes the delay.
I've built the Dockerfile above, reduced to Xorg, as x11docker/xserver in rootful podman. Here, a startup of x11docker/fvwm is pretty fast.

From the looks of things, it is something called 'exe' running as the user, either within the desktop container or launched by podman for the desktop container. The behavior is strange: it isn't doing much but very slowly reading and writing disk - despite the disk being a fast SSD, the rates are 500K read/sec and 2M write/sec, with very low cpu usage. So, WTF is that?

x11docker doesn't run anything called exe. However, at some points it waits for one task of x11docker to write something to a logfile so that another task of x11docker can continue.

@jonleivent
Author

Once they finally start up the first time, each desktop has pretty fast (~10sec) startup times after that. So I will have to try a new one. I will try fvwm and send you the log if it is slow. If that isn't slow, I will try XFCE or LXDE.

I figured out a way to get files out of a vbox without guest additions by using a USB drive. I would rather have a way to mount something rw in the vbox that is ro outside, and there's probably a way to do that, but the USB drive trick works without further thought, so it wins.

I think that exe app must be part of podman itself. It isn't just waiting and writing small things to a log file. It's writing at a steady 2M/sec rate for over an hour, yet not growing the vbox disk image substantially over that time, hence not appending most of that to a log. I've seen what your logging does when it waits for that event - probably only writing at a few bytes/sec rate, and that would be pure appending.

@mviereck
Owner

mviereck commented Aug 22, 2022

Once they finally start up the first time, each desktop has pretty fast (~10sec) startup times after that. So I will have to try a new one. I will try fvwm and send you the log if it is slow. If that isn't slow, I will try XFCE or LXDE.

You mean, once you have waited a long time for the first startup, later startups with x11docker are fast?
That sounds pretty much like a podman issue, not an x11docker issue. However, to be sure it makes sense to check the log file.
It might be worth comparing with rootful podman. For example, run:

sudo podman pull x11docker/fvwm
sudo x11docker --backend=podman --desktop x11docker/fvwm

@jonleivent
Author

fvwm was fast as user. Nothing interesting in the log, but I've saved it in case it may prove useful by comparison to something slow.

You mean, once you have waited a long time for the first startup, later startups with x11docker are fast?

Yes. On to XFCE as user. If that's slow, I'll try it as root. Actually, I will pull it as root, then podman image scp it to the user, so I don't waste network bandwidth.

Also, if it is slow, I will try pstree to determine the provenance of that exe process.
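For reference, the root-to-user copy can be done like this (x11docker/xfce as an illustrative image name):

sudo podman pull x11docker/xfce
sudo podman image scp root@localhost::x11docker/xfce $USER@localhost::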

@mviereck
Owner

Just an idea: Maybe podman is pulling the image again instead of using the one you have pulled before?
x11docker sets option --pull=never so this should not happen.

@jonleivent
Author

Maybe podman is pulling the image again instead of using the one you have pulled before?

There's no network activity, either. I just had to forcibly stop the vm that was trying to start XFCE - I was logged in on another console as root spying on it with pstree and other things, and something I did caused the vm to go crazy. But, I did see the exe process has as its only arg an image overlay file name, so it must be part of podman.

I'll take a look at the log file...

@jonleivent
Author

No log file. So I restarted XFCE, and will be a bit more careful while spying on it.

Doing ls -l /proc/3744/exe, where 3744 is the pid of that exe process, shows that it is /usr/bin/podman itself, run under a different name (probably conditionalizing on argv[0]).

@mviereck
Owner

mviereck commented Aug 30, 2022

It looks like other parts of the script are denying the possibility.

Sorry, I should have checked it better before.

I have added a new unofficial option --experimental that allows me to add some experimental code.
For now it allows Xorg in rootless containers and also adds --pid=host to Xorg containers.
We can add further experimental code this way.

Now you should be able to run rootless --xc=podman --xorg without an x11docker error, though still with Xorg errors. As a shortcut, it is enough to type --exp instead of --experimental.
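A full invocation would then look like this (a sketch, using the images from earlier in this thread):

./x11docker --exp --backend=podman --xc=podman --xorg --desktop x11docker/fluxbox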

@jonleivent
Author

Still similar problems. Log file attached...
fluxbox-x11docker.log

@mviereck
Owner

This time you get an Xorg error, same as me now:

(EE) 
Fatal server error:
(EE) xf86OpenConsole: Cannot open virtual console 7 (Permission denied)
(EE) 
(EE) 
Please consult the The X.Org Foundation support 
	 at http://wiki.x.org
 for help. 
(EE) Please also check the log file at "/var/log/Xorg.101.log" for additional information.
(EE) 
(WW) xf86CloseConsole: KDSETMODE failed: Bad file descriptor
(WW) xf86CloseConsole: VT_GETMODE failed: Bad file descriptor
(EE) Server terminated with error (1). Closing log file.

At least x11docker does not forbid running Xorg in a rootless container. That was the intention of my changes above. On this basis, further experimental code might be added for tests with Xorg in rootless podman.

BTW, to see all and only the Xorg messages, you can use the unofficial option --verbose=xinit.

@jonleivent
Author

On Alpine, where --unshare-all (which includes --unshare-pid) works when bwrapping Xorg rootless, the Xorg version is newer than on Debian, where --unshare-pid does not work. Also, Debian had loaded one extra extension: SELinux. But using Xorg's -extension arg to remove that had no impact on the inability to --unshare-pid.

The Xorg versions are:
Debian (10.12): Xorg is 1.20.4, with pixman 0.36.0
Alpine: Xorg is 1.21.1.4, with pixman 0.40.0

I have a feeling that the Alpine developers changed something (hence that last extra '.4') and compiled Xorg themselves, considering the other ways Alpine differs from almost every other distro by relying on busybox and musl. What that might mean is that if you want rootless capability in more places, it might be worth the effort to create an Alpine version of x11docker/xserver. That's just a theory. At the very least it might enable --unshare-pid.

@mviereck
Owner

As a first quick attempt I've built an x11docker/xserver image based on debian bookworm instead of debian bullseye. Debian bookworm has the same Xorg and pixman versions as alpine.
The startup in rootless podman failed, but shows different and unexpected errors. So some sort of progress!
The key error message is (EE) open /dev/dri/card0: Permission denied although container and host user are in groups video and render.

I'll give an alpine based image a try. It might have the nice side effect of creating a smaller image overall.
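A hypothetical starting point for such an alpine-based image (untested; package names from Alpine's repositories, mirroring the reduced Debian Dockerfile above):

FROM alpine:3.16

RUN apk add --no-cache \
        xorg-server \
        xf86-input-libinput \
        xinit \
        xauth \
        openbox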

@jonleivent
Author

Did you try it without unsharing pid?

@mviereck
Owner

mviereck commented Aug 30, 2022

Did you try it without unsharing pid?

No; using less namespacing would only ease things, not break them.

I've run some tests with an alpine-based image, but I am running into the same error message (EE) open /dev/dri/card0: Permission denied.

Edit:
Now I've tried a brute-force chmod 666 /dev/dri/card0 and could bypass the error.
New errors:

(EE) NOUVEAU(0): [drm] failed to set drm interface version.
(EE) NOUVEAU(0): [drm] error opening the drm
(EE) NOUVEAU(0): 910: 
(EE) Screen(s) found, but none have a usable configuration.
(EE) 
Fatal server error:
(EE) no screens found(EE) 

@jonleivent
Author

No; using less namespacing would only ease things, not break them.

True. But do we know yet whether the problem is due to the pid namespace, groups, or even if there is something in /dev that should have been mounted? I know the error messages say certain things, but I wonder if Xorg tries several different things and takes the first success; if none succeed, it only reports the final failure. Meaning that a possible successful route might not involve fixing what that error message implies is the culprit.

@mviereck
Owner

mviereck commented Aug 30, 2022

I know the error messages say certain things, but I wonder if Xorg tries several different things and takes the first success; if none succeed, it only reports the final failure. Meaning that a possible successful route might not involve fixing what that error message implies is the culprit.

Basically we can compare with a successful startup of Xorg in rootful podman. The Xorg log in rootless podman should be identical once all issues are solved.

But do we know yet whether the problem is due to the pid namespace, groups, or even if there is something in /dev that should have been mounted?

The list of shared devices is valid for rootful podman, so it must be valid for rootless podman as well.
As far as I can see, the only essential difference is user (and group?) namespacing.
podman has an option --userns, but this one does not take argument =host. It takes an argument =keep-id that maps the container user to the same uid as on host, but changes the uid of container root. (By default and without option --user, the container root uid 0 maps to host user uid.)
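The mapping difference is easy to demonstrate (a quick check, assuming any small image such as alpine):

podman run --rm alpine id -u                  # prints 0: container root maps to the host user
podman run --rm --userns=keep-id alpine id -u # prints the host uid, e.g. 1000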


Currently I assume your attempt with bubblewrap is the way to go. It might be just impossible to run Xorg in rootless podman.

@jonleivent
Author

Currently I assume your attempt with bubblewrap is the way to go.

Maybe so. I'd still like to unshare pid everywhere. Maybe bubblewrap with an Alpine chroot on Debian is worth a try to see where the problem is. Also, I like your idea of bundling several window servers together: Xorg, Xephyr, Wayland, XWayland, Xpra, etc. all in the same chroot. Maybe as a flatpak, which uses bubblewrap.

@jonleivent
Author

On Alpine, even though I can bwrap Xorg with --unshare-pid, I get a strange behavior I don't on Debian. The bwrapped Xorg doesn't prevent keyboard input from reaching the tty it was launched from. In other words: from tty1, launch Xorg in a sandbox (either on tty1 or some other, it doesn't matter) and start some window manager (fluxbox in my case). Start up xterm in the window manager desktop. Type the date command and note its output. Exit the xterm by killing its window and exit the window manager through its root menu exit choice. The X screen will disappear and you'll be back at the original tty, but you will see the date command has been entered into the tty, and see that its output is more recent than the date output you saw in xterm. All input to Xorg was seen there (by the xterm) AND also sent to the original tty and buffered until the end of the Xorg session. This doesn't happen without the sandbox. I haven't determined what part of the sandbox is responsible, but it is not due to --unshare-pid.

@mviereck
Owner

mviereck commented Sep 3, 2022

All input to Xorg was seen there (by the xterm) AND also sent to the original tty and buffered until the end of the Xorg session.

That's really weird. At first glance I cannot see how this could happen, or why using namespacing would make a difference here.
If Xorg took no input at all, I could imagine the keypresses being buffered. But once Xorg takes a keypress, the buffer should be empty. Maybe an issue at the kernel level? Or somewhere between Xorg and the kernel?

I'd still like to unshare pid everywhere.

In general that should be possible. x11docker/xserver containers all run with all namespaces enabled, and with all capabilities dropped. Only for Xorg does x11docker set a few more privileges:

  • userns not possible as seen above
  • device files in /dev/dri and /dev/input needed
  • current /dev/ttyN, also specified in the Xorg command with argument vtN
  • groups video and render (I did not set input). (For a setuid Xorg the groups are likely not needed at all)
  • capabilities SYS_TTY_CONFIG (mandatory for Xorg), also DAC_OVERRIDE and KILL (due to x11docker needs)
  • file /run/udev/data
  • currently x11docker also shares /var/run/dbus, but likely not needed

In the --debug output of x11docker you can check this in the shown X container command.
Example:

  docker run --pull=never \
  --detach \
  --name x11docker_X103_xserver_820555238598 \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/share,target=/home/lauscher/.cache/x11docker/820555238598-wine/share \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/etcpasswd.xcontainer,target=/etc/passwd,readonly \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/etcgroup.xcontainer,target=/etc/group,readonly \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/xcontainerrc,target=/xcontainerrc,readonly \
  --rm \
  --security-opt label=type:container_runtime_t \
  --network=none \
  --ipc=shareable \
  --runtime runc \
  --cap-drop ALL \
  --user 1000:1000 \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/tmp,target=/tmp \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/Xauthority.server,target=/home/lauscher/.cache/x11docker/820555238598-wine/Xauthority.server \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/modelines,target=/home/lauscher/.cache/x11docker/modelines,readonly \
  --env DISPLAY=:0.0 \
  --mount type=bind,source=/tmp/.X11-unix/X0,target=/X0,readonly \
  --env XAUTHORITY=/home/lauscher/.cache/x11docker/820555238598-wine/Xauthority.host.0-0 \
  --mount type=bind,source=/home/lauscher/.cache/x11docker/820555238598-wine/Xauthority.host.0-0,target=/home/lauscher/.cache/x11docker/820555238598-wine/Xauthority.host.0-0  \
  --device /dev/dri/card0:/dev/dri/card0 \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  --device /dev/vga_arbiter:/dev/vga_arbiter \
  --group-add 44 \
  --group-add 133 \
  --cap-add SYS_TTY_CONFIG \
  --cap-add DAC_OVERRIDE \
  --cap-add KILL \
  --mount type=bind,source=/var/run/dbus,target=/var/run/dbus \
  --mount type=bind,source=/run/udev/data,target=/run/udev/data,readonly \
  --device=/dev/tty8 \
  --device=/dev/input/event0 \
  --device=/dev/input/event1 \
  --device=/dev/input/event10 \
  --device=/dev/input/event11 \
  --device=/dev/input/event12 \
  --device=/dev/input/event13 \
  --device=/dev/input/event14 \
  --device=/dev/input/event15 \
  --device=/dev/input/event16 \
  --device=/dev/input/event17 \
  --device=/dev/input/event18 \
  --device=/dev/input/event2 \
  --device=/dev/input/event3 \
  --device=/dev/input/event4 \
  --device=/dev/input/event5 \
  --device=/dev/input/event6 \
  --device=/dev/input/event7 \
  --device=/dev/input/event8 \
  --device=/dev/input/event9 \
  --device=/dev/input/mice \
  --device=/dev/input/mouse0 \
  --device=/dev/input/mouse1 \
  x11docker/xserver bash /xcontainerrc

@jonleivent
Copy link
Author

If Xorg took no input at all, I could imagine the keypresses being buffered. But once Xorg takes a keypress, the buffer should be empty. Maybe an issue at the kernel level? Or somewhere between Xorg and the kernel?

The buffering at the tty is the normal buffering when keypresses occur during a previous command execution. In this case, that previous command is my startx-like script launching Xorg and the window manager. If I background that with &, the buffering disappears and the keypresses get simultaneously processed both within Xorg and at the original tty. It is the dual direction of input that is weird, not the buffering. To make it even weirder: once Xorg and the window manager are started, if I use the host-F# key sequence (this being a virtualbox VM) to switch ttys, I don't see the switch occur because the Xorg screen stays visible, but it does occur as any subsequent keyboard input is directed to the tty I switched to (as well as Xorg) instead of the original tty. I think that the bwrap sandbox is preventing some interaction between Xorg and the kernel that is supposed to tell the kernel not to direct any input to whatever tty is currently attached to the seat - but only on Alpine.
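One way to probe this theory (a suggestion, assuming the kbd package's kbd_mode tool is available): Xorg normally switches the console keyboard out of text mode, which is what stops keypresses from reaching the tty line discipline. From outside the sandbox, while the bwrapped Xorg is running:

kbd_mode -C /dev/tty1   # raw or off mode is expected; a text mode like UTF-8 would explain the leak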

@jonleivent
Author

I have lately been trying to sandbox weston-launch, and found that, like Xorg, it does not like unsharing the pid namespace. In addition, it does not like unsharing the cgroups namespace, either. This is on Debian.

However, non-top-level weston (backed by either Xorg or wayland) and Xwayland sandbox very well with all namespaces unshared. They also do not lose much performance when sandboxed. A fully sandboxed weston/Xwayland combination is faster than an unsandboxed Xephyr or Xpra, at least in my benchmarks. I am using hardinfo -b "GPU Drawing" -m benchmark.so -a to benchmark.

mviereck added a commit that referenced this issue Sep 22, 2022
@mviereck
Owner

Sorry that I did not respond for a long time!
I am still quite interested in your experimental setups.
I will try out some setups with bubblewrap, with containers created but not started by podman. It might take some time until I really get to it, though.

@jonleivent
Author

Sorry that I did not respond for a long time!
I am still quite interested in your experimental setups.

Ok, but should I keep posting to this issue?

My goal is a portable sandboxed desktop, including sandboxed display server, that is agnostic about window managers, and that can run on minimal distros that are more secure than typical (because immutable like CoreOS, small attack surface like Alpine, etc.). The target audience is single-login desktop metal users, not Docker users. Although there is overlap, I would guess that most of your audience cares more about the ability to run GUI apps from containers without lowering the security on those containers, and not so much about whether the display server (X or Wayland) is itself in a sandbox.

So, there is no need to apologize about not responding timely! You have already paid more attention to this issue than I anticipated, and I am grateful.

@mviereck
Owner

mviereck commented Sep 29, 2022

Ok, but should I keep posting to this issue?

Yes, please!

Although there is overlap, I would guess that most of your audience cares more about the ability to run GUI apps from containers without lowering the security on those containers, and not so much about whether the display server (X or Wayland) is itself in a sandbox.

You are right. Running X or Wayland in a container adds some isolation, but still needs so many privileges that only a small improvement in security is possible.
Some people prefer X servers in a container to keep the host clean.

My goal is a portable sandboxed desktop, including sandboxed display server, that is agnostic about window managers, and that can run on minimal distros that are more secure than typical

I am not sure what you mean by "agnostic of window managers".

@jonleivent
Author

I am not sure what you mean by "agnostic of window managers".

No part of the sandbox setup or underlying distro depends on the user's choice of window manager and server pair. The user should be able to choose dwm on Xorg or KDE on Wayland, or pretty much anything in between, and switch easily without installing anything in the underlying distro (especially if it is immutable). This should be an easy thing to accomplish: package WM or DE as flatpak or docker/podman container, as you do. I add it as a goal to rule out any choices that are specific to some window managers or desktop environments but not others, even though I don't anticipate any.

I like what Fedora did with Silverblue and Kinoite, but I think they missed an opportunity by not moving the whole DE and display server into one or more flatpaks or podman containers. The user has to choose the DE they want at installation, cannot easily switch, and the OS and DE upgrade together - so these are a counterexample to the agnostic goal above. But, Fedora also has CoreOS without any WM/DE - so that was my initial target, CoreOS with WM/DEs and servers in flatpaks or podman containers.

@mviereck mviereck changed the title waiting very long for "x11docker=ready" & Xorg in rootless podman waiting very long for "x11docker=ready" | Xorg in rootless podman | bubblewrap setups Oct 3, 2022
@digitalsignalperson

digitalsignalperson commented Jul 28, 2023

$ cat ~/.xserverrc
#!/bin/sh
exec /usr/bin/bwrap --dev-bind / / --cap-drop ALL --unshare-all -- /usr/libexec/Xorg -nolisten tcp "$@"

I was experimenting to see if it's possible to run wayland contained with bwrap in a similar way.

Initially trying this in a VT with non-root user:

bwrap --dev-bind / / --bind ~/bwhome /home/$USER --cap-drop ALL --unshare-all -- /bin/bash

In this bash environment I can see the home folder is the host contents of ~/bwhome, the real home is hidden, and I can confirm there are no network capabilities (ip addr only shows loopback and I can't ping the internet). But if I change /bin/bash to e.g. /bin/startplasma-wayland, or just run startplasma-wayland from the bwrapped bash, the resulting session seems to have full access to the original home folder, and all capabilities including internet etc.

I narrowed it down to the session escaping if it has a --ro-bind "$XDG_RUNTIME_DIR/bus" "$XDG_RUNTIME_DIR/bus", but without that I can't get a session to start. ...Maybe I would need to fully contain systemd inside bwrap? Might be possible with bwrap-oci https://projectatomic.io/blog/2017/07/unprivileged-containers-with-bwrap-oci-and-bubblewrap/

@jonleivent
Author

@digitalsignalperson
I have had considerable success recently running labwc within a bwrap sandbox, with clients and desktop components (including the taskbar I use with it, sfwbar) individually in separate bwrap sandboxes. Labwc does not require dbus, and I don't use a single session dbus with it. If some client needs dbus, I place that client in a bwrap sandbox along with its own session dbus (using dbus-run-session). I gave up on KDE/plasma because the interaction between desktop components is too complex for me to figure out how to effectively securitize components. I imagine GNOME would be similar, although I have not tried it. The labwc project focuses instead on building a simple wayland compositor in the openbox style, meaning very simple configuration and very few runtime interdependencies. It is much easier to tame with respect to sandboxing and other kinds of securitizing (I use AppArmor as well).
Very recently, I have been communicating with labwc's developers about how to get even finer grained security through limiting parts of the wayland protocol within different sandboxes: see labwc/labwc#1002. So that certain sandboxed apps cannot spy on the clipboard, or take screenshots, or know what other windows exist, while other utilities can.

@digitalsignalperson

@jonleivent thanks for the links, I hadn't heard of labwc. It's surprising how dbus gives access to everything, and the list of wayland/wlroots protocols that can allow for monitoring/messing with many things (as described in the labwc discussion & with wl_monitor.py).

Do you have examples/snippets of your bwrap scripts you could share? E.g. how to run a sandboxed labwc, or how to run another client with dbus-run-session in a sandbox.

For isolating Xorg in general, some ideas here might be interesting/relevant:

@jonleivent
Author

@digitalsignalperson

Do you have examples/snippets of your bwrap scripts you could share? E.g. how to run a sandboxed labwc, or how to run another client with dbus-run-session in a sandbox.

For labwc, the only device nodes that need to be bound into the bwrap sandbox are /dev/dri and /dev/input (at least on my old intel-only laptop). You will also need to grant write access to XDG_RUNTIME_DIR so that labwc can create its socket there. It also needs read access to /var/cache/fontconfig. Also, it needs a writable ~/.cache - you can either do that with a tmpfs or have its own cache persistently kept somewhere and bound in with --bind.
Something like:

bwrap --ro-bind / / --dev /dev --proc /proc --dev-bind /dev/dri /dev/dri --dev-bind /dev/input /dev/input \
--bind $XDG_RUNTIME_DIR $XDG_RUNTIME_DIR --tmpfs $HOME/.cache \
--tmpfs /tmp --tmpfs /var/tmp --tmpfs /var/cache --ro-bind /var/cache/fontconfig /var/cache/fontconfig \
--unshare-all --new-session -- labwc

It's certainly possible to block out much more from the sandbox, but I haven't investigated that yet. I do have a labwc AppArmor profile that probably makes blocking out other things from the sandbox unnecessary. I also have a seccomp bpf for it, which is slightly more permissive than the seccomp bpf I use for everything else. I haven't tuned those tightly yet either.

Of course, if you want to be able to launch apps from labwc's menu, you'll need to allow that somehow. I do that with a background process running external to the sandbox that listens on a named pipe for requests to launch apps, and then only launches apps that are in a specific write-protected folder just for it - all scripts that filter the arguments passed in and start other sandboxes. The named pipe is then bound read-only (--ro-bind) into the bwrap sandbox (and others, like the one I use for the sfwbar taskbar). The Exec rules in my labwc menu config file write to that named pipe. Think of this as a safe alternative to dbus, where you control the entire protocol. You can also configure your ~/.local/share/applications/*.desktop files so that their Execs write to that named pipe, so that other launchers pick those up.
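A minimal sketch of such a launcher daemon (hypothetical paths; the real scripts additionally filter their arguments):

#!/bin/sh
# runs outside any sandbox; sandboxes see only the pipe, bound read-only
PIPE=/run/user/1000/launch.fifo
LAUNCHERS=/opt/launchers              # write-protected folder of vetted scripts
[ -p "$PIPE" ] || mkfifo -m 622 "$PIPE"
while read -r name args < "$PIPE"; do # reopening the fifo blocks until a writer appears
    case "$name" in
        ''|*/*) continue ;;           # reject empty names and path traversal
    esac
    [ -x "$LAUNCHERS/$name" ] && "$LAUNCHERS/$name" $args &
done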

The dbus-run-session is easy. Take atril, the pdf reader. It doesn't run without a session dbus. So I put it into a bwrap sandbox as bwrap .... -- dbus-run-session atril. You can tighten this up by limiting what each such dbus can do using its config file.

Interesting links. I once tried a multi-VM configuration, and found it was a huge consumer of resources. I don't have the hardware necessary to use that effectively. Also, my security requirements are probably lower than a typical Qubes-phile's.

@digitalsignalperson

digitalsignalperson commented Jul 29, 2023

@jonleivent thanks for all those details. I notice with that bwrap command binding $XDG_RUNTIME_DIR, it's possible to escape the sandbox through dbus, even if labwc doesn't use dbus. I ran that command with -- labwc -s kitty to start in a sandboxed terminal, then I ran d-feet and launched an unrestricted terminal with org.kde.krunner (ofc mileage may vary depending on what dbus services are available).

If I don't bind XDG_RUNTIME_DIR, it can't start labwc (or any other wayland compositor I've tried). labwc in particular:

00:00:00.000 [ERROR] [libseat] [libseat/backend/logind.c:642] Could not check if session was active: No such device or address
00:00:00.000 [ERROR] [libseat] [libseat/libseat.c:79] No backend was able to open a seat
00:00:00.000 [ERROR] [backend/session/session.c:83] Unable to create seat: Function not implemented
00:00:00.000 [ERROR] [backend/session/session.c:248] Failed to load session backend
00:00:00.000 [ERROR] [backend/backend.c:86] Failed to start a session
00:00:00.000 [ERROR] [backend/backend.c:357] Failed to start a DRM session
00:00:00.000 [ERROR] [../labwc-0.6.4/src/server.c:248] unable to create backend

If I do bind XDG_RUNTIME_DIR and monitor with dbus-monitor --system, I can see stuff related to org.freedesktop.login1.Session & SetType "wayland", so I'm thinking dbus may be involved in getting the seat. Trying the same with Hyprland, I see that same indicator in dbus-monitor, and the stdout also says [libseat] [libseat/libseat.c:73] Seat opened with backend 'logind'.

Ideas for approaches to get a seat without binding XDG_RUNTIME_DIR or without unrestricted dbus access:

  • use seatd and bind /run/seatd.sock into the sandbox
  • proxy/filter the dbus and only allow a limited list of destinations like org.freedesktop.login1.Session (I saw bubblejail uses XDG D-Bus Proxy for similar filtering capability)
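A hedged sketch of the second idea, using xdg-dbus-proxy (untested; socket paths vary by distro):

# run a filtering proxy in front of the system bus, allowing only login1
xdg-dbus-proxy unix:path=/run/dbus/system_bus_socket /tmp/filtered-bus \
    --filter --talk=org.freedesktop.login1 &
# then bind the filtered socket into the sandbox in place of the real one:
# bwrap ... --bind /tmp/filtered-bus /run/dbus/system_bus_socket ...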

For X11, I don't have any luck with the bwrap commands. I can use the same command as above for labwc, but with startx: first I get "Cannot open virtual console 1 (Permission denied)". If I open that up with chmod, I get "Switching VT failed". If I try startx with a different VT like -- vt3 (ensuring no getty is running on it, and that I have been given access to it), I get

xf86EnableIO: failed to enable I/O ports 0000-03ff (Operation not permitted)
vesa: Refusing to run, Framebuffer or dri device present
(EE) 
Fatal server error:
(EE) no screens found(EE) 
(EE) 

Of course, if you want to be able to launch apps from labwc's menu, you'll need to allow that somehow. I do that with a background process running external to the sandbox that listens on a named pipe for requests to launch apps, and then only launches apps that are in a specific write-protected folder just for it - all scripts that filter the arguments passed in and start other sandboxes. The named pipe is then bound read-only (--ro-bind) into the bwrap sandbox (and others, like the one I use for the sfwbar taskbar). The Exec rules in my labwc menu config file write to that named pipe. Think of this as a safe alternative to dbus, where you control the entire protocol. You can also configure your ~/.local/share/applications/*.desktop files so that their Execs write to that named pipe, so that other launchers pick those up.

I'm curious about this. Do I read this as: you aren't running apps that are bound into the sandbox in /bin, but purposefully running apps outside the sandbox?

@jonleivent
Author

jonleivent commented Jul 29, 2023

@digitalsignalperson

I notice with that bwrap command binding $XDG_RUNTIME_DIR, it's possible to escape the sandbox through dbus, even if labwc doesn't use dbus.

That doesn't happen in my case because I don't run a session dbus. In other words, I have patched the login startup for that user so it doesn't auto-launch the session dbus. I did this by masking the systemd unit for that user, so that no systemd services run for that user. There is nothing in that user's $XDG_RUNTIME_DIR initially.

You can always bind something other than $XDG_RUNTIME_DIR into the sandbox as $XDG_RUNTIME_DIR, like:
--bind /tmp/my-wayland-socket-dir $XDG_RUNTIME_DIR, where you create /tmp/my-wayland-socket-dir beforehand. Then bind the socket that gets created there into the client sandboxes.

I'm curious about this. Do I read this as you aren't running apps that are binded into the sandbox in /bin, but purposefully running apps outside the sandbox?

All of my apps, with just one exception (a terminal), run in their own private sandboxes. Obviously, you can't start a bwrap sandbox directly from within an existing bwrap sandbox. Instead, you need to have something outside any sandbox that can start new sandboxes - a background process (daemon). When I raise labwc's primary menu, I have entries on it to, for instance, run firefox. I want that to run in its own sandbox which has entirely different permissions than the one that labwc runs in. In fact, labwc's sandbox has no network connection, so firefox wouldn't run there. So instead of directly exec'ing firefox, labwc's menu writes a command to run firefox out on a named pipe that the daemon is monitoring. The daemon reads that and launches a script that runs firefox in its own sandbox. The scripts that the daemon is allowed to run (17 so far) are very restricted, so that no sandbox can use them to escape in a bad way (such as running arbitrary code outside any sandbox). I may convert this later to using a unix socket instead of a named pipe (if for instance I need bidirectional communication, or the ability to pass fds), but for now the named pipe works well and is very easy to set up.

@digitalsignalperson

@jonleivent thanks for explaining & sharing some details of your unique setup. It would be interesting to play with it in a VM and poke around a working demo; it sounds like good material for a blog post, too.

Obviously, you can't start a bwrap sandbox directly from within an existing bwrap sandbox

Is nested bwrap not possible (PR landed here containers/bubblewrap#164) or is that a design decision for your setup?

Still curious about getting the seat backend part for wayland to work, and also getting sandboxed x11 to work. I'm on an arch linux setup. Reading above about this working on alpine, I managed to get a working POC in a VM:

lxc launch images:alpine/3.18 alp --vm -c limits.cpu=4 -c limits.memory=4GiB --console=vga -c security.secureboot=false

# from another terminal set root password
lxc exec alp -- /bin/ash
passwd

# log in as root in the vga console
setup-xorg-base
apk add openbox xterm font-terminus
apk add bubblewrap
adduser person
addgroup person input
addgroup person video
exit

# login as person

# run X without sandbox - works
startx /usr/bin/openbox-session

# with sandbox - also works
bwrap --ro-bind / / --dev /dev --proc /proc --dev-bind /dev/dri /dev/dri --dev-bind /dev/input /dev/input \
--tmpfs /tmp --tmpfs /var/tmp --tmpfs /var/cache --ro-bind /var/cache/fontconfig /var/cache/fontconfig \
--tmpfs $HOME --unshare-all --new-session -- startx /usr/bin/openbox-session

Did you also get it to work with debian? I cannot:

lxc launch images:debian/buster bust --vm -c limits.cpu=4 -c limits.memory=4GiB --console=vga -c security.secureboot=false

# from another terminal set root password
lxc exec bust -- /bin/bash
passwd

# log in as root in the vga console
apt install openbox xinit
apt install xterm
apt install bubblewrap
adduser person
addgroup person input
addgroup person video
addgroup person sudo
exit

# login as person

# run X without sandbox - works
startx /usr/bin/openbox-session

bwrap --ro-bind / / --dev /dev --proc /proc --dev-bind /dev/dri /dev/dri --dev-bind /dev/input /dev/input \
--tmpfs /tmp --tmpfs /var/tmp --tmpfs /var/cache --ro-bind /var/cache/fontconfig /var/cache/fontconfig \
--tmpfs $HOME --unshare-all --new-session -- startx /usr/bin/openbox-session
# cannot open /dev/tty0
# debian is weird, startx seems to be non-blocking

sudo chown person:tty /dev/tty2
bwrap --ro-bind / / --dev /dev --proc /proc --dev-bind /dev/tty2 /dev/tty2 --dev-bind /dev/dri /dev/dri --dev-bind /dev/input /dev/input \
--tmpfs /tmp --tmpfs /var/tmp --tmpfs /var/cache --ro-bind /var/cache/fontconfig /var/cache/fontconfig \
--tmpfs $HOME --unshare-all --new-session -- startx /usr/bin/openbox-session -- vt2
# xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)

@jonleivent
Author

@digitalsignalperson

Is nested bwrap not possible (PR landed here containers/bubblewrap#164) or is that a design decision for your setup?

Even if it is possible, the inner bwrap can't get more capabilities than the outer one. As I mentioned, my labwc sandbox has no network connection. If I started firefox in a nested bwrap inside that, it wouldn't have a network connection either.

Also, I disable unprivileged (actually, all) user namespaces, which makes nesting impossible: bwrap always sets no-new-privileges, and running bwrap without unprivileged user namespaces needs setuid, which is disabled under no-new-privileges.

And, something to remember about sandboxes: they prevent things from getting out, not in. With nested sandboxes, the inner apps are not protected from the outer ones. The outer ones can do things like use the magic links in /proc/PID/fd of an inner app. Or ptrace it (actually, I have Yama turned up so ptrace doesn't work except for root - but if I didn't...!).

I abandoned my alpine setup a while ago. It was flaky in strange ways. The tty console always received input even if that input was intended for X11 when I had X11 bwrapped. I would start X11 in a bwrap and have fluxbox running, then start an xterm in that and type into the xterm. When I exited X11, I would see that input I typed in the tty console. Alpine is just not like other nixes. I'm on Debian 12, and will probably stay there for a while.

I have also been entirely focused on wayland because my experiments with using nested X11 servers (like Xephyr and Xpra) to isolate keyboard/mouse/clipboard/etc from apps were disappointing. It worked, but was difficult to manage and lost graphics performance.
