Unable to use Docker in the Ubuntu template because of AppArmor Permission denied #28

Closed
tristan-k opened this issue Feb 5, 2022 · 33 comments
Labels
🧐 Not a Script Issue

Comments

@tristan-k

tristan-k commented Feb 5, 2022

I followed the official Docker install instructions, but I'm unable to use Docker because of an AppArmor error. I'm not sure whether this is related to your template or to Proxmox 7.1 itself, but it would be nice to have a Docker-specific LXC template if any further configuration is necessary. I think this might be related to the fact that the LXC container is privileged.

apt-get update
apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
docker run hello-world
docker: Error response from daemon: AppArmor enabled on system but the docker-default profile could not be loaded: running `/usr/sbin/apparmor_parser apparmor_parser -Kr /var/lib/docker/tmp/docker-default327733349` failed with output: apparmor_parser: Unable to replace "docker-default".  Permission denied; attempted to load a profile while confined?

error: exit status 243.

Removing AppArmor with apt remove apparmor works around the issue, but that doesn't seem like a good idea.

@tteck
Owner

tteck commented Feb 5, 2022

From a fresh Ubuntu 21.10 LXC install, run sh <(curl -sSL https://get.docker.com) in the LXC console

The below needs to be added to the LXC .conf:

#lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop: 

From the Proxmox shell, run (replacing 106 with the LXC ID): nano /etc/pve/lxc/106.conf

@tristan-k
Author

I just did. There is essentially no difference between installing Docker with the script and installing it manually; the AppArmor error is still there.

$ lxc-attach -n 110
root@ubuntu:~# sh <(curl -sSL https://get.docker.com)
# Executing docker install script, commit: 93d2499759296ac1f9c510605fef85052a2c32be
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl gnupg >/dev/null
+ sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /usr/share/keyrings/docker-archive-keyring.gpg
+ sh -c echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu impish stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends  docker-ce-cli docker-scan-plugin docker-ce >/dev/null
+ version_gte 20.10
+ [ -z  ]
+ return 0
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null
+ sh -c docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:33 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

...

root@ubuntu:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:507ecde44b8eb741278274653120c2bf793b174c06ff4eaa672b713b3263477b
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: AppArmor enabled on system but the docker-default profile could not be loaded: running `/usr/sbin/apparmor_parser apparmor_parser -Kr /var/lib/docker/tmp/docker-default008678533` failed with output: apparmor_parser: Unable to replace "docker-default".  Permission denied; attempted to load a profile while confined?

error: exit status 243.

@tteck
Owner

tteck commented Feb 5, 2022

The below needs to be added to the LXC .conf:

#lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop: 

From the Proxmox shell, run (replacing 106 with the LXC ID): nano /etc/pve/lxc/106.conf
Save and exit the editor with “Ctrl+O”, “Enter” and “Ctrl+X”.
Then reboot the LXC.
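
(For reference, a minimal sketch of the same edit done non-interactively from the Proxmox host shell; container ID 106 is just the example from above:)

# Append the two config lines (run as root on the Proxmox host).
cat <<'EOF' >> /etc/pve/lxc/106.conf
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
EOF
# Restart the container so the change takes effect.
pct reboot 106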

@tristan-k
Author

tristan-k commented Feb 5, 2022

lxc.apparmor.profile=unconfined makes your container run without apparmor confinement; that doesn’t, however, mean that profiles cannot be loaded and used by it, nor that existing apparmor profiles on the host cannot apply to it.

That’s why that option is so terrible: it effectively allows the container to mess with apparmor profiles on the host and any host apparmor profile to randomly apply to container processes.

You should stay away from the lxc.apparmor.profile=unconfined and instead use raw.apparmor to allow anything which gets blocked by the stock profile.

Quote from here.

From a security standpoint this seems like a bad choice. There are other ways to keep access to privileged features like /dev/dri.
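
(For illustration only, a hedged sketch of what keeping confinement might look like as Proxmox LXC .conf lines, using standard LXC 3.x keys instead of the unconfined profile; specific lxc.apparmor.raw allow-lines could additionally be appended if something Docker needs is still blocked:)

# A sketch, not a drop-in fix: keep the generated AppArmor profile and let it nest
# (Proxmox's "features: nesting=1" achieves something similar).
lxc.apparmor.profile: generated
lxc.apparmor.allow_nesting: 1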

@tteck
Owner

tteck commented Feb 5, 2022

Then OMIT that line!

@tteck tteck added the 🧐 Not a Script Issue label Feb 5, 2022
@tristan-k
Author

Then OMIT that line!

That is no solution to the problem.

@tteck
Owner

tteck commented Feb 5, 2022

What solution/problem? That line can be omitted and Docker will run fine. I'm guessing you haven't tried.
This is a Google search issue, not a script issue.

@tteck tteck closed this as completed Feb 5, 2022
@tristan-k
Author

I've tried, and the AppArmor error is still there.

@tteck
Owner

tteck commented Feb 5, 2022

Did you reboot the LXC after the .conf change?

@tristan-k
Author

Yes, I did.

@tteck
Owner

tteck commented Feb 5, 2022

I'm going to install an Ubuntu 21.10 LXC with the above method, omitting the AppArmor line. I'll post my findings.

@tristan-k
Author

Great, thanks!

@tteck
Owner

tteck commented Feb 5, 2022

As I suspected, it works just fine.

root@ubuntu:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete 
Digest: sha256:507ecde44b8eb741278274653120c2bf793b174c06ff4eaa672b713b3263477b
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

@tristan-k
Author

Which Proxmox version are you running?

@tteck
Owner

tteck commented Feb 5, 2022

Version 7.1-10

@tristan-k
Author

That is strange. I'm on the same version. Can you post your .conf for the LXC?

@tteck
Owner

tteck commented Feb 5, 2022

arch: amd64
cores: 2
features: nesting=1
hostname: ubuntu
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=BA:97:7B:37:FC:66,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-109-disk-0,size=6G
swap: 512
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

@tteck
Owner

tteck commented Feb 5, 2022

I have a feeling that you have borked up your LXC.
I'd suggest removing that one and installing a fresh Ubuntu 21.10.

@tristan-k
Author

tristan-k commented Feb 5, 2022

I don't think so, but I might know the cause: you are running it on an LVM volume and I'm running it on a zpool. I can't verify this because all my Proxmox machines are ZFS-based, but this could be the reason. I've read in the past that there are issues with running Docker on a zpool with Proxmox. I was under the impression that this only applied to running on the hypervisor, not inside an LXC container.

$ cat /etc/pve/local/lxc/108.conf
arch: amd64
cores: 1
features: nesting=1
hostname: ubuntu
memory: 512
net0: name=eth0,bridge=vmbr0,hwaddr=F6:1C:C7:EF:4B:25,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: dpool:subvol-108-disk-0,size=2G
swap: 512
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

@tteck
Owner

tteck commented Feb 5, 2022

Try this: install the Home Assistant Container LXC, which runs Docker. If a ZFS filesystem is detected, it will automatically set up static fuse-overlayfs.
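
(For context, a minimal sketch of what such a setup amounts to on the Docker side, assuming a static fuse-overlayfs binary is already installed; the script's actual steps may differ:)

# Point Docker at the fuse-overlayfs storage driver instead of overlay2,
# which does not work on a plain ZFS-backed rootfs.
mkdir -p /etc/docker
cat <<'EOF' > /etc/docker/daemon.json
{ "storage-driver": "fuse-overlayfs" }
EOF
systemctl restart docker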

@tristan-k
Author

tristan-k commented Feb 5, 2022

I just did. Yes, it's using static fuse-overlayfs. What exactly does this prove? Please help me out here.

bash -c "$(wget -qLO - https://raw.githubusercontent.com/tteck/Proxmox/main/ct/ha_container.sh)"
This will create a New Home Assistant Container LXC. Proceed(y/n)?y
[INFO] Using 'dpool' for storage location.
[INFO] Container ID is 109.
✔  Updating LXC Template List...
✔  Downloading LXC Template...
✔  Creating LXC Container...
[WARNING] Some containers may not work properly due to ZFS not supporting 'fallocate'.
✔  Starting LXC Container...
[INFO] Using fuse-overlayfs.
✔  Setting up Container OS...
✔  Network Connected:  192.168.1.194
✔  Updating Container OS...
✔  Installing Dependencies...
✔  Customizing Docker...
✔  Installing Docker...
✔  Pulling Portainer Image...
✔  Installing Portainer...
✔  Pulling Home Assistant Image...
✔  Installing Home Assistant...
✔  Customizing LXC...
✔  Cleanup...
[INFO] Successfully Created Home Assistant Container LXC to 109.

@tteck
Owner

tteck commented Feb 5, 2022

now run docker run hello-world

@tristan-k
Author

now run docker run hello-world

It works, but my original issue still persists, because I'm in a situation where I need an Ubuntu 21.10-based LXC container with access to /dev/dri on a zpool.
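
(For reference, /dev/dri is usually passed into a Proxmox LXC with a pair of .conf lines like the following; major 226 is the DRM device class. A sketch, not taken from this thread:)

lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir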

@tteck
Owner

tteck commented Feb 5, 2022

Ubuntu is Debian-based. Why just Ubuntu?

@tteck
Owner

tteck commented Feb 5, 2022

Explain what you're trying to do...

@tristan-k
Author

tristan-k commented Feb 5, 2022

Essentially because I had some major issues with hardware encoding/decoding on Debian in the past, and Ubuntu is much more recent. The endgame is to run a GamesOnWhales Docker setup for low-latency desktop streaming on a headless Proxmox. If you want to provide an LXC template for that, I would be thrilled to help; or a general Docker LXC based on Ubuntu, for that matter.

@tteck
Owner

tteck commented Feb 5, 2022

Try this

bash -c "$(wget -qLO - https://raw.githubusercontent.com/tteck/Proxmox/dev/ct/ubuntu_container.sh)"

That is a gamer's Docker setup with hardware acceleration already added

@tristan-k
Author

tristan-k commented Feb 5, 2022

Thanks! I pulled it. Docker runs without the aforementioned AppArmor error, so the original issue is closed.

There are some other things, though, regarding the dependencies of the GamesOnWhales Docker setup. It needs a docker-compose binary and access to /dev/uinput, which apparently isn't in the LXC container. Sadly, the Docker containers fail to start properly, with D-Bus errors. I'm not sure if this is caused by other missing dependencies that are not documented in the GoW wiki.
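
(As an aside, one way to get a standalone docker-compose binary at the time was the v1 release artifact; version 1.29.2, the final v1 release, is used here purely as an illustration:)

# Download the standalone docker-compose v1 binary and make it executable.
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version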

root@GamesonWhale:~/gow# docker-compose up
Creating network "gow_default" with the default driver
Creating volume "gow_xorg" with default driver
Creating volume "gow_pulse" with default driver
Creating volume "gow_udev" with default driver
Creating gow_udevd_1 ... done
Creating gow_pulse_1 ... done
Creating gow_xorg_1  ... done
Creating gow_sunshine_1 ... done
Creating gow_retroarch_1 ... done
Attaching to gow_udevd_1, gow_xorg_1, gow_pulse_1, gow_sunshine_1, gow_retroarch_1
pulse_1      | Sat, 05 Feb 2022 19:52:24 +0000: /startup.sh: Starting pulseaudio
pulse_1      | W: [pulseaudio] main.c: This program is not intended to be run as root (unless --system is specified).
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to open cookie file '/home/retro/.config/pulse/cookie': No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to load authentication key '/home/retro/.config/pulse/cookie': No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to open cookie file '/home/retro/.pulse-cookie': No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to load authentication key '/home/retro/.pulse-cookie': No such file or directory
pulse_1      | W: [pulseaudio] server-lookup.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
pulse_1      | W: [pulseaudio] main.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
sunshine_1   | Sat, 05 Feb 2022 19:52:25 +0000: /startup.sh: Waiting for X Server :99 to be available
xorg_1       | Not modifying user groups ()
xorg_1       | Starting Xorg (:99)
xorg_1       |
xorg_1       | X.Org X Server 1.20.11
xorg_1       | X Protocol Version 11, Revision 0
xorg_1       | Build Operating System: linux Ubuntu
xorg_1       | Current Operating System: Linux GamesonWhale 5.11.22-7-pve #1 SMP PVE 5.11.22-12 (Sun, 07 Nov 2021 21:46:36 +0100) x86_64
xorg_1       | Kernel command line: initrd=\EFI\proxmox\5.11.22-7-pve\initrd.img-5.11.22-7-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt pci=realloc,assign-busses pcie_acs_override=downstream,multifunction kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 vfio_iommu_type1.allow_unsafe_interrupts=1
xorg_1       | Build Date: 06 July 2021  10:17:51AM
xorg_1       | xorg-server 2:1.20.11-1ubuntu1.1 (For technical support please see http://www.ubuntu.com/support)
xorg_1       | Current version of pixman: 0.40.0
xorg_1       | 	Before reporting problems, check http://wiki.x.org
xorg_1       | 	to make sure that you have the latest version.
xorg_1       | Markers: (--) probed, (**) from config file, (==) default setting,
xorg_1       | 	(++) from command line, (!!) notice, (II) informational,
xorg_1       | 	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
xorg_1       | (==) Log file: "/var/log/Xorg.99.log", Time: Sat Feb  5 19:52:24 2022
xorg_1       | (==) Using system config directory "/usr/share/X11/xorg.conf.d"
xorg_1       | (EE)
xorg_1       | Fatal server error:
xorg_1       | (EE) xf86OpenConsole: Cannot open virtual console 1 (No such file or directory)
xorg_1       | (EE)
xorg_1       | (EE)
xorg_1       | Please consult the The X.Org Foundation support
xorg_1       | 	 at http://wiki.x.org
xorg_1       |  for help.
udevd_1      | monitor will print the received events for:
udevd_1      | UDEV - the event which udev sends out after rule processing
udevd_1      | KERNEL - the kernel uevent
udevd_1      |
xorg_1       | (EE) Please also check the log file at "/var/log/Xorg.99.log" for additional information.
xorg_1       | (EE)
xorg_1       | (EE) Server terminated with error (1). Closing log file.
xorg_1       | error: could not open display
udevd_1      | KERNEL[15638.813521] remove   /devices/virtual/net/vethe0e4d2e (net)
udevd_1      | KERNEL[15639.077188] add      /devices/virtual/bdi/0:75 (bdi)
udevd_1      | KERNEL[15639.151181] remove   /devices/virtual/bdi/0:75 (bdi)
udevd_1      | KERNEL[15639.182402] add      /devices/virtual/bdi/0:75 (bdi)
udevd_1      | KERNEL[15639.189634] remove   /devices/virtual/bdi/0:75 (bdi)
udevd_1      | KERNEL[15639.257988] add      /devices/virtual/bdi/0:75 (bdi)
retroarch_1  | Sat, 05 Feb 2022 19:52:26 +0000: /startup.sh: Waiting for X Server :99 to be available
udevd_1      | KERNEL[15639.263007] add      /devices/virtual/net/vethc245b5c (net)
udevd_1      | KERNEL[15639.263064] add      /devices/virtual/net/vethc245b5c/queues/rx-0 (queues)
udevd_1      | KERNEL[15639.263083] add      /devices/virtual/net/vethc245b5c/queues/tx-0 (queues)
udevd_1      | KERNEL[15639.263109] add      /devices/virtual/net/veth8897c8c (net)
udevd_1      | KERNEL[15639.263127] add      /devices/virtual/net/veth8897c8c/queues/rx-0 (queues)
udevd_1      | KERNEL[15639.263145] add      /devices/virtual/net/veth8897c8c/queues/tx-0 (queues)
udevd_1      | KERNEL[15640.468364] remove   /devices/virtual/net/vethc245b5c (net)
udevd_1      | KERNEL[15640.708644] add      /devices/virtual/bdi/0:80 (bdi)
udevd_1      | KERNEL[15640.731404] remove   /devices/virtual/bdi/0:80 (bdi)
udevd_1      | KERNEL[15640.753513] add      /devices/virtual/bdi/0:80 (bdi)
udevd_1      | KERNEL[15640.761959] remove   /devices/virtual/bdi/0:80 (bdi)
udevd_1      | KERNEL[15640.792132] add      /devices/virtual/bdi/0:80 (bdi)

@tteck
Owner

tteck commented Feb 5, 2022

Add this to the .conf:

lxc.mount.entry: /dev/uinput dev/uinput none bind,optional,create=dir
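
(A hedged variant, in case the bind target should be a device node rather than a directory: /dev/uinput is a character device, conventionally misc major 10, minor 223.)

lxc.cgroup2.devices.allow: c 10:223 rwm
lxc.mount.entry: /dev/uinput dev/uinput none bind,optional,create=file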

@tristan-k
Author

Now /dev/uinput is accessible, but the X server still doesn't come up. I guess I'll have to take it to the GoW issue tracker as well, because I'm not sure which is the culprit.

ls -la /dev/uinput
total 0
drwxr-xr-x  2 root root  40 Feb  5 21:03 .
drwxr-xr-x 11 root root 620 Feb  5 21:03 ..
root@GamesonWhale:~/gow# docker-compose up
Starting gow_udevd_1 ... done
Starting gow_pulse_1 ... done
Starting gow_xorg_1  ... done
Starting gow_sunshine_1 ... done
Starting gow_retroarch_1 ... done
Attaching to gow_udevd_1, gow_xorg_1, gow_pulse_1, gow_sunshine_1, gow_retroarch_1
pulse_1      | Sat, 05 Feb 2022 20:03:57 +0000: /startup.sh: Starting pulseaudio
pulse_1      | W: [pulseaudio] main.c: This program is not intended to be run as root (unless --system is specified).
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | E: [pulseaudio] core-util.c: Failed to connect to system bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to open cookie file '/home/retro/.config/pulse/cookie': No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to load authentication key '/home/retro/.config/pulse/cookie': No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to open cookie file '/home/retro/.pulse-cookie': No such file or directory
pulse_1      | W: [pulseaudio] authkey.c: Failed to load authentication key '/home/retro/.pulse-cookie': No such file or directory
pulse_1      | W: [pulseaudio] server-lookup.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
pulse_1      | W: [pulseaudio] main.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
sunshine_1   | Sat, 05 Feb 2022 20:03:58 +0000: /startup.sh: Waiting for X Server :99 to be available
udevd_1      | monitor will print the received events for:
udevd_1      | UDEV - the event which udev sends out after rule processing
udevd_1      | KERNEL - the kernel uevent
udevd_1      |
udevd_1      | KERNEL[16332.096269] remove   /devices/virtual/net/veth7349d9c (net)
udevd_1      | KERNEL[16332.276989] add      /devices/virtual/bdi/0:89 (bdi)
udevd_1      | KERNEL[16332.282897] add      /devices/virtual/net/vethaff0569 (net)
udevd_1      | KERNEL[16332.283764] add      /devices/virtual/net/vethaff0569/queues/rx-0 (queues)
udevd_1      | KERNEL[16332.287658] add      /devices/virtual/net/vethaff0569/queues/tx-0 (queues)
udevd_1      | KERNEL[16332.288645] add      /devices/virtual/net/vetheb4ad13 (net)
udevd_1      | KERNEL[16332.289457] add      /devices/virtual/net/vetheb4ad13/queues/rx-0 (queues)
udevd_1      | KERNEL[16332.290162] add      /devices/virtual/net/vetheb4ad13/queues/tx-0 (queues)
udevd_1      | KERNEL[16333.155281] remove   /devices/virtual/net/vethaff0569 (net)
udevd_1      | KERNEL[16333.372665] add      /devices/virtual/bdi/0:94 (bdi)
xorg_1       | Not modifying user groups ()
xorg_1       | Starting Xorg (:99)
xorg_1       |
xorg_1       | X.Org X Server 1.20.11
xorg_1       | X Protocol Version 11, Revision 0
xorg_1       | Build Operating System: linux Ubuntu
xorg_1       | Current Operating System: Linux GamesonWhale 5.11.22-7-pve #1 SMP PVE 5.11.22-12 (Sun, 07 Nov 2021 21:46:36 +0100) x86_64
xorg_1       | Kernel command line: initrd=\EFI\proxmox\5.11.22-7-pve\initrd.img-5.11.22-7-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt pci=realloc,assign-busses pcie_acs_override=downstream,multifunction kvm.ignore_msrs=1 kvm.report_ignored_msrs=0 vfio_iommu_type1.allow_unsafe_interrupts=1
xorg_1       | Build Date: 06 July 2021  10:17:51AM
xorg_1       | xorg-server 2:1.20.11-1ubuntu1.1 (For technical support please see http://www.ubuntu.com/support)
xorg_1       | Current version of pixman: 0.40.0
xorg_1       | 	Before reporting problems, check http://wiki.x.org
xorg_1       | 	to make sure that you have the latest version.
xorg_1       | Markers: (--) probed, (**) from config file, (==) default setting,
xorg_1       | 	(++) from command line, (!!) notice, (II) informational,
xorg_1       | 	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
xorg_1       | (==) Log file: "/var/log/Xorg.99.log", Time: Sat Feb  5 20:03:57 2022
xorg_1       | (==) Using system config directory "/usr/share/X11/xorg.conf.d"
xorg_1       | (EE)
xorg_1       | Fatal server error:
xorg_1       | (EE) xf86OpenConsole: Cannot open virtual console 1 (No such file or directory)
xorg_1       | (EE)
xorg_1       | (EE)
xorg_1       | Please consult the The X.Org Foundation support
xorg_1       | 	 at http://wiki.x.org
xorg_1       |  for help.
xorg_1       | (EE) Please also check the log file at "/var/log/Xorg.99.log" for additional information.
xorg_1       | (EE)
xorg_1       | (EE) Server terminated with error (1). Closing log file.
xorg_1       | error: could not open display
retroarch_1  | Sat, 05 Feb 2022 20:03:58 +0000: /startup.sh: Waiting for X Server :99 to be available
xorg_1       | Sat, 05 Feb 2022 20:05:59 +0000: /usr/bin/wait-x11: FATAL: /usr/bin/wait-x11: Gave up waiting for X server :99
udevd_1      | KERNEL[16454.914108] remove   /devices/virtual/bdi/0:81 (bdi)
gow_xorg_1 exited with code 11
sunshine_1   | Sat, 05 Feb 2022 20:06:01 +0000: /usr/bin/wait-x11: FATAL: /usr/bin/wait-x11: Gave up waiting for X server :99
retroarch_1  | Sat, 05 Feb 2022 20:06:01 +0000: /usr/bin/wait-x11: FATAL: /usr/bin/wait-x11: Gave up waiting for X server :99
udevd_1      | KERNEL[16456.486489] remove   /devices/virtual/bdi/0:94 (bdi)
gow_retroarch_1 exited with code 11
udevd_1      | KERNEL[16456.835398] add      /devices/virtual/net/vethaff0569 (net)
udevd_1      | KERNEL[16456.835460] move     /devices/virtual/net/vethaff0569 (net)
udevd_1      | KERNEL[16456.860262] remove   /devices/virtual/net/vethaff0569/queues/rx-0 (queues)
udevd_1      | KERNEL[16456.860311] remove   /devices/virtual/net/vethaff0569/queues/tx-0 (queues)
udevd_1      | KERNEL[16456.860343] remove   /devices/virtual/net/vethaff0569 (net)
udevd_1      | KERNEL[16456.891256] remove   /devices/virtual/net/vetheb4ad13/queues/rx-0 (queues)
udevd_1      | KERNEL[16456.891315] remove   /devices/virtual/net/vetheb4ad13/queues/tx-0 (queues)
udevd_1      | KERNEL[16456.891346] remove   /devices/virtual/net/vetheb4ad13 (net)
udevd_1      | KERNEL[16456.934480] remove   /devices/virtual/bdi/0:89 (bdi)
gow_sunshine_1 exited with code 11

@tteck
Owner

tteck commented Feb 5, 2022

You may want to fork this repo before I remove the dev branch. Good luck with your GoW.

@tristan-k
Author

Thanks again for your help. I forked it.

