Couldn't install distrobox container first time at phase 3 #28

Open
freemin7 opened this issue Feb 29, 2024 · 9 comments

@freemin7

Hardware

https://linux-hardware.org/?probe=7837a817bf

Installation

ALVR Version:
commit 6936653

NAME="Rocky Linux"
VERSION="8.7 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.7"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.7 (Green Obsidian)"

setup.log

@Meister1593
Collaborator

Meister1593 commented Feb 29, 2024

Hello
From what I can tell, there could be two issues:
1. SELinux permission errors (Fedora probably had this handled by default, but on Rocky? Probably not). Try temporarily disabling it (making it permissive) for the duration of the install and see if that works; see the sketch below.
2. The script being run from somewhere under a symbolic link (unsure if it's actually an issue, I just vaguely remember similar reports, so it could very well be unrelated). Are the scripts installed somewhere under a symbolic link?
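
For reference, a minimal sketch of checking and temporarily relaxing SELinux (the setenforce change only lasts until reboot, so no permanent config edit is needed):

# Check the current SELinux mode (Enforcing / Permissive / Disabled)
getenforce
# Switch to permissive until the next reboot or "setenforce 1"
sudo setenforce 0
# ...run the setup scripts...
# Restore enforcing mode afterwards
sudo setenforce 1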

@freemin7
Author

I have no idea what the symbolic link could be about. It's in ~/ALVR-Distrobox-Linux-Guide.
I've disabled SELinux for now and will rerun it. Also, I switched from NVIDIA to AMD.

@Meister1593
Collaborator

Meister1593 commented Feb 29, 2024

A symbolic link as in something you create with the ln command, potentially spanning different disks, which can break distrobox.
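
A quick way to check whether a directory sits behind a symlink anywhere in its path (a generic sketch, using the path from this thread):

# If the resolved path differs from the literal one, some component
# of the path is a symbolic link (readlink -f resolves every link)
echo "$HOME/ALVR-Distrobox-Linux-Guide"
readlink -f "$HOME/ALVR-Distrobox-Linux-Guide"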

@Meister1593
Collaborator

As far as I can see, your path is relative to the home folder and looks fine, so yeah, try running without SELinux enforcing for now.

@freemin7
Author

freemin7 commented Feb 29, 2024

setup.log
Is there a way to have a more verbose log?

I just noticed that during uninstall it says:

Missing dependency: we need a container manager.
Please install one of podman, docker or lilipod.

Trying my distro's podman for now. Maybe that's an error setup-phase-1-2.sh should catch?
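
Something like this near the top of setup-phase-1-2.sh could catch it early (a hypothetical sketch; only the error wording is taken from the message above):

# Hypothetical pre-flight check mirroring the error distrobox prints
# when no container manager is found
found=""
for mgr in podman docker lilipod; do
    if command -v "$mgr" >/dev/null 2>&1; then
        found="$mgr"
        break
    fi
done
if [ -z "$found" ]; then
    echo "Missing dependency: we need a container manager." >&2
    echo "Please install one of podman, docker or lilipod." >&2
    exit 1
fi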

@Meister1593
Collaborator

Meister1593 commented Feb 29, 2024

Probably adding --verbose to

distrobox enter --name "$container_name" --additional-flags "--env XDG_CURRENT_DESKTOP=X-Generic --env prefix='$prefix' --env container_name='$container_name'" -- ./setup-phase-3.sh
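
That is, roughly this (assuming distrobox enter's --verbose flag; check distrobox enter --help on your version):

distrobox enter --verbose --name "$container_name" --additional-flags "--env XDG_CURRENT_DESKTOP=X-Generic --env prefix='$prefix' --env container_name='$container_name'" -- ./setup-phase-3.sh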

@Meister1593
Collaborator

Meister1593 commented Feb 29, 2024

> setup.log Is there a way to have a more verbose log?
>
> I just noticed that during uninstall it says:
>
> Missing dependency: we need a container manager.
> Please install one of podman, docker or lilipod.
>
> Trying my distro's podman for now. Maybe that's an error setup-phase-1-2.sh should catch?

That's probably an issue with the uninstall script rather than this one; this setup doesn't use podman at all anymore.

@freemin7
Author

freemin7 commented Feb 29, 2024

Oof, maybe I should put the logs here, not on the Discord server ...

 Checking dependencies...
 Downloading...
 Unpacking...
 Installation successful!
 Shell scripts are located in /home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/bin
 Manpages are located in /home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/share/man/man1
 : Phase 2
amd
Trying to pull docker.io/library/archlinux:latest...
Getting image source signatures
Copying blob sha256:9a82a64c3a8439c75d8e584181427b073712afd1454747bec3dcb84bcbe19ac5
Copying blob sha256:45f82ee8a39c5c15c641d1f420c193622b3d6e32716c90d7bf663111d1bedf2f
Copying config sha256:69f38d8f6347d027696923f4bfad86a036c5d9e67d717a7354d5f9216ea0bbd5
Writing manifest to image destination
69f38d8f6347d027696923f4bfad86a036c5d9e67d717a7354d5f9216ea0bbd5
Creating 'arch-alvr' using image docker.io/library/archlinux:latest	Error: statfs /etc/pki/entitlement: no such file or directory
 [ ERR ]
pipewire
+ '[' -z arch-alvr ']'
+ '[' '!' -t 0 ']'
+ '[' '!' -t 1 ']'
+ headless=1
+ case "${container_manager}" in
+ command -v podman
+ container_manager=podman
+ command -v podman
+ '[' 1 -ne 0 ']'
+ container_manager='podman --log-level debug'
+ '[' 0 -ne 0 ']'
+ container_home=/home/joto
+ container_path=/home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/bin:/usr/local/ve/llvm-vec-2.3.0/bin:/usr/local/ve/llvm-vec-2.3.0/bin:/home/joto/.juliaup/bin:/home/joto/.cargo/bin:/usr/local/ve/llvm-vec-2.3.0/bin:/home/joto/.local/bin:/home/joto/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
+ '[' 0 -ne 0 ']'
+ container_status=unknown
++ podman --log-level debug inspect --type container --format 'container_status={{.State.Status}};
	{{range .Config.Env}}{{if slice . 0 5 | eq "HOME="}}container_home={{slice . 5 | printf "%q"}};{{end}}{{end}}
	{{range .Config.Env}}{{if slice . 0 5 | eq "PATH="}}container_path={{slice . 5 | printf "%q"}}{{end}}{{end}}' arch-alvr
time="2024-02-29T18:22:11+01:00" level=info msg="podman filtering at log level debug"
time="2024-02-29T18:22:11+01:00" level=debug msg="Called inspect.PersistentPreRunE(podman --log-level debug inspect --type container --format container_status={{.State.Status}};\n\t{{range .Config.Env}}{{if slice . 0 5 | eq \"HOME=\"}}container_home={{slice . 5 | printf \"%q\"}};{{end}}{{end}}\n\t{{range .Config.Env}}{{if slice . 0 5 | eq \"PATH=\"}}container_path={{slice . 5 | printf \"%q\"}}{{end}}{{end}} arch-alvr)"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2024-02-29T18:22:11+01:00" level=debug msg="Initializing boltdb state at /home/joto/.local/share/containers/storage/libpod/bolt_state.db"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using graph driver overlay"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using graph root /home/joto/.local/share/containers/storage"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using run root /run/user/1000/containers"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using static dir /home/joto/.local/share/containers/storage/libpod"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using volume path /home/joto/.local/share/containers/storage/volumes"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using transient store: false"
time="2024-02-29T18:22:11+01:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
time="2024-02-29T18:22:11+01:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-02-29T18:22:11+01:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-02-29T18:22:11+01:00" level=debug msg="Cached value indicated that metacopy is not being used"
time="2024-02-29T18:22:11+01:00" level=debug msg="Cached value indicated that native-diff is usable"
time="2024-02-29T18:22:11+01:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
time="2024-02-29T18:22:11+01:00" level=debug msg="Initializing event backend file"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime crun initialization failed: no valid executable found for OCI runtime crun: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2024-02-29T18:22:11+01:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
time="2024-02-29T18:22:11+01:00" level=info msg="Setting parallel job count to 97"
Error: no such container arch-alvr
time="2024-02-29T18:22:11+01:00" level=debug msg="Shutting down engines"
+ eval ''
+ '[' unknown = unknown ']'
+ '[' 0 -eq 1 ']'
+ printf 'Create it now, out of image %s? [Y/n]: ' registry.fedoraproject.org/fedora-toolbox:38
Create it now, out of image registry.fedoraproject.org/fedora-toolbox:38? [Y/n]: + read -r response

Trying again without podman installed:

 : Installing distrobox
 Checking dependencies...
 Downloading...
 Unpacking...
 Installation successful!
 Shell scripts are located in /home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/bin
 Manpages are located in /home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/share/man/man1
 : Phase 2
amd
pulling image manifest: index.docker.io/library/archlinux:latest
pulling layer 9a82a64c3a8439c75d8e584181427b073712afd1454747bec3dcb84bcbe19ac5.tar.gz
layer 9a82a64c3a8439c75d8e584181427b073712afd1454747bec3dcb84bcbe19ac5.tar.gz already exists, skipping
pulling layer 45f82ee8a39c5c15c641d1f420c193622b3d6e32716c90d7bf663111d1bedf2f.tar.gz
layer 45f82ee8a39c5c15c641d1f420c193622b3d6e32716c90d7bf663111d1bedf2f.tar.gz already exists, skipping
saving manifest for index.docker.io/library/archlinux:latest
saving config for index.docker.io/library/archlinux:latest
saving metadata for index.docker.io/library/archlinux:latest
done
bff6ec3f9f1dfc505287643d8c56a967
Creating 'arch-alvr' using image docker.io/library/archlinux:latest	 [ OK ]
Distrobox 'arch-alvr' successfully created.
To enter, run:

distrobox enter arch-alvr

pipewire
+ '[' -z arch-alvr ']'
+ '[' '!' -t 0 ']'
+ '[' '!' -t 1 ']'
+ headless=1
+ case "${container_manager}" in
+ command -v podman
+ command -v podman-launcher
+ command -v docker
+ command -v lilipod
+ container_manager=lilipod
+ command -v lilipod
+ '[' 1 -ne 0 ']'
+ container_manager='lilipod --log-level debug'
+ '[' 0 -ne 0 ']'
+ container_home=/home/joto
+ container_path=/home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/bin:/usr/local/ve/llvm-vec-2.3.0/bin:/usr/local/ve/llvm-vec-2.3.0/bin:/home/joto/.juliaup/bin:/home/joto/.cargo/bin:/usr/local/ve/llvm-vec-2.3.0/bin:/home/joto/.local/bin:/home/joto/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
+ '[' 0 -ne 0 ']'
+ container_status=unknown
++ lilipod --log-level debug inspect --type container --format 'container_status={{.State.Status}};
	{{range .Config.Env}}{{if slice . 0 5 | eq "HOME="}}container_home={{slice . 5 | printf "%q"}};{{end}}{{end}}
	{{range .Config.Env}}{{if slice . 0 5 | eq "PATH="}}container_path={{slice . 5 | printf "%q"}}{{end}}{{end}}' arch-alvr
+ eval 'container_status=stopped;
	container_home="/home/joto";container_home="/home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/arch-alvr";
	container_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"'
++ container_status=stopped
++ container_home=/home/joto
++ container_home=/home/joto/ALVR-Distrobox-Linux-Guide/installation-lilipod/arch-alvr
++ container_path=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ '[' stopped = unknown ']'
+ '[' stopped '!=' running ']'
+ lilipod --log-level debug start arch-alvr
proc_utils.go:24 [debug] ensuring we're either root or fake-root
proc_utils.go:64 [debug] executing [/proc/self/exe rootless-helper --log-level debug lilipod --log-level debug start arch-alvr]
proc_utils.go:83 [debug] tty not specified, using cmd.Start
proc_utils.go:92 [debug] tty not specified, waiting for child to start
proc_utils.go:95 [debug] tty not specified, releasing child
++ lilipod --log-level debug inspect --type container --format '{{.State.Status}}' arch-alvr
+ '[' stopped '!=' running ']'
+ printf '\033[31m Error: could not start entrypoint.\n\033[0m'
 Error: could not start entrypoint.
++ lilipod --log-level debug logs arch-alvr
+ container_manager_log=
+ printf '%s\n' ''

+ exit 1
+ cleanup
+ rm -f /home/joto/.cache/.arch-alvr.fifo
+ '[' -n '' ']'
+ '[' 1 -eq 1 ']'
+ lilipod --log-level debug logs arch-alvr
 : Couldn't install distrobox container first time at phase 3, please report setup.log to https://github.com/alvr-org/ALVR-Distrobox-Linux-Guide/issues.

So the setup might use podman? Should it?
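
As the trace above shows, distrobox probes for podman, podman-launcher, docker and lilipod in that order and takes the first one it finds. If it should stick to lilipod even when podman is installed, distrobox's documented DBX_CONTAINER_MANAGER environment variable can pin it (a sketch; whether the setup scripts should set this is the open question here):

# Pin distrobox to lilipod regardless of what else is installed
DBX_CONTAINER_MANAGER=lilipod distrobox enter arch-alvr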

@freemin7
Author

freemin7 commented Feb 29, 2024

If I say yes with podman installed, this happens:

setup.log (Yes my internet is slow AF)

Not sure if Error: statfs /etc/pki/entitlement: no such file or directory is relevant; I'll google it a bit.
Hmm, that's Red Hat DRM.
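
If it's the known podman behavior on RHEL-likes, where the default mounts configuration references Red Hat entitlement paths that may not exist, a commonly reported workaround (an assumption, not verified on this machine) is to create the missing directory:

# Reported workaround for "statfs /etc/pki/entitlement: no such file
# or directory" (assumes podman's default mounts.conf references it)
sudo mkdir -p /etc/pki/entitlement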
