
Second invocation of privileged rootless podman under OpenShift SIGSEGVs #20766

Closed
adelton opened this issue Nov 23, 2023 · 6 comments · Fixed by #20769
Labels
kind/bug — Categorizes issue or PR as related to a bug.
locked - please file new issue/PR — Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@adelton
Contributor

adelton commented Nov 23, 2023

Issue Description

I follow https://www.redhat.com/sysadmin/podman-inside-kubernetes, running podman in an OpenShift cluster. The privileged case works fine, but the rootless variant with

     securityContext:
       privileged: true
       runAsUser: 1000

fails with SIGSEGV on the second invocation of podman.

Steps to reproduce the issue


  1. Log in with oc as a regular user. Check that oc whoami reports user.
  2. oc new-project podman-test
  3. Log in with oc as an admin user (user with cluster-admins group). Check that oc whoami reports admin.
  4. oc adm policy add-scc-to-user privileged -z default -n podman-test. We have to do this or the next oc apply step fails with
    Error from server (Forbidden): error when creating "STDIN": pods "podman-priv" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .containers[0].privileged: Invalid value: true: Privileged containers are not allowed, provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "pcap-dedicated-admins": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
    
  5. Log in with oc back as a regular user. Check that oc whoami reports user.
  6. $ oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
     name: podman-priv
    spec:
     containers:
       - name: priv
         image: quay.io/podman/stable
         args:
           - sleep
           - "1000000"
         securityContext:
           privileged: true
    <Ctrl-D>
    
  7. oc exec -it podman-priv -- podman info
  8. Again oc exec -it podman-priv -- podman info
  9. $ oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
     name: podman-rootless
    spec:
     containers:
       - name: rootless
         image: quay.io/podman/stable
         args:
           - sleep
           - "1000000"
         securityContext:
           privileged: true
           runAsUser: 1000
    <Ctrl-D>
    
  10. oc exec -it podman-rootless -- podman info
  11. Again oc exec -it podman-rootless -- podman info

Describe the results you received

The oc exec -it podman-priv -- podman info invocations work every time.

The oc exec -it podman-rootless -- podman info works the first time; the second time it prints

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x558b6cac00f3]

goroutine 1 [running]:
panic({0x558b6d3f32a0?, 0x558b6e1792b0?})
	/usr/lib/golang/src/runtime/panic.go:1017 +0x3ac fp=0xc00060f210 sp=0xc00060f160 pc=0x558b6bc7ad4c
runtime.panicmem(...)
	/usr/lib/golang/src/runtime/panic.go:261
runtime.sigpanic()
	/usr/lib/golang/src/runtime/signal_unix.go:861 +0x378 fp=0xc00060f270 sp=0xc00060f210 pc=0x558b6bc92738
github.com/containers/podman/v4/libpod.(*Runtime).hostInfo(0xc0004b0960)
	/builddir/build/BUILD/podman-4.7.2/libpod/info.go:125 +0x2f3 fp=0xc00060f7c8 sp=0xc00060f270 pc=0x558b6cac00f3
github.com/containers/podman/v4/libpod.(*Runtime).info(0xc0004b0960)
	/builddir/build/BUILD/podman-4.7.2/libpod/info.go:36 +0x1c5 fp=0xc00060fa68 sp=0xc00060f7c8 pc=0x558b6cabf785
github.com/containers/podman/v4/libpod.(*Runtime).Info(...)
	/builddir/build/BUILD/podman-4.7.2/libpod/runtime.go:896
github.com/containers/podman/v4/pkg/domain/infra/abi.(*ContainerEngine).Info(0xc000074078, {0x0?, 0x0?})
	/builddir/build/BUILD/podman-4.7.2/pkg/domain/infra/abi/system.go:28 +0x30 fp=0xc00060fb78 sp=0xc00060fa68 pc=0x558b6cbe6810
github.com/containers/podman/v4/cmd/podman/system.info(0x558b6e1af060?, {0x558b6e2ad1c0?, 0x0?, 0x0?})
	/builddir/build/BUILD/podman-4.7.2/cmd/podman/system/info.go:73 +0x73 fp=0xc00060fc30 sp=0xc00060fb78 pc=0x558b6cdf6ad3
github.com/spf13/cobra.(*Command).execute(0x558b6e1af060, {0xc0001c4090, 0x0, 0x0})
	/builddir/build/BUILD/podman-4.7.2/vendor/github.com/spf13/cobra/command.go:940 +0x87c fp=0xc00060fd68 sp=0xc00060fc30 pc=0x558b6c1d81bc
github.com/spf13/cobra.(*Command).ExecuteC(0x558b6e198620)
	/builddir/build/BUILD/podman-4.7.2/vendor/github.com/spf13/cobra/command.go:1068 +0x3a5 fp=0xc00060fe20 sp=0xc00060fd68 pc=0x558b6c1d89e5
github.com/spf13/cobra.(*Command).Execute(...)
	/builddir/build/BUILD/podman-4.7.2/vendor/github.com/spf13/cobra/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
	/builddir/build/BUILD/podman-4.7.2/vendor/github.com/spf13/cobra/command.go:985
main.Execute()
	/builddir/build/BUILD/podman-4.7.2/cmd/podman/root.go:114 +0xb8 fp=0xc00060fea8 sp=0xc00060fe20 pc=0x558b6ce0dc38
main.main()
	/builddir/build/BUILD/podman-4.7.2/cmd/podman/main.go:60 +0x467 fp=0xc00060ff40 sp=0xc00060fea8 pc=0x558b6ce0d367
runtime.main()
	/usr/lib/golang/src/runtime/proc.go:267 +0x2d2 fp=0xc00060ffe0 sp=0xc00060ff40 pc=0x558b6bc7dc12
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00060ffe8 sp=0xc00060ffe0 pc=0x558b6bcb1dc1

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000070fa8 sp=0xc000070f88 pc=0x558b6bc7e08e
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:404
runtime.forcegchelper()
	/usr/lib/golang/src/runtime/proc.go:322 +0xb8 fp=0xc000070fe0 sp=0xc000070fa8 pc=0x558b6bc7def8
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x558b6bcb1dc1
created by runtime.init.7 in goroutine 1
	/usr/lib/golang/src/runtime/proc.go:310 +0x1a

goroutine 18 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc00006c778 sp=0xc00006c758 pc=0x558b6bc7e08e
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
	/usr/lib/golang/src/runtime/mgcsweep.go:321 +0xdf fp=0xc00006c7c8 sp=0xc00006c778 pc=0x558b6bc6853f
runtime.gcenable.func1()
	/usr/lib/golang/src/runtime/mgc.go:200 +0x25 fp=0xc00006c7e0 sp=0xc00006c7c8 pc=0x558b6bc5d645
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006c7e8 sp=0xc00006c7e0 pc=0x558b6bcb1dc1
created by runtime.gcenable in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:200 +0x66

goroutine 19 [GC scavenge wait]:
runtime.gopark(0x1c687b?, 0x3b9aca00?, 0x0?, 0x0?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc00006cf70 sp=0xc00006cf50 pc=0x558b6bc7e08e
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x558b6e2769e0)
	/usr/lib/golang/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00006cfa0 sp=0xc00006cf70 pc=0x558b6bc65d89
runtime.bgscavenge(0x0?)
	/usr/lib/golang/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00006cfc8 sp=0xc00006cfa0 pc=0x558b6bc66339
runtime.gcenable.func2()
	/usr/lib/golang/src/runtime/mgc.go:201 +0x25 fp=0xc00006cfe0 sp=0xc00006cfc8 pc=0x558b6bc5d5e5
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006cfe8 sp=0xc00006cfe0 pc=0x558b6bcb1dc1
created by runtime.gcenable in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:201 +0xa5

goroutine 34 [finalizer wait]:
runtime.gopark(0x558b6d615cc0?, 0x16bc7f201?, 0x0?, 0x0?, 0x558b6bc862a5?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000070628 sp=0xc000070608 pc=0x558b6bc7e08e
runtime.runfinq()
	/usr/lib/golang/src/runtime/mfinal.go:193 +0x107 fp=0xc0000707e0 sp=0xc000070628 pc=0x558b6bc5c6c7
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000707e8 sp=0xc0000707e0 pc=0x558b6bcb1dc1
created by runtime.createfing in goroutine 1
	/usr/lib/golang/src/runtime/mfinal.go:163 +0x3d

goroutine 35 [GC worker (idle)]:
runtime.gopark(0x10400000000?, 0x2?, 0x0?, 0x0?, 0xc0004bec30?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000356750 sp=0xc000356730 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc0003567e0 sp=0xc000356750 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0003567e8 sp=0xc0003567e0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 3 [GC worker (idle)]:
runtime.gopark(0x7b395da6295?, 0x3?, 0xf7?, 0xad?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000071750 sp=0xc000071730 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000717e0 sp=0xc000071750 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000717e8 sp=0xc0000717e0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 4 [GC worker (idle)]:
runtime.gopark(0x7b395d952c3?, 0x3?, 0x71?, 0xe3?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000071f50 sp=0xc000071f30 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc000071fe0 sp=0xc000071f50 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 36 [GC worker (idle)]:
runtime.gopark(0x7b395da6e9b?, 0xc000433ab0?, 0x0?, 0x0?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000356f50 sp=0xc000356f30 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc000356fe0 sp=0xc000356f50 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000356fe8 sp=0xc000356fe0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 5 [GC worker (idle)]:
runtime.gopark(0x7b395da6b05?, 0x3?, 0x61?, 0x6?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000072750 sp=0xc000072730 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc0000727e0 sp=0xc000072750 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000727e8 sp=0xc0000727e0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 37 [GC worker (idle)]:
runtime.gopark(0x7b395d96343?, 0x1?, 0xda?, 0xbc?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000357750 sp=0xc000357730 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc0003577e0 sp=0xc000357750 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0003577e8 sp=0xc0003577e0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 6 [GC worker (idle)]:
runtime.gopark(0x7b395da93f7?, 0x3?, 0x2e?, 0xff?, 0x0?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000072f50 sp=0xc000072f30 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc000072fe0 sp=0xc000072f50 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 38 [GC worker (idle)]:
runtime.gopark(0x7b395a79fda?, 0x0?, 0x2?, 0x0?, 0x2?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000357f50 sp=0xc000357f30 pc=0x558b6bc7e08e
runtime.gcBgMarkWorker()
	/usr/lib/golang/src/runtime/mgc.go:1293 +0xe5 fp=0xc000357fe0 sp=0xc000357f50 pc=0x558b6bc5f205
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000357fe8 sp=0xc000357fe0 pc=0x558b6bcb1dc1
created by runtime.gcBgMarkStartWorkers in goroutine 1
	/usr/lib/golang/src/runtime/mgc.go:1217 +0x1c

goroutine 7 [select, locked to thread]:
runtime.gopark(0xc00006d7a8?, 0x2?, 0x29?, 0xe3?, 0xc00006d7a4?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc00006d638 sp=0xc00006d618 pc=0x558b6bc7e08e
runtime.selectgo(0xc00006d7a8, 0xc00006d7a0, 0x0?, 0x0, 0x0?, 0x1)
	/usr/lib/golang/src/runtime/select.go:327 +0x725 fp=0xc00006d758 sp=0xc00006d638 pc=0x558b6bc8e8a5
runtime.ensureSigM.func1()
	/usr/lib/golang/src/runtime/signal_unix.go:1014 +0x1a5 fp=0xc00006d7e0 sp=0xc00006d758 pc=0x558b6bca84a5
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00006d7e8 sp=0xc00006d7e0 pc=0x558b6bcb1dc1
created by runtime.ensureSigM in goroutine 1
	/usr/lib/golang/src/runtime/signal_unix.go:997 +0xc8

goroutine 39 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
	/usr/lib/golang/src/runtime/lock_futex.go:236 +0x29 fp=0xc0003547a0 sp=0xc000354768 pc=0x558b6bc4f429
os/signal.signal_recv()
	/usr/lib/golang/src/runtime/sigqueue.go:152 +0x29 fp=0xc0003547c0 sp=0xc0003547a0 pc=0x558b6bcae2e9
os/signal.loop()
	/usr/lib/golang/src/os/signal/signal_unix.go:23 +0x13 fp=0xc0003547e0 sp=0xc0003547c0 pc=0x558b6be18973
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0003547e8 sp=0xc0003547e0 pc=0x558b6bcb1dc1
created by os/signal.Notify.func1.1 in goroutine 1
	/usr/lib/golang/src/os/signal/signal.go:151 +0x1f

goroutine 40 [select]:
runtime.gopark(0xc000354fb0?, 0x2?, 0x0?, 0x0?, 0xc000354eac?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000080d38 sp=0xc000080d18 pc=0x558b6bc7e08e
runtime.selectgo(0xc000080fb0, 0xc000354ea8, 0xc00035fae0?, 0x0, 0x0?, 0x1)
	/usr/lib/golang/src/runtime/select.go:327 +0x725 fp=0xc000080e58 sp=0xc000080d38 pc=0x558b6bc8e8a5
github.com/containers/podman/v4/libpod/shutdown.Start.func1()
	/builddir/build/BUILD/podman-4.7.2/libpod/shutdown/handler.go:48 +0x87 fp=0xc000080fe0 sp=0xc000080e58 pc=0x558b6c9c1527
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000080fe8 sp=0xc000080fe0 pc=0x558b6bcb1dc1
created by github.com/containers/podman/v4/libpod/shutdown.Start in goroutine 1
	/builddir/build/BUILD/podman-4.7.2/libpod/shutdown/handler.go:47 +0xf1

goroutine 41 [chan receive]:
runtime.gopark(0xc0002f6690?, 0x1?, 0x0?, 0x0?, 0xd4?)
	/usr/lib/golang/src/runtime/proc.go:398 +0xce fp=0xc000355708 sp=0xc0003556e8 pc=0x558b6bc7e08e
runtime.chanrecv(0xc000576540, 0xc0003557c8, 0x1)
	/usr/lib/golang/src/runtime/chan.go:583 +0x3cd fp=0xc000355780 sp=0xc000355708 pc=0x558b6bc49d0d
runtime.chanrecv2(0xc0003faa80?, 0x0?)
	/usr/lib/golang/src/runtime/chan.go:447 +0x12 fp=0xc0003557a8 sp=0xc000355780 pc=0x558b6bc49932
github.com/containers/podman/v4/libpod.(*Runtime).startWorker.func1()
	/builddir/build/BUILD/podman-4.7.2/libpod/runtime_worker.go:6 +0x6c fp=0xc0003557e0 sp=0xc0003557a8 pc=0x558b6cb21bac
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0003557e8 sp=0xc0003557e0 pc=0x558b6bcb1dc1
created by github.com/containers/podman/v4/libpod.(*Runtime).startWorker in goroutine 1
	/builddir/build/BUILD/podman-4.7.2/libpod/runtime_worker.go:5 +0x8e
command terminated with exit code 134

Describe the results you expected

No SIGSEGV, good behaviour every time.

podman info output

This is from the first `oc exec -it podman-rootless -- podman info`, before it started failing:

host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.8-2.fc39.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: '
  cpuUtilization:
    idlePercent: 97.84
    systemPercent: 0.6
    userPercent: 1.56
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: container
    version: "39"
  eventLogger: file
  freeLocks: 2048
  hostname: podman-rootless
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
  kernel: 5.14.0-284.40.1.el9_2.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 18485194752
  memTotal: 33100292096
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc39.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc39.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.11.2-1.fc39.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.11.2
      commit: ab0edeef1c331840b025e8f1d38090cfb8a0509d
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20231107.g56d9f6d-1.fc39.x86_64
    version: |
      pasta 0^20231107.g56d9f6d-1.fc39.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.fc39.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 2h 21m 4.00s (Approximately 0.08 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/podman/.local/share/containers/storage
  graphRootAllocated: 321517498368
  graphRootUsed: 16652500992
  graphStatus:
    Backing Filesystem: overlayfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /tmp/containers-user-1000/containers
  transientStore: false
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 1698762721
  BuiltTime: Tue Oct 31 14:32:01 2023
  GitCommit: ""
  GoVersion: go1.21.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.2

Podman in a container

Yes

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details


Additional information


@adelton adelton added the kind/bug Categorizes issue or PR as related to a bug. label Nov 23, 2023
@Luap99
Member

Luap99 commented Nov 24, 2023

The line points to

NetworkBackendInfo: r.network.NetworkInfo(),

Can you run the command with --log-level debug? It is not clear to me why that would ever be nil there. As best as I can tell, something is preventing us from re-execing in the userns, and we do not configure the storage or network when that happens.

giuseppe added a commit to giuseppe/libpod that referenced this issue Nov 24, 2023
Previously, the setup only checked for the CAP_SYS_ADMIN capability,
which could be not enough with containerized Podman where
CAP_SYS_ADMIN might be set for an unprivileged user.

Closes: containers#20766

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
giuseppe added a commit to giuseppe/libpod that referenced this issue Nov 24, 2023
Previously, the setup only checked for the CAP_SYS_ADMIN capability,
which could be not enough with containerized Podman where
CAP_SYS_ADMIN might be set for an unprivileged user.

Closes: containers#20766

[NO NEW TESTS NEEDED] needs containerized Podman

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@giuseppe
Member

opened a PR: #20769

This is a bit of a weird corner case, as it requires podman to run with euid != 0 and CAP_SYS_ADMIN, which is a big hammer in this case since it applies to the host.

To reproduce with Podman you need:

$ podman run -it --rm --user 1000 --cap-add ALL quay.io/podman/stable sh -c 'podman info; podman info'

@adelton
Contributor Author

adelton commented Dec 4, 2023

Thanks for the fix, @giuseppe. Are there some (nightly?) images somewhere where the fixed build would be available before the next release?

@giuseppe
Member

giuseppe commented Dec 5, 2023

You could use the Copr build created for each PR, e.g.: https://copr.fedorainfracloud.org/coprs/packit/containers-podman-20769/build/6691155/

@Luap99
Member

Luap99 commented Dec 5, 2023

There should be a ready-to-use quay.io/podman/upstream image that is built based on the podman-next copr.

@adelton
Contributor Author

adelton commented Dec 5, 2023

Thanks!

openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue Jan 30, 2024
Previously, the setup only checked for the CAP_SYS_ADMIN capability,
which could be not enough with containerized Podman where
CAP_SYS_ADMIN might be set for an unprivileged user.

Closes: containers#20766

[NO NEW TESTS NEEDED] needs containerized Podman

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
giuseppe added a commit to giuseppe/libpod that referenced this issue Jan 30, 2024
Previously, the setup only checked for the CAP_SYS_ADMIN capability,
which could be not enough with containerized Podman where
CAP_SYS_ADMIN might be set for an unprivileged user.

Closes: containers#20766

[NO NEW TESTS NEEDED] needs containerized Podman

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
(cherry picked from commit 41a6b99)
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Mar 5, 2024
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 5, 2024