
SIGSEGV when using 'podman run --pod="new:<pod-name>" <containerimage>' #2124

Closed
matyat opened this issue Jan 10, 2019 · 8 comments · Fixed by #2138
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
rootless

Comments

matyat commented Jan 10, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When using the 'new:<pod-name>' syntax in --pod to create a new pod on run, podman panics with a segmentation fault.

Steps to reproduce the issue:

  1. podman pull gcr.io/google_containers/pause-amd64:3.0
  2. podman run --pod="new:test" gcr.io/google_containers/pause-amd64

Describe the results you received:

$ podman run --pod="new:test" gcr.io/google_containers/pause-amd64
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xf0 pc=0xc63982]

goroutine 1 [running]:
github.com/containers/libpod/vendor/github.com/containers/image/storage.storageReference.StringWithinTransport(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x152ab60, 0xc000551680, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/containers/image/storage/storage_reference.go:123 +0x42
github.com/containers/libpod/vendor/github.com/containers/image/storage.storageTransport.ParseStoreReference(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/containers/image/storage/storage_transport.go:175 +0x333
github.com/containers/libpod/libpod/image.(*Runtime).getImage(0xc000550e20, 0xc00032dfe0, 0x14, 0x0, 0x1523180, 0xc000551640)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/image/image.go:393 +0x7d
github.com/containers/libpod/libpod/image.(*Image).getLocalImage(0xc000132fc0, 0xc000132fc0, 0xc0005ae3c8, 0x76)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/image/image.go:230 +0x168
github.com/containers/libpod/libpod/image.(*Runtime).New(0xc000550e20, 0x15359a0, 0xc0000fe048, 0xc00032dfe0, 0x14, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/image/image.go:136 +0x47d
github.com/containers/libpod/libpod.(*Runtime).createInfraContainer(0xc000142240, 0x15359a0, 0xc0000fe048, 0xc0002fb680, 0x1, 0x0, 0xc0005ae5b8)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/runtime_pod_infra_linux.go:70 +0xb8
github.com/containers/libpod/libpod.(*Runtime).NewPod(0xc000142240, 0x15359a0, 0xc0000fe048, 0xc000552940, 0x7, 0x8, 0x0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/runtime_pod_linux.go:99 +0x357
main.parseCreateOpts(0x15359a0, 0xc0000fe048, 0xc00018a840, 0xc000142240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/create.go:544 +0x5c66
main.createContainer(0xc00018a840, 0xc000142240, 0x0, 0x0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/create.go:144 +0x183
main.runCmd(0xc00018a840, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/run.go:53 +0xed
github.com/containers/libpod/vendor/github.com/urfave/cli.HandleAction(0x11a07c0, 0x13fb6d8, 0xc00018a840, 0x0, 0xc0002feb40)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:501 +0xc8
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x135c58b, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1386b1d, 0x20, 0x0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:165 +0x459
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).Run(0xc0001dca80, 0xc000104040, 0x4, 0x4, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:259 +0x6bb
main.main()
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/main.go:263 +0x1577

Describe the results you expected:

To run the container in the newly created pod.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

$ podman --version
podman version 0.12.1.2

Output of podman info:

$ podman info
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-0.12.1.2-1.git9551f6b.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 67ab7549b44484cc3f201d7bb2b58b922f8edc24'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 40201265152
  MemTotal: 50558132224
  OCIRuntime:
    package: runc-1.0.0-66.dev.gitbbb17ef.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: ead425507b6ba28278ef71ad06582df97f2d5b5f
      spec: 1.0.1-dev
  SwapFree: 32212250624
  SwapTotal: 32212250624
  arch: amd64
  cpus: 32
  hostname: myates-uk-rdlabs-hpecorp-net
  kernel: 4.19.13-300.fc29.x86_64
  os: linux
  rootless: true
  uptime: 20h 27m 24.2s (Approximately 0.83 days)
insecure registries:
  registries:
  - storex-k8s-1.uk.rdlabs.hpecorp.net:31631
registries:
  registries: null
store:
  ContainerStore:
    number: 34
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mount_program=/usr/bin/fuse-overlayfs
  - overlay.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /var/home/myates/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
  ImageStore:
    number: 13
  RunRoot: /run/user/1000

Additional environment details (AWS, VirtualBox, physical, etc.):

This occurs when running rootless.

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 10, 2019

rhatdan commented Jan 10, 2019

$ podman run --pod="new:test" -d fedora sleep 1000
c00fe261a5aae8acb0423f1e41a4a95df0568f4350b55f348eaa1f6b69be4ef2

This works for me. Not sure what you are trying to do. Of course the tool should not segfault.


rhatdan commented Jan 10, 2019

# podman run --pod="new:test1" gcr.io/google_containers/pause-amd64
Trying to pull gcr.io/google_containers/pause-amd64...Failed
unable to pull gcr.io/google_containers/pause-amd64: unable to pull image: Error determining manifest MIME type for docker://gcr.io/google_containers/pause-amd64:latest: Error reading manifest latest in gcr.io/google_containers/pause-amd64: manifest unknown: Failed to fetch "latest" from request "/v2/google_containers/pause-amd64/manifests/latest".


mheon commented Jan 10, 2019

The segfault is somewhere in c/image and happens when creating the infra container.

Can you trigger this with podman pod create as well as podman run --pod=new:?
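The zeroed leading arguments (0x0) on the StringWithinTransport frame suggest the method's receiver was built without its store ever being set on the rootless path. A minimal, hypothetical Go sketch of that failure class and the obvious nil guard; the names store, storageReference, transport, and safeString below only mimic the symbols in the trace and are not the actual c/image code:

```go
package main

import "fmt"

// store stands in for the containers/storage store handle.
type store struct {
	graphRoot string
}

// storageReference loosely mimics the receiver in the trace; transport
// is the field that is nil when the panic fires.
type storageReference struct {
	transport *store
}

// StringWithinTransport dereferences s.transport without a nil check,
// which panics with "invalid memory address or nil pointer dereference"
// when the reference was constructed without a store.
func (s storageReference) StringWithinTransport() string {
	return s.transport.graphRoot
}

// safeString shows the guard: return an error instead of panicking
// when the store was never initialized.
func safeString(s storageReference) (string, error) {
	if s.transport == nil {
		return "", fmt.Errorf("storage reference has no store; was the (rootless) store initialized?")
	}
	return s.StringWithinTransport(), nil
}

func main() {
	// A zero-valued reference, like the zeroed receiver in the trace.
	_, err := safeString(storageReference{})
	fmt.Println(err)
}
```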


mheon commented Jan 10, 2019

@Logibox Is this running as rootless?


matyat commented Jan 10, 2019

@rhatdan the pause-amd64 container was just used as an example; this seems to occur with all other images too. In the original report I missed the pull command (which has the tag) that I ran before the run:
podman pull gcr.io/google_containers/pause-amd64:3.0

@mheon yes this is while running rootless.

podman pod create --name test works fine, and interestingly the pod does indeed get created despite the segfault when using podman run --pod=new:...


matyat commented Jan 10, 2019

Running rootless

$ podman run --pod="new:test" -d fedora sleep 1000
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xf0 pc=0xc63982]

goroutine 1 [running]:
github.com/containers/libpod/vendor/github.com/containers/image/storage.storageReference.StringWithinTransport(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x152ab60, 0xc00055f6c0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/containers/image/storage/storage_reference.go:123 +0x42
github.com/containers/libpod/vendor/github.com/containers/image/storage.storageTransport.ParseStoreReference(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/containers/image/storage/storage_transport.go:175 +0x333
github.com/containers/libpod/libpod/image.(*Runtime).getImage(0xc00055edc0, 0xc000560a80, 0x14, 0x0, 0x1523180, 0xc00055f660)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/image/image.go:393 +0x7d
github.com/containers/libpod/libpod/image.(*Image).getLocalImage(0xc000134fc0, 0xc000134fc0, 0xc0005b63c8, 0x76)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/image/image.go:230 +0x168
github.com/containers/libpod/libpod/image.(*Runtime).New(0xc00055edc0, 0x15359a0, 0xc000100048, 0xc000560a80, 0x14, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/image/image.go:136 +0x47d
github.com/containers/libpod/libpod.(*Runtime).createInfraContainer(0xc000144240, 0x15359a0, 0xc000100048, 0xc00039b5f0, 0x1, 0x0, 0xc0005b65b8)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/runtime_pod_infra_linux.go:70 +0xb8
github.com/containers/libpod/libpod.(*Runtime).NewPod(0xc000144240, 0x15359a0, 0xc000100048, 0xc000562940, 0x7, 0x8, 0x0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/libpod/runtime_pod_linux.go:99 +0x357
main.parseCreateOpts(0x15359a0, 0xc000100048, 0xc00018cc60, 0xc000144240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/create.go:544 +0x5c66
main.createContainer(0xc00018cc60, 0xc000144240, 0x0, 0x0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/create.go:144 +0x183
main.runCmd(0xc00018cc60, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/run.go:53 +0xed
github.com/containers/libpod/vendor/github.com/urfave/cli.HandleAction(0x11a07c0, 0x13fb6d8, 0xc00018cc60, 0x0, 0xc0002ee9c0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:501 +0xc8
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x135c58b, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1386b1d, 0x20, 0x0, ...)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:165 +0x459
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).Run(0xc0001cea80, 0xc000108000, 0x7, 0x7, 0x0, 0x0)
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:259 +0x6bb
main.main()
        /builddir/build/BUILD/libpod-9551f6bb379d4af56dfb63ddf0f3682e40a6694e/_build/src/github.com/containers/libpod/cmd/podman/main.go:263 +0x1577
$ podman pod ps
POD ID         NAME   STATUS    CREATED              # OF CONTAINERS   INFRA ID
b050721da16a   test   Created   About a minute ago   0   


mheon commented Jan 10, 2019

I can reproduce this with rootless, but not as root.

@mheon mheon added the rootless label Jan 10, 2019

mheon commented Jan 10, 2019

@giuseppe PTAL

giuseppe added a commit to giuseppe/libpod that referenced this issue Jan 11, 2019
Closes: containers#2124

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
mheon pushed a commit to mheon/libpod that referenced this issue Feb 8, 2019
Closes: containers#2124

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
mheon pushed a commit to mheon/libpod that referenced this issue Feb 8, 2019
Closes: containers#2124

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023