Is the "podman play kube" feature available in the version v1.0.0? #2209

Closed
warmchang opened this issue Jan 23, 2019 · 13 comments
Labels: kind/bug, locked - please file new issue/PR

@warmchang (Contributor) commented Jan 23, 2019

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Description
There is an exciting feature in the v1.0.0 release:
Added the podman play kube command to create pods and containers from Kubernetes pod YAML.

I couldn't wait to try it, but I hit an error. Am I using it wrong?

Steps to reproduce the issue:

  1. Prepare the pod manifest (pod-initContainer.yaml):
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: pod-initcontainer
  name: pod-initcontainer
spec:
  initContainers:
    - name: wait
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ['sh', '-c', 'echo The wait app is running! && sleep 10']
  containers:
  - name: pod-initcontainer
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
  1. Run "podman --log-level=debug play kube ./pod-initContainer.yaml"

Describe the results you received:
It returned an error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x10f5cb5]

goroutine 1 [running]:
main.kubeContainerToCreateConfig(0xc4202c09e0, 0x11, 0xc4203f5954, 0x7, 0xc42008b580, 0x2, 0x4, 0x0, 0x0, 0x0, ...)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/cmd/podman/play_kube.go:210 +0x145
main.playKubeYAMLCmd(0xc42013ac60, 0x0, 0x0)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/cmd/podman/play_kube.go:153 +0xd18
github.com/containers/libpod/vendor/github.com/urfave/cli.HandleAction(0x122fd80, 0x14b2100, 0xc42013ac60, 0x0, 0xc42029c600)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:501 +0xc8
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x1416039, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1445ddf, 0x23, 0x0, ...)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:165 +0x47d
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).RunAsSubcommand(0xc420136fc0, 0xc42013a9a0, 0x0, 0x0)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:383 +0xa6b
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.startApp(0x14161b1, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1430f2c, 0x17, 0x0, ...)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:377 +0x8d9
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x14161b1, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1430f2c, 0x17, 0x0, ...)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:103 +0x838
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).Run(0xc420136e00, 0xc42003a140, 0x5, 0x5, 0x0, 0x0)
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:259 +0x6e8
main.main()
        /build/podman-Sye8Gm/podman-1.0.1/src/github.com/containers/libpod/cmd/podman/main.go:213 +0x115a

Checking the result, the pod contains only the infra container:

$ podman ps -a --pod
CONTAINER ID  IMAGE                 COMMAND  CREATED         STATUS             PORTS  NAMES    POD
8982dfccb367  k8s.gcr.io/pause:3.1           57 seconds ago  Up 57 seconds ago         2055df0dc635-infra  2055df0dc635
$ podman pod list
POD ID         NAME                STATUS    CREATED              # OF CONTAINERS   INFRA ID
2055df0dc635   pod-initcontainer   Running   About a minute ago   1                 8982dfccb367

Describe the results you expected:
The pod runs without error.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

$ podman --version
podman version 1.0.1-dev
$

Output of podman info:

$ podman info
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: 'cri-o-1.12: /usr/lib/crio/bin/conmon'
    path: /usr/lib/crio/bin/conmon
    version: 'conmon version 1.12.5-dev, commit: '
  Distribution:
    distribution: ubuntu
    version: "16.04"
  MemFree: 919945216
  MemTotal: 2587828224
  OCIRuntime:
    package: 'cri-o-runc: /usr/lib/cri-o-runc/sbin/runc'
    path: /usr/lib/cri-o-runc/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 2
  hostname: minikube
  kernel: 4.4.0-138-generic
  os: linux
  rootless: false
  uptime: 38m 26.51s
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage

$

Additional environment details (AWS, VirtualBox, physical, etc.):
The Katacoda environment.

$ uname -a
Linux minikube 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

openshift-ci-robot added the kind/bug label Jan 23, 2019
@warmchang (Contributor Author)

Tried this on Fedora 29, got the same result: 🤔

 ⚡ root@fedora1  /home/fedora  podman play kube ./yaml/pod-connectivity-container-with-initContainer.yaml
a951a24fa320d64a14cc042f295d729d4d0b7b23b55eb1e69bd352380d77a696
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x10af636]

goroutine 1 [running]:
main.kubeContainerToCreateConfig(0xc00031c240, 0x24, 0xc00031c210, 0x2a, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/cmd/podman/play_kube.go:210 +0x126
main.playKubeYAMLCmd(0xc0001269a0, 0x0, 0x0)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/cmd/podman/play_kube.go:153 +0xcff
github.com/containers/libpod/vendor/github.com/urfave/cli.HandleAction(0x11e3160, 0x14473e0, 0xc0001269a0, 0x0, 0xc000220780)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:501 +0xc8
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x13a7497, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13d74e8, 0x23, 0x0, ...)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:165 +0x459
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).RunAsSubcommand(0xc000178e00, 0xc000126580, 0x0, 0x0)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:383 +0x827
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.startApp(0x13a7613, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13c2ea2, 0x17, 0x0, ...)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:377 +0x808
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x13a7613, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13c2ea2, 0x17, 0x0, ...)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:103 +0x80f
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).Run(0xc000178c40, 0xc0000300c0, 0x4, 0x4, 0x0, 0x0)
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:259 +0x6bb
main.main()
        /builddir/build/BUILD/libpod-82e80110c3f2d8728745c47e340f3bee4d408846/_build/src/github.com/containers/libpod/cmd/podman/main.go:273 +0x15a6
 ✘ ⚡ root@fedora1  /home/fedora  uname -a
Linux fedora1.novalocal 4.19.15-300.fc29.x86_64 #1 SMP Mon Jan 14 16:32:35 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
 ⚡ root@fedora1  /home/fedora  cat /etc/redhat-release
Fedora release 29 (Twenty Nine)
 ⚡ root@fedora1  /home/fedora  podman --version
podman version 1.0.0
 ⚡ root@fedora1  /home/fedora 

baude self-assigned this Jan 23, 2019
@baude (Member) commented Jan 23, 2019

I will try your YAML today. Thanks for providing it! The play feature is also geared toward playing YAML that podman generates with podman generate kube. Did you create your containers/pods in podman first and then generate the YAML?
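
For reference, a minimal sketch of that round-trip; the container name "kubetest" is hypothetical, any unused name works:

$ podman run -d --name kubetest busybox sleep 3600   # create a container with podman
$ podman generate kube kubetest > kubetest.yaml      # export it as Kubernetes YAML
$ podman rm -f kubetest                              # free the name before replaying
$ podman play kube ./kubetest.yaml                   # recreate it from the YAML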

@warmchang (Contributor Author)

@baude I tried the "podman generate kube" function; it works fine.

 ⚡ root@fedora1  /home/fedora  podman ps -a
CONTAINER ID  IMAGE                                       COMMAND               CREATED       STATUS           PORTS  NAMES
5da604ea3fee  localhost/connectivity-container:alpine3.8  ./connectivity-co...  23 hours ago  Up 22 hours ago         nostalgic_archimedes
 ⚡ root@fedora1  /home/fedora  podman generate kube -s 5da604ea3fee > test.yaml
 ⚡ root@fedora1  /home/fedora  cat test.yaml
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.0.0
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-01-24T02:50:08Z
  labels:
    app: nostalgicarchimedes
  name: nostalgicarchimedes
spec:
  containers:
  - command:
    - ./connectivity-container
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    image: localhost/connectivity-container:alpine3.8
    name: nostalgicarchimedes
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    workingDir: /root/
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-01-24T02:50:08Z
  labels:
    app: nostalgicarchimedes
  name: nostalgicarchimedes
spec:
  selector:
    app: nostalgicarchimedes
  type: NodePort
status:
  loadBalancer: {}

 ⚡ root@fedora1  /home/fedora  

But after running "podman play kube" to start from that YAML (after deleting the original container), it returned a new error: "ERRO[0000] name nostalgicarchimedes is in use: container already exists".


 ⚡ root@fedora1  /home/fedora  podman pod list
 ⚡ root@fedora1  /home/fedora  podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
 ⚡ root@fedora1  /home/fedora  podman --log-level=debug play kube ./test.yaml
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay test mount with multiple lowers succeeded
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Created cgroup path machine.slice/machine-libpod_pod_5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5.slice for parent machine.slice and name libpod_pod_5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5
DEBU[0000] Created cgroup machine.slice/machine-libpod_pod_5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5.slice
DEBU[0000] Got pod cgroup as machine.slice/machine-libpod_pod_5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5.slice
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]k8s.gcr.io/pause:3.1"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] exporting opaque data as blob "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] exporting opaque data as blob "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] exporting opaque data as blob "sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
DEBU[0000] created container "9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2"
DEBU[0000] container "9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2" has work directory "/var/lib/containers/storage/overlay-containers/9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2/userdata"
DEBU[0000] container "9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2" has run directory "/var/run/containers/storage/overlay-containers/9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2/userdata"
DEBU[0000] Made network namespace at /var/run/netns/cni-ca1f1b6d-dbe9-2f10-9011-50621a21bf59 for container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2
INFO[0000] Got pod network &{Name:5ba92317e674-infra Namespace:5ba92317e674-infra ID:9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 NetNS:/var/run/netns/cni-ca1f1b6d-dbe9-2f10-9011-50621a21bf59 PortMappings:[] Networks:[] NetworkConfig:map[]}
INFO[0000] About to add CNI network cni-loopback (type=loopback)
DEBU[0000] mounted container "9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2" at "/var/lib/containers/storage/overlay/9c1de85f4f5f07b782da57d7707a84a3da848158e0a29b36301401059f15a252/merged"
DEBU[0000] Created root filesystem for container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 at /var/lib/containers/storage/overlay/9c1de85f4f5f07b782da57d7707a84a3da848158e0a29b36301401059f15a252/merged
INFO[0000] Got pod network &{Name:5ba92317e674-infra Namespace:5ba92317e674-infra ID:9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 NetNS:/var/run/netns/cni-ca1f1b6d-dbe9-2f10-9011-50621a21bf59 PortMappings:[] Networks:[] NetworkConfig:map[]}
INFO[0000] About to add CNI network podman (type=bridge)
DEBU[0000] [0] CNI result: Interfaces:[{Name:cni0 Mac:02:bc:75:18:6e:81 Sandbox:} {Name:vethcfd2450c Mac:86:26:74:f9:be:55 Sandbox:} {Name:eth0 Mac:f2:fd:0e:dd:02:4b Sandbox:/var/run/netns/cni-ca1f1b6d-dbe9-2f10-9011-50621a21bf59}], IP:[{Version:4 Interface:0xc0004b8270 Address:{IP:10.88.0.13 Mask:ffff0000} Gateway:10.88.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}], DNS:{Nameservers:[] Domain: Search:[] Options:[]}
INFO[0000] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[0000] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
WARN[0000] User mount overriding libpod mount at "/dev/shm"
DEBU[0000] Setting CGroups for container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 to machine-libpod_pod_5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5.slice:libpod:9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2
WARN[0000] failed to parse language "en_US.UTF-8": language: tag is not well-formed
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] reading hooks from /etc/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 at /var/lib/containers/storage/overlay-containers/9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2/userdata/config.json
DEBU[0000] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/libexec/podman/conmon    args=[-s -c 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 -u 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 -r /usr/sbin/runc -b /var/lib/containers/storage/overlay-containers/9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2/userdata -p /var/run/containers/storage/overlay-containers/9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2/userdata/pidfile -l /var/lib/containers/storage/overlay-containers/9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog]
INFO[0000] Running conmon under slice machine-libpod_pod_5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5.slice and unitName libpod-conmon-9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2.scope
DEBU[0000] Received container pid: 9306
DEBU[0000] Created container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 in OCI runtime
DEBU[0000] Starting container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2 with command [/pause]
DEBU[0000] Started container 9bffd468df74431996d65e59d29c2b7612221f28931073da1074f10dfab286d2
5ba92317e6742dd4fa5493a57c11636c1bb0bf765463bd9b8eb6338eeadaa2e5
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]localhost/connectivity-container:alpine3.8"
DEBU[0000] Using container netmode
DEBU[0000] Using container ipcmode
DEBU[0000] appending name nostalgicarchimedes
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@5cb7e3f2249c1912e6ea2dca29edf92f1528d343547a4b6f6b0134cbd031c783"
DEBU[0000] exporting opaque data as blob "sha256:5cb7e3f2249c1912e6ea2dca29edf92f1528d343547a4b6f6b0134cbd031c783"
DEBU[0000] created container "a4f71af795cb3a6043cff86f4f675665f2ee258261b2deda424145b8fc5c044b"
DEBU[0000] container "a4f71af795cb3a6043cff86f4f675665f2ee258261b2deda424145b8fc5c044b" has work directory "/var/lib/containers/storage/overlay-containers/a4f71af795cb3a6043cff86f4f675665f2ee258261b2deda424145b8fc5c044b/userdata"
DEBU[0000] container "a4f71af795cb3a6043cff86f4f675665f2ee258261b2deda424145b8fc5c044b" has run directory "/var/run/containers/storage/overlay-containers/a4f71af795cb3a6043cff86f4f675665f2ee258261b2deda424145b8fc5c044b/userdata"
DEBU[0000] Storage is already unmounted, skipping...
ERRO[0000] name nostalgicarchimedes is in use: container already exists
 ✘ ⚡ root@fedora1  /home/fedora 
 ✘ ⚡ root@fedora1  /home/fedora  podman pod list
POD ID         NAME                  STATUS    CREATED         # OF CONTAINERS   INFRA ID
5ba92317e674   nostalgicarchimedes   Running   8 seconds ago   1                 9bffd468df74
 ⚡ root@fedora1  /home/fedora  podman ps -a --pod
CONTAINER ID  IMAGE                 COMMAND  CREATED         STATUS             PORTS  NAMES               POD
9bffd468df74  k8s.gcr.io/pause:3.1           14 seconds ago  Up 14 seconds ago         5ba92317e674-infra  5ba92317e674
 ⚡ root@fedora1  /home/fedora 

@baude (Member) commented Jan 30, 2019

That is because the container (or pod) you created the YAML from still exists under that same name. You might also be interested in https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/
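
A hedged illustration of the cleanup baude suggests, reusing the name from the transcript above:

$ podman rm -f nostalgicarchimedes       # remove the leftover container holding the name
$ podman pod rm -f nostalgicarchimedes   # or the leftover pod, if the name belongs to a pod
$ podman play kube ./test.yaml           # replay now that the name is free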

@rhatdan (Member) commented Jan 31, 2019

If you delete the pod and the container, does it work?

@warmchang (Contributor Author)

Thank you for the reply!

Actually, I deleted all the pods and containers before running "podman play kube".

See #2209 (comment) for the result of "podman pod list" and "podman ps -a":

[screenshot: output of "podman pod list" and "podman ps -a", both empty]

I also ran through the article (https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/) and everything worked fine, so I have no idea what's wrong now.

@warmchang (Contributor Author)

Tried this in a public environment (https://www.katacoda.com/courses/kubernetes/launch-single-node-cluster) without a Kubernetes cluster (did not run "minikube start"), and got the same error.

$ podman pod rm -a -f
ee6fcf6ac6a91f792254c4fdb9a9aa7f5ae54f4bd6b94941f2664c3af7acb1ad
$ podman rm -a -f
$ podman pod rm -a -f
$ podman rm -a -f
$ podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
$ vi ./test.yaml
$ cat ./test.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: initcontainer
  name: initcontainer
spec:
  containers:
  - name: podtest
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
$ podman play kube ./test.yaml
b4d6b0696aefccb5586beb161a7a27683c51b474831a7f468825dc2bdec4015a
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x10f60a5]

goroutine 1 [running]:
main.kubeContainerToCreateConfig(0xc42041ba87, 0x7, 0xc42041ba80, 0x7, 0xc4200b55c0, 0x2, 0x4, 0x0, 0x0, 0x0, ...)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/cmd/podman/play_kube.go:210 +0x145
main.playKubeYAMLCmd(0xc420162c60, 0x0, 0x0)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/cmd/podman/play_kube.go:153 +0xd18
github.com/containers/libpod/vendor/github.com/urfave/cli.HandleAction(0x12301e0, 0x14b26a0, 0xc420162c60, 0x0, 0xc4202c03c0)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:501 +0xc8
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x1416519, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x14462de, 0x23, 0x0, ...)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:165 +0x47d
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).RunAsSubcommand(0xc42015efc0, 0xc420162840, 0x0, 0x0)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:383 +0xa6b
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.startApp(0x1416691, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x143140e, 0x17, 0x0, ...)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:377 +0x8d9
github.com/containers/libpod/vendor/github.com/urfave/cli.Command.Run(0x1416691, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x143140e, 0x17, 0x0, ...)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/command.go:103 +0x838
github.com/containers/libpod/vendor/github.com/urfave/cli.(*App).Run(0xc42015ee00, 0xc4200b4040, 0x4, 0x4, 0x0, 0x0)
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/vendor/github.com/urfave/cli/app.go:259 +0x6e8
main.main()
        /build/podman-G0XI_v/podman-1.0.1/src/github.com/containers/libpod/cmd/podman/main.go:213 +0x115a
$ podman --version
podman version 1.0.1-dev
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$

@baude (Member) commented Jan 31, 2019

@warmchang is the YAML above generated by podman or not?

@warmchang (Contributor Author)

@baude

  1. Is the "podman play kube" feature available in the version v1.0.0? #2209 (comment), the "test.yaml" is not generated by podman command in this test, but written by myself, compatible with the format of kubernetes.

  2. Is the "podman play kube" feature available in the version v1.0.0? #2209 (comment), in this test, the "test.yaml" is generated by podman.

@rhatdan (Member) commented Mar 8, 2019

@warmchang @baude Is this still an issue?

@baude (Member) commented Mar 8, 2019

@rhatdan we don't support running YAML that isn't generated by us. But I would really prefer that we don't panic.

rhatdan assigned haircommander and unassigned baude Mar 8, 2019
@rhatdan (Member) commented Mar 8, 2019

OK, I am giving this to @haircommander to look at; I want to make your statement less true over time.

baude added a commit to baude/podman that referenced this issue Mar 8, 2019
if an input YAML file lacks securitycontext and working dir for
a container, we need to be able to handle that.  if no default for
working dir is provided, we use a default of "/".

fixes issue containers#2209

Signed-off-by: baude <bbaude@redhat.com>
@baude (Member) commented Mar 8, 2019

The root cause was twofold: the input lacked a security context for its containers, which we do output from generate kube and which libpod requires to run a container; we can now handle that input being absent. Once that was fixed, it turned out the input file also didn't define a CWD, which is another libpod requirement.

We can now tolerate both omissions, setting a default CWD ("/") if one is not provided. I tested the YAML you provided in the Jan 31 comment and it worked perfectly.
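
For anyone stuck on a pre-fix release, a hedged workaround sketch: hand-written YAML should presumably avoid the panic if it supplies the two fields the old code assumed, workingDir and securityContext, mirroring what "podman generate kube" emits (the field names are standard Kubernetes; the values below are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: initcontainer
spec:
  containers:
  - name: podtest
    image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    workingDir: /                    # the pre-fix code required a working directory
    securityContext:                 # ...and dereferenced this without a nil check
      allowPrivilegeEscalation: true
      privileged: false
      readOnlyRootFilesystem: false
  restartPolicy: Always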

rhatdan closed this as completed Mar 8, 2019
muayyad-alsadi pushed a commit to muayyad-alsadi/libpod that referenced this issue Apr 21, 2019
github-actions bot added the locked - please file new issue/PR label Sep 24, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023