podman generate kube results in broken ports configuration #3408

Closed
Fodoj opened this issue Jun 22, 2019 · 9 comments · Fixed by #3417
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@Fodoj
Contributor

Fodoj commented Jun 22, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

YAML generated by podman generate kube contains a broken ports configuration. There is also a bonus bug, found while removing the pod.

Steps to reproduce the issue:

    $ podman pod create --name postgresql -p 5432 -p 9187
    baf367c1e7baf627c383ef1d80efa69b34dc9b0c0c06839b371fc5599c0d0e08
    $ podman run -d --pod postgresql -e POSTGRES_PASSWORD=password postgres:latest
    9b8f325ff257d5670e1a1161c4bef168cf0bc7be0d3308da76a560678c389692
    $ podman run -d --pod postgresql -e DATA_SOURCE_NAME="postgresql://postgres:password@localhost:5432/postgres?sslmode=disable" wrouesnel/postgres_exporter
    0b933d2d4f056670143917057f67b7d933c27097fab3a274b79e52447a059046
    $ podman generate kube postgresql > postgresql.yaml
    $ podman pod rm postgresql -f

The last command throws this error:

    ERRO[0000] cleanup volume (&{3e9db716f4210d923945df5c92f17231a171305c517b395eee5f5bc1b3d67072 /var/lib/postgresql/data [rprivate rw nodev rbind]}): volume 3e9db716f4210d923945df5c92f17231a171305c517b395eee5f5bc1b3d67072 is being used by the following container(s): 9b8f325ff257d5670e1a1161c4bef168cf0bc7be0d3308da76a560678c389692: volume is being used
    baf367c1e7baf627c383ef1d80efa69b34dc9b0c0c06839b371fc5599c0d0e08

But that's probably okay, because the pod is removed and all is well. Let's continue on to the primary bug of this issue:

    $ podman play kube postgresql.yaml

This throws:

    ERRO[0000] error starting some container dependencies
    ERRO[0000] "error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]"
    Error: error starting some containers: internal libpod error

Describe the results you received:

A faulty postgresql.yaml that is unusable due to its incorrect port configuration. Somehow the command produced this port config for both containers:

    ports:
    - containerPort: 5432
      hostPort: 5432
      protocol: TCP
    - containerPort: 9187
      hostPort: 9187
      protocol: TCP
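
For reference, a minimal sketch of the corrected layout, with each port kept only on its own container (the container names here are illustrative, not the ones in the generated file):

    containers:
    - name: postgresql
      ports:
      - containerPort: 5432
        hostPort: 5432
        protocol: TCP
    - name: exporter
      ports:
      - containerPort: 9187
        hostPort: 9187
        protocol: TCP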

With the ports split that way (5432 only on the PostgreSQL container, 9187 only on the exporter), podman play kube works. But then there is another issue: for postgres_exporter, Podman generated the following config:

  containers:
  - command:
    - /postgres_exporter
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    - name: DATA_SOURCE_NAME
      value: postgresql://postgres:password@localhost:5432/postgres?sslmode=disable
    image: docker.io/wrouesnel/postgres_exporter:latest
    name: cockymirzakhani
    ports:
    - containerPort: 9187
      hostPort: 9187
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    workingDir: /

Notice the command part: this is wrong, because the image itself has /postgres_exporter as its entrypoint. So when I run podman play kube, the container tries to run /postgres_exporter /postgres_exporter and fails to start. If I remove the generated command, it starts just fine (see the sketch below). And if I run podman pod rm postgresql -f on a pod started via podman play kube with a fixed postgresql.yaml, then I don't hit the bonus bug described earlier.
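
A minimal sketch of that manual fix, assuming everything else in the generated spec stays as-is; the command list is simply deleted, so the image's own entrypoint takes effect:

  containers:
  - env:
    - name: DATA_SOURCE_NAME
      value: postgresql://postgres:password@localhost:5432/postgres?sslmode=disable
    image: docker.io/wrouesnel/postgres_exporter:latest
    name: cockymirzakhani
    ports:
    - containerPort: 9187
      hostPort: 9187
      protocol: TCP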

Describe the results you expected:

Correct YAML that can be used as-is.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.4.2

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.5
  podman version: 1.4.2
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.2-1.fc30.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 0.2.0, commit: d7234dc01ae2ef08c42e3591e876723ad1c914c9'
  Distribution:
    distribution: fedora
    version: "30"
  MemFree: 4364161024
  MemTotal: 25005821952
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: e3b4c1108f7d1bf0d09ab612ea09927d9b59b4e3
      spec: 1.0.1-dev
  SwapFree: 12574236672
  SwapTotal: 12574519296
  arch: amd64
  cpus: 8
  hostname: localhost.localdomain
  kernel: 5.0.14-300.fc30.x86_64
  os: linux
  rootless: true
  uptime: 390h 49m 52.58s (Approximately 16.25 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/fodoj/.config/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /home/fodoj/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 13
  RunRoot: /tmp/1000
  VolumePath: /home/fodoj/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

Just my personal laptop with Fedora 30.

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 22, 2019
@baude
Member

baude commented Jun 23, 2019

To be certain: it appears this is rootless usage? Can you confirm?

@Fodoj
Contributor Author

Fodoj commented Jun 23, 2019

Yes, rootless

@giuseppe
Member

slirp4netns will try to bind the specified ports on the host. Could you please verify that ports 5432 and 9187 are free on the host and not already in use?
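
For example, one generic way to check (not a command from the original thread):

    $ ss -tlnp | grep -E ':(5432|9187)'

No output means nothing on the host is listening on either port.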

@Fodoj
Contributor Author

Fodoj commented Jun 24, 2019

They are free and available. Please note that the issue is not about binding these ports; it is about the configuration Podman generates, which assigns both ports to both containers. If I leave only the relevant port on each container, then it works just fine.

@giuseppe
Member

Thanks for the clarification.

@baude shouldn't we create a single network namespace for the infra container and let the other containers join it?

@mheon
Member

mheon commented Jun 24, 2019

I think that we're looking up the ports for each container individually during generate kube. I recall a change a month or so back where podman port was made to understand that containers sharing a network namespace all have the same ports forwarded; that may be what broke us here, because the lookup for each container now returns the ports forwarded to all containers in the pod.
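
For illustration only, a self-contained sketch of that deduplication idea, using simplified hypothetical types (this is not the actual libpod code; the real fix landed in #3417):

    package main

    import "fmt"

    // Port is a simplified stand-in for a podman port mapping (hypothetical type).
    type Port struct {
        ContainerPort int
        HostPort      int
        Protocol      string
    }

    // dedupePodPorts models the workaround: every container in the pod now
    // reports the pod's full port list, so concatenating per-container lookups
    // duplicates every mapping. Keeping only the first occurrence of each
    // hostPort/protocol pair restores one entry per forwarded port.
    func dedupePodPorts(perContainer [][]Port) []Port {
        seen := map[string]bool{}
        var out []Port
        for _, ports := range perContainer {
            for _, p := range ports {
                key := fmt.Sprintf("%d/%s", p.HostPort, p.Protocol)
                if !seen[key] {
                    seen[key] = true
                    out = append(out, p)
                }
            }
        }
        return out
    }

    func main() {
        // As in this issue: both containers report both pod ports.
        both := []Port{{5432, 5432, "TCP"}, {9187, 9187, "TCP"}}
        fmt.Println(dedupePodPorts([][]Port{both, both}))
        // Output: [{5432 5432 TCP} {9187 9187 TCP}]
    }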

@rhatdan
Member

rhatdan commented Jun 24, 2019

@haircommander Is this something you changed?

@mheon
Member

mheon commented Jun 24, 2019

I got this one

@mheon mheon self-assigned this Jun 24, 2019
mheon added a commit to mheon/libpod that referenced this issue Jun 24, 2019
This likely broke when we made containers able to detect that
they shared a network namespace and grab ports from the
dependency container - prior to that, we could grab ports without
concern for conflict, only the infra container had them. Now, all
containers in a pod will return the same ports, so we have to
work around this.

Fixes containers#3408

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@mheon
Member

mheon commented Jun 24, 2019

#3417

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023