
Podman 5.0.1 can no longer connect to services running on container host on the default route IP address #22502

Closed
jdoss opened this issue Apr 25, 2024 · 5 comments
Labels: kind/bug, network, pasta

Comments

jdoss (Contributor) commented on Apr 25, 2024

Issue Description

I have a service that runs in a different set of containers on my workstation, and I can no longer connect to its ports that are bound to the main network interface when using a pod or container that is connected to a shared Podman network.

This all broke when I upgraded to Fedora 40, which ships podman 5.0.1. Everything worked as expected on Fedora 39 with podman-4.9.4-1.fc39.

You can see the port is closed when trying to scan it with nmap:

[root@traefik /]# nmap -p 8123 192.168.1.11
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-25 15:12 UTC
Nmap scan report for host.containers.internal (192.168.1.11)
Host is up (0.000046s latency).

PORT     STATE  SERVICE
8123/tcp closed polipo

When you nmap the default-route IP, you only see the ports published by the containers and pods attached to the podman network.

[root@traefik /]# nmap 192.168.1.11
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-25 15:12 UTC
Nmap scan report for host.containers.internal (192.168.1.11)
Host is up (0.0000070s latency).
Not shown: 995 closed tcp ports (reset)
PORT     STATE SERVICE
80/tcp   open  http
443/tcp  open  https
2022/tcp open  down
8000/tcp open  http-alt
8022/tcp open  oa-system

If you hit an IP address on a different interface, you can access the port just fine:

[root@traefik /]# nmap -p 8123 100.71.0.3
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-25 15:11 UTC
Nmap scan report for sw-0608 (100.71.0.3)
Host is up (0.00081s latency).

PORT     STATE SERVICE
8123/tcp open  polipo

Steps to reproduce the issue

  1. Create a network with podman network create.
  2. Create a pod or containers that use that shared network.
  3. Run a service on the host that is bound to all interfaces. From one of those containers, try to connect to that service's port on the IP of the interface that holds the default route, and note the failure.
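The checks in this report use nmap, which many container images don't ship. As a minimal stand-in for step 3, bash's /dev/tcp pseudo-device can tell you whether a TCP port accepts connections. The sketch below is self-contained and runs on one machine: the throwaway Python listener and 127.0.0.1 are placeholders for the real service and the host's default-route IP (192.168.1.11 in this report).

```shell
# Throwaway listener standing in for the host service on port 8123
# (hypothetical stand-in; any TCP listener bound to the host would do).
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# bash's /dev/tcp pseudo-device opens a TCP connection; the exit status
# tells you whether the port accepted it. From inside a container you
# would replace 127.0.0.1 with the host's default-route IP.
state=closed
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/8123' 2>/dev/null; then
  state=open
fi
echo "port 8123: $state"

kill $srv 2>/dev/null
```

On an affected podman 5.0.1 setup, running the connection check from a container against the host's default-route IP would report the port as closed, matching the nmap output above.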

Describe the results you received

I am unable to connect to services running on the default route IP address on the container host from containers.

Describe the results you expected

The same behavior as in podman-4.9.4-1.fc39.

podman info output

host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 94.08
    systemPercent: 1.6
    userPercent: 4.32
  cpus: 64
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "40"
  eventLogger: journald
  freeLocks: 1956
  hostname: sw-0608
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.8.7-300.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 18606710784
  memTotal: 270054465536
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-3.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240326.g4988e2b-1.fc40.x86_64
    version: |
      pasta 0^20240326.g4988e2b-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8561356800
  swapTotal: 8589930496
  uptime: 22h 35m 56.00s (Approximately 0.92 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/jdoss/.config/containers/storage.conf
  containerStore:
    number: 33
    paused: 0
    running: 33
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/jdoss/.local/share/containers/storage
  graphRootAllocated: 4000764313600
  graphRootUsed: 3439477415936
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 807
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/jdoss/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.1
  Built: 1711929600
  BuiltTime: Sun Mar 31 19:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Here is the podman network I am using:

[
     {
          "name": "traefik",
          "id": "211d3797abd8f70f3cef4ab592fbb0244de80588acc2690b630996f81e73bbb7",
          "driver": "bridge",
          "network_interface": "podman1",
          "created": "2024-04-24T15:36:12.794689453-05:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          },
          "containers": {
               "0d8fe3092c2ae6f33b7776aee5b1e1bcfe68308212023f80152f526f6e0a4a66": {
                    "name": "traefik-pod",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.40/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "b6:1f:6f:f5:6b:13"
                         }
                    }
               },
               "21a936c4438f1c81716ef41d0f96da8d904d9200f29b238dde9913e03e1855fd": {
                    "name": "lame-pod",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.36/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "62:dc:28:a0:5d:3b"
                         }
                    }
               },
               "257c36f5213bc70a87038f917d71a7449ecf01daa7cf904ca977f5cd43be027d": {
                    "name": "coredns-pod",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.31/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "d6:f2:f5:65:e2:b5"
                         }
                    }
               },
               "4caba4b6f8f78bc61a18058338cac98d460149cb8fe9402af34fd32ebc076be2": {
                    "name": "stepca-pod",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.37/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "52:a4:f6:61:1a:be"
                         }
                    }
               },
               "7cf21f3253434d698d8254c800c6adfbaf3f3c27e5735460e111f14b4cb209cd": {
                    "name": "hass-nodered",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.53/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "32:2c:05:66:19:0b"
                         }
                    }
               },
               "fefa23308be9d9048a26ee900e77e1d3b3be90d5b246a1c1f1b7d1e76982940c": {
                    "name": "paperless-pod",
                    "interfaces": {
                         "eth0": {
                              "subnets": [
                                   {
                                        "ipnet": "10.89.0.35/24",
                                        "gateway": "10.89.0.1"
                                   }
                              ],
                              "mac_address": "a2:a0:ea:49:11:53"
                         }
                    }
               }
          }
     }
]

Additional information

No response

jdoss added the kind/bug label on Apr 25, 2024
Luap99 (Member) commented on Apr 25, 2024

See the pasta section here: https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/

Although I thought I had fixed the host.containers.internal problem, I guess it was only fixed for --network pasta and not for bridge networks. Also, in order for this to work you would need a second IP on the host, because pasta by default always picks the IP of the default interface, so that address can never be reachable from inside the container.

Luap99 added the network and pasta labels on Apr 25, 2024
Luap99 (Member) commented on Apr 25, 2024

I'm closing this as a duplicate of #19213, as it has much more info on what needs to be done to fix this properly. It will need pasta fixes if there are no other IPs on the host.

Luap99 closed this as not planned (duplicate) on Apr 25, 2024
jdoss (Contributor, Author) commented on Apr 25, 2024

Thanks @Luap99! I will subscribe to #19213 for updates.

sbrivio-rh (Collaborator) commented

@Luap99 isn't #19213 exclusively around DNS issues? I didn't quite grasp the problem here yet, but I'm wondering if this one could be worked around by some explicit -t / -T options.

Luap99 (Member) commented on Apr 26, 2024

@Luap99 isn't #19213 exclusively around DNS issues?

No, it covers both: the host.containers.internal host entry, which needs a 1-to-1 IP mapping to the actual host IP, and resolv.conf, which already works fine with --dns-forward today.
