I have a service that runs in a different set of containers on my workstation, and I can no longer connect to its ports that are bound to the main network interface when using a pod or container that is connected to a shared podman network.
This all broke when I upgraded to Fedora 40, which ships podman 5.0.1. Everything worked as expected on Fedora 39 with podman-4.9.4-1.fc39.
You can see the port is closed when trying to scan it with nmap:
```
[root@traefik /]# nmap -p 8123 192.168.1.11
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-25 15:12 UTC
Nmap scan report for host.containers.internal (192.168.1.11)
Host is up (0.000046s latency).

PORT     STATE  SERVICE
8123/tcp closed polipo
```
When you nmap the default-route IP, you only see the open ports from the containers and pods that are attached to the podman network.
```
[root@traefik /]# nmap 192.168.1.11
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-25 15:12 UTC
Nmap scan report for host.containers.internal (192.168.1.11)
Host is up (0.0000070s latency).
Not shown: 995 closed tcp ports (reset)

PORT     STATE SERVICE
80/tcp   open  http
443/tcp  open  https
2022/tcp open  down
8000/tcp open  http-alt
8022/tcp open  oa-system
```
If you hit an IP address on a different interface, you can access the port just fine:
```
[root@traefik /]# nmap -p 8123 100.71.0.3
Starting Nmap 7.94 ( https://nmap.org ) at 2024-04-25 15:11 UTC
Nmap scan report for sw-0608 (100.71.0.3)
Host is up (0.00081s latency).

PORT     STATE SERVICE
8123/tcp open  polipo
```
Steps to reproduce the issue
1. Create a network with `podman network create`.
2. Create a pod or containers that use that shared network.
3. Run a service on the host that is bound to all interfaces. Try to connect to that service's port on the interface that has the default route and note the failure.
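The steps above can be sketched as a script. The port (8123) is from this report; the network and container names, the stand-in service, and the way of grabbing the host's default-route address are all just examples:

```shell
#!/usr/bin/env bash
set -x

# On the host: a stand-in service bound to all interfaces
python3 -m http.server 8123 --bind 0.0.0.0 &

# Create a shared podman network (name is an example)
podman network create shared-net

# Attach a container to that shared network
podman run -d --name repro --network shared-net docker.io/library/alpine sleep infinity

# Host's default-route source address (hypothetical way to grab it;
# assumes "ip route" prints "default via ... src <ADDR> ...")
HOST_IP=$(ip -4 route show default | awk '{print $9; exit}')

# On podman 5.0.1 with pasta this connection fails;
# on podman-4.9.4-1.fc39 it succeeded
podman exec repro wget -qO- -T 2 "http://$HOST_IP:8123/" || echo "connection failed"
```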
Describe the results you received
I am unable to connect to services running on the default route IP address on the container host from containers.
Describe the results you expected
The same behavior as in podman-4.9.4-1.fc39.
podman info output
```yaml
host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.fc40.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 94.08
    systemPercent: 1.6
    userPercent: 4.32
  cpus: 64
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: workstation
    version: "40"
  eventLogger: journald
  freeLocks: 1956
  hostname: sw-0608
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.8.7-300.fc40.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 18606710784
  memTotal: 270054465536
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.fc40.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-3.fc40.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.fc40.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20240326.g4988e2b-1.fc40.x86_64
    version: |
      pasta 0^20240326.g4988e2b-1.fc40.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-2.fc40.x86_64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8561356800
  swapTotal: 8589930496
  uptime: 22h 35m 56.00s (Approximately 0.92 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/jdoss/.config/containers/storage.conf
  containerStore:
    number: 33
    paused: 0
    running: 33
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/jdoss/.local/share/containers/storage
  graphRootAllocated: 4000764313600
  graphRootUsed: 3439477415936
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 807
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/jdoss/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.1
  Built: 1711929600
  BuiltTime: Sun Mar 31 19:00:00 2024
  GitCommit: ""
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.1
```
I thought I had fixed the host.containers.internal problem, but I guess it was only fixed for --network pasta and not for bridge networks. Also, in order for this to work you would need a second IP on the host, because pasta by default always picks the IP from the default interface, so that address can never be reachable from the container.
I'm closing this as a dup of #19213, as it has much more info on what needs to be done to fix this properly, but it will need pasta fixes if there are no other IPs on the host.
@Luap99 isn't #19213 exclusively about DNS issues? I haven't quite grasped the problem here yet, but I'm wondering if this one could be worked around by some explicit -t / -T options.
No, it is for both: the host.containers.internal host entry, which needs a 1:1 IP mapping to the actual host IP, and resolv.conf, which already works fine with --dns-forward today.
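For the plain `--network pasta` case (not the bridge network this report is about), the `-T` option mentioned above splices outbound connections from the namespace back to the host. A hedged sketch, assuming pasta's documented option pass-through syntax; the port is from this report and the image is just an example:

```shell
# -T,8123: connections made inside the container to 127.0.0.1:8123
# are spliced by pasta to port 8123 on the host's loopback, so a
# host-side service on that port becomes reachable without needing
# the host's default-route IP.
podman run --rm --network pasta:-T,8123 docker.io/library/alpine \
    wget -qO- http://127.0.0.1:8123/
```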
Podman in a container: No
Privileged Or Rootless: Rootless
Upstream Latest Release: Yes
Additional environment details
Here is the podman network I am using:
Additional information
No response