
Host unreachable from container with bridge network on Podman v5 #22653

Closed
n-hass opened this issue May 9, 2024 · 19 comments · Fixed by #22740
Labels
kind/bug Categorizes issue or PR as related to a bug. network Networking related issue or feature pasta pasta(1) bugs or features

Comments

@n-hass

n-hass commented May 9, 2024

Issue Description

I am running a web service on my host, which I would expect to be reachable from a bridge-networked container.

This works on Podman v4.7.2: podman run --rm --network=bridge docker.io/mwendler/wget host.containers.internal:8091

The same does not work on v5.0.2, failing with Connecting to 10.1.26.100:8091... failed: Connection refused.

Here, 10.1.26.100 is the host's eth0 address (host.containers.internal), but the result is the same if I use the bridge's gateway IP.

Steps to reproduce the issue


  1. Host a web server on the container host
  2. Start a container with podman run with --network=bridge
  3. Attempt to connect to the host using either host.containers.internal or the bridge interface's gateway IP
  4. Observe Connection refused error

Describe the results you received

Connections to the host from a container in bridge network mode are refused under Podman v5.0.2, whereas on v4 they were not.

Describe the results you expected

Container in bridge network mode can connect to the host using host.containers.internal

podman info output

host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /nix/store/ipbgl019v93p0kz2az8vcai27bj2qvdj-conmon-2.1.11/bin/conmon
    version: 'conmon version 2.1.11, commit: '
  cpuUtilization:
    idlePercent: 40.63
    systemPercent: 23.64
    userPercent: 35.73
  cpus: 20
  databaseBackend: boltdb
  distribution:
    codename: uakari
    distribution: nixos
    version: "24.05"
  eventLogger: journald
  freeLocks: 2044
  hostname: praetor
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.8.9
  linkmode: dynamic
  logDriver: journald
  memFree: 9704091648
  memTotal: 67015405568
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
      path: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: Unknown
    path: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: Unknown
    path: /nix/store/q4xhymb7hrc0448w3vn76va86nv59b0b-crun-1.15/bin/crun
    version: |-
      crun version 1.15
      commit: 1.15
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/pasta
    package: Unknown
    version: |
      pasta 2024_04_26.d03c4e2
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /nix/store/qd3sk2xsj9fdn4xvgicqqzd9hc5z3114-podman-5.0.2/libexec/podman/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.3.0
      commit: 8a4d4391842f00b9c940bb8f067964427eb0c964
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 31h 28m 10.00s (Approximately 1.29 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 4
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 375809638400
  graphRootUsed: 142480777216
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 10
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.0.2
  Built: 315532800
  BuiltTime: Tue Jan  1 10:30:00 1980
  GitCommit: ""
  GoVersion: go1.22.2
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Environment is a NixOS host.

Additional information

No response

@n-hass n-hass added the kind/bug Categorizes issue or PR as related to a bug. label May 9, 2024
@n-hass n-hass changed the title Host unreachable with v5 bridge network Host unreachable from container with bridge network on Podman v5 May 9, 2024
@mheon

mheon commented May 9, 2024

Can you provide a podman info from the working 4.7 install? I want to see if the network backend has changed between the two.

@n-hass

n-hass commented May 10, 2024

@mheon sure. See below

host:
  arch: amd64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /nix/store/53lq2zdbaqqny8765mgmvw70kgslxrc9-conmon-2.1.8/bin/conmon
    version: 'conmon version 2.1.8, commit: '
  cpuUtilization:
    idlePercent: 34.78
    systemPercent: 25.36
    userPercent: 39.86
  cpus: 20
  databaseBackend: boltdb
  distribution:
    codename: uakari
    distribution: nixos
    version: "24.05"
  eventLogger: journald
  freeLocks: 2031
  hostname: praetor
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 992
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 427680
      size: 65536
  kernel: 6.8.9
  linkmode: dynamic
  logDriver: journald
  memFree: 10010148864
  memTotal: 67015405568
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
      path: /nix/store/iyzsvszqksqlnn46bxfsn6xg56bnzk6p-podman-4.7.2/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: Unknown
    path: /nix/store/iyzsvszqksqlnn46bxfsn6xg56bnzk6p-podman-4.7.2/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: Unknown
    path: /nix/store/djjn2p02dnh1n9k9kf66ywz8q8b95mwb-crun-1.12/bin/crun
    version: |-
      crun version 1.12
      commit: 1.12
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /nix/store/iyzsvszqksqlnn46bxfsn6xg56bnzk6p-podman-4.7.2/libexec/podman/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 42h 40m 44.00s (Approximately 1.75 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/servhost/.config/containers/storage.conf
  containerStore:
    number: 17
    paused: 0
    running: 17
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/servhost/.local/share/containers/storage
  graphRootAllocated: 375809638400
  graphRootUsed: 142457978880
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 30
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/servhost/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 315532800
  BuiltTime: Tue Jan  1 10:30:00 1980
  GitCommit: ""
  GoVersion: go1.21.9
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.2

@coolbry95

I am experiencing the same issue.

I am not able to connect to any port that is in use on the host. I am able to ping the host. I am also able to connect to a port that is exposed by another container.

This is inside the container.

root@0e324e1f7e88:/# nc -v 192.168.1.100 443 # this is nginx running on the host
nc: connect to 192.168.1.100 port 443 (tcp) failed: Connection refused

root@0e324e1f7e88:/# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100): 56 data bytes
64 bytes from 192.168.1.100: seq=0 ttl=42 time=0.107 ms
64 bytes from 192.168.1.100: seq=1 ttl=42 time=0.137 ms
^C
--- 192.168.1.100 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.107/0.122/0.137 ms

root@0e324e1f7e88:/# nc 192.168.1.100 2343 # this is another container
asdf
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close

400 Bad Request^C

@mheon

mheon commented May 10, 2024

Can you try the 5.0 install with a container created with the --net=slirp4netns option?

@n-hass

n-hass commented May 10, 2024

@mheon Yep.
podman run --network=slirp4netns docker.io/mwendler/wget 10.1.26.100:8091 does work with the 5.0 install, no connection refused

@mheon

mheon commented May 10, 2024

@Luap99 Are we aware of this one on the Pasta side, or is this new?

@coolbry95

I was also using pasta before on 4.x. I upgraded from Fedora 39 to 40 and Podman 4.x to 5.x. pasta is set in ~/.config/containers/containers.conf.

~/.config/containers/containers.conf

[network]
default_rootless_network_cmd = "pasta"
pasta_options = ["--map-gw"]
[coolbry95@diamond ~]$ podman run -it --rm --net=slirp4netns fedora bash
[root@0b76d0341ac0 /]# nc 192.168.1.100 443
asdf
HTTP/1.1 400 Bad Request
Server: nginx
Date: Fri, 10 May 2024 02:48:08 GMT
Content-Type: text/html
Content-Length: 150
Connection: close
X-Frame-Options: SAMEORIGIN

<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>

Without the pasta_options set, it also fails to connect.

@lazyzyf

lazyzyf commented May 11, 2024

I am experiencing the same issue. How can I fix it?

@cemarriott

Experiencing the same issue here. My container host updated from CoreOS 39 to 40 yesterday. I run a certificate authority container with host network mode, and a Traefik container that is connected to a bridge network and an internal network that has all of the backend services connected for proxying through Traefik.

After the update, Traefik gets connection refused when trying to connect to the CA container on the host network.

@ctml91

ctml91 commented May 13, 2024

I am not sure if this is the exact same issue. Whether I use bridge or host networking, some containers are not accessible via the host IP, though they are via the container IP. For example, running nginx on 443 results in connection refused when using the host IP, but succeeds using the container IP, regardless of whether I use host networking or bridge with port mapping.

Now the interesting part: when I reboot the host, the problem switches to a different set of containers, which become inaccessible via the host IP, while nginx starts working. Each reboot seems to transfer the problem to a different container; I haven't figured out any pattern.

tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      6666/conmon
tcp        0      0 0.0.0.0:8006            0.0.0.0:*               LISTEN      6265/conmon

⬢[root@toolbox ~]# curl 192.168.1.150:8006
<success>
⬢[root@toolbox ~]# curl 192.168.1.150:443
curl: (7) Failed to connect to 192.168.1.150 port 443 after 0 ms: Couldn't connect to server

<reboot>
⬢[root@toolbox ~]# curl 192.168.1.150:443
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>
⬢[root@toolbox ~]# curl 192.168.1.150:8006
curl: (7) Failed to connect to 192.168.1.150 port 8006 after 0 ms: Couldn't connect to server

Could be a different issue though as I may have had the issue on podman 4.X before upgrading. No system firewall is enabled.

[root@fedora ~]# rpm-ostree status
State: idle
Deployments:
* fedora-iot:fedora/stable/x86_64/iot
                  Version: 40.20240509.0 (2024-05-09T10:34:54Z)
               BaseCommit: 64266e7b3362d4fe8c1e02303c7dbc7cab17f0778a92c4cbe745439243c4349e
             GPGSignature: Valid signature by 115DF9AEF857853EE8445D0A0727707EA15B79CC
          LayeredPackages: toolbox

  fedora-iot:fedora/stable/x86_64/iot
                  Version: 39.20231214.0 (2023-12-15T01:47:31Z)
               BaseCommit: 922061c2981d4cd8f6301542635aa5dba5b85474782c8edbc354ba5cc344fc27
             GPGSignature: Valid signature by E8F23996F23218640CB44CBE75CF5AC418B8E74C
          LayeredPackages: toolbox
[root@fedora ~]# podman -v
podman version 5.0.2

Edit: I should add containers are being run from root user / systemd.

@Luap99

Luap99 commented May 13, 2024

@Luap99 Are we aware of this one on the Pasta side, or is this new?

Yes, I know about this; these are really two separate issues.
First, using the default interface ip to connect to the host no longer works with pasta by default, because pasta uses the same ip inside the namespace, so the container cannot connect to it; see the pasta section here: https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/

Second, bridge as rootless adds the wrong host.containers.internal ip. I fixed this for --network pasta so that it never adds the same ip as pasta there. If there is no second host ip that could be used instead, it does not add the host entry at all, which should lead to a more meaningful error for users (name does not exist vs. ip is not what you expect); that part is tracked in #19213.
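
To illustrate the first point, the address overlap can be checked directly. This is a hypothetical sketch: the interface name eth0 and the alpine image are placeholders, and it assumes pasta's default behavior of copying the host address into the namespace:

```shell
# On the host: note the IPv4 address of the default interface.
ip -4 addr show eth0

# Inside a pasta-networked container the same address shows up,
# so connecting to that address does not leave the container's namespace.
podman run --rm --network=pasta docker.io/library/alpine ip -4 addr
```

If both commands print the same address, a connection from the container to that address targets the container itself, not the host.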

@sbrivio-rh

sbrivio-rh commented May 13, 2024

First using the default interface ip to connect to the host no longer works with pasta by default because pasta uses the same ip inside the namespace so the container cannot connect to that

Maybe the work in progress to make forwarding more flexible will make this less of a problem, as we'll probably be able to say stuff like "splice container socket to host socket connected to port 5000 and source 2001:fd8::1". But anyway, it's not necessarily going to be magic and intuitive, so let's consider the current situation, as it won't necessarily be very different in this regard.

There are pretty much five ways to connect to a service running on the host, with pasta:

  • pass --map-gw (already mentioned on this ticket: #22653 (comment)) and use the address of the default gateway
    • cons (yeah I'm a positive person):
      • counterintuitive (see https://www.reddit.com/r/podman/comments/1c46q54/comment/kztjos7/), but DNS could hide this
      • you can't actually connect to the default gateway, should you have any service running there (uncommon?)
      • maps all the ports while you might want just some, so it's perhaps not the best idea, security-wise
      • needlessly translates between Layer-2 and Layer-4 even for local connections, lower throughput than direct socket splicing
    • pros:
      • it's a single configuration option
      • traffic from the container doesn't look like local traffic
  • explicitly map ports using -T / --tcp-ns, and connect to localhost from container
    • cons:
      • you need to know which ports will be used beforehand
      • traffic from the container looks local (well, it is, but it shouldn't look like it, because otherwise it's not... contained). Think of reverse-CVE-2021-20199
    • pros:
      • very low overhead as data is directly spliced between sockets
      • only exposes required ports
  • use IPv6 and link-local addresses:
    • cons:
      • many users might be unfamiliar with IPv6 (note that it doesn't actually require public IPv6 connectivity)
      • needlessly translates between Layer-2 and Layer-4 even for local connections
    • pros:
      • crystal clear semantics: the address is local to the link
      • no configuration needed
  • assign a different address to the containers compared to default address copied from the host (implies NAT)
    • cons:
      • ...well, NAT
      • needlessly translates between Layer-2 and Layer-4 even for local connections
      • needs special, somewhat arbitrary configuration
    • pros:
      • it's like it used to be with slirp4netns (just more flexible)
      • the destination address is actually assigned to an interface on the host, so it should all make sense
  • use the address of another interface, or another address on the same host interface
    • cons:
    • pros:
      • maybe, security-wise, requiring root to set up an additional address that can be used to connect to the host is a good idea
      • it's still intuitive enough that quite a few folks seem to have figured it out already
      • should play nicely with DNS

I think it would help if we eventually pick one of these as the recommended solution.
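
For reference, the first two options roughly correspond to invocations like the following (a sketch, not a recommendation; the gateway address 10.1.26.1 and port 8091 are placeholders, and pasta options are passed through the --network=pasta:... syntax):

```shell
# Option 1: --map-gw maps the default gateway address to the host
# (all ports), so the container connects to the gateway IP.
podman run --rm --network=pasta:--map-gw docker.io/mwendler/wget 10.1.26.1:8091

# Option 2: -T/--tcp-ns splices outbound connections to port 8091
# directly to the corresponding host socket, so the container
# connects to localhost instead.
podman run --rm --network=pasta:-T,8091 docker.io/mwendler/wget 127.0.0.1:8091
```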

@coolbry95

coolbry95 commented May 14, 2024

I am going with "use the address of another interface, or another address on the same host interface", because I happen to have a second physical address. I am also using DNS for my containers, so I can continue to use DNS. It would be nice to explore other ways of achieving the same thing with this method. I managed to just add a static IPv6 address to the same interface podman/pasta is using; I just incremented the current IP by 1. I do not know if this is an OK thing to do. This was also just for testing purposes.

@Luap99 Luap99 added network Networking related issue or feature pasta pasta(1) bugs or features labels May 14, 2024
@Luap99

Luap99 commented May 14, 2024

use the address of another interface, or another address on the same host interface

That is what the code is supposed to do today when trying to create an ip for host.containers.internal; however, for the bridge network mode it does not currently work. I will try to fix this part.

However I think the actual issue is still #19213: we need a way to map an arbitrary ip in the netns to the host ip and then expose it as host.containers.internal to the container.

@sbrivio-rh

However I think the actual issue is still #19213: we need a way to map an arbitrary ip in the netns to the host ip and then expose it as host.containers.internal to the container.

...that is, just like --map-gw (or lack of --no-map-gw), but with an arbitrary address, right? That mapping doesn't change the source address to a loopback address (unlike -T). The source address would be the address of a local interface, but not loopback.

@Luap99

Luap99 commented May 14, 2024

However I think the actual issue is still #19213: we need a way to map an arbitrary ip in the netns to the host ip and then expose it as host.containers.internal to the container.

...that is, just like --map-gw (or lack of --no-map-gw), but with an arbitrary address, right? That mapping doesn't change the source address to a loopback address (unlike -T). The source address would be the address of a local interface, but not loopback.

Yes. It is important that the address is not localhost: it must be impossible for such a mapping to reach the host's localhost address; it must only work for services listening on the external interface.

@dimazest

dimazest commented May 17, 2024

I faced an issue with pasta and WireGuard on a Fedora CoreOS host when it updated to 40.

I have a pod that runs a container with a wg0 interface. The image I use is docker.io/procustodibus/wireguard.

This wg config works with both pasta and slirp4netns:

# Container interface.
[Interface]
...
# ListenPort is not set

[Peer]
...
Endpoint = wg.example.com:30104
PersistentKeepalive = 25

In this setup, the container connects to a peer and keeps the connection open. Both peers can ping each other.

This setup doesn't work with pasta:

# Container interface.
[Interface]
...
ListenPort = 34344

[Peer]
...
Endpoint = wg.example.com:30104

Port 34344 is published when I start a container.

With pasta there is no wg tunnel and peers can't ping each other. Switching to slirp4netns without changing anything else solves the issue.

@Luap99

Luap99 commented May 17, 2024

@dimazest I don't see how this is related to this issue. If you have a specific issue with your WireGuard config, please file a new one with steps to reproduce.

Luap99 added a commit to Luap99/libpod that referenced this issue May 17, 2024
We have to exclude the ips in the rootless netns as they are not the
host. Now that fix only works if there are more than one ip one the
host available, if there is only one we do not set the entry at all
which I consider better as failing to resolve this name is a much better
error for users than connecting to a wrong ip. It also matches what
--network pasta already does.

Fixes containers#22653

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Luap99 added a commit to Luap99/libpod that referenced this issue May 17, 2024
We have to exclude the ips in the rootless netns as they are not the
host. Now that fix only works if there is more than one ip on the
host available, if there is only one we do not set the entry at all
which I consider better as failing to resolve this name is a much better
error for users than connecting to a wrong ip. It also matches what
--network pasta already does.

The test is a bit more complicated than I would like; however, it must deal
with both cases (one ip, more than one), so there is no way around it I
think.

Fixes containers#22653

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
openshift-cherrypick-robot pushed a commit to openshift-cherrypick-robot/podman that referenced this issue May 20, 2024
@devurandom

devurandom commented May 23, 2024

@lazyzyf Maybe this helps: I am experiencing this issue with a service running under podman-compose, but found a workaround with some help from the comments above.

I start a HTTP server on the host with python -m http.server -b 0.0.0.0 9000.

I execute curl on the host:

❯ curl -vv http://192.168.[REDACTED]:9000/test
*   Trying 192.168.[REDACTED]:9000...
* Connected to 192.168.[REDACTED] (192.168.[REDACTED]) port 9000
> GET /test HTTP/1.1
> Host: 192.168.[REDACTED]:9000
> User-Agent: curl/8.6.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 404 File not found
< Server: SimpleHTTP/0.6 Python/3.12.3
< Date: Thu, 23 May 2024 13:28:55 GMT
< Connection: close
< Content-Type: text/html;charset=utf-8
< Content-Length: 335
<
<!DOCTYPE HTML>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>Error response</title>
    </head>
    <body>
        <h1>Error response</h1>
        <p>Error code: 404</p>
        <p>Message: File not found.</p>
        <p>Error code explanation: 404 - Nothing matches the given URI.</p>
    </body>
</html>
* Closing connection

I execute curl in a container in the compose environment:

curl -vv http://host.containers.internal:9000/test
*   Trying 192.168.[REDACTED]:9000...
* connect to 192.168.[REDACTED] port 9000 failed: Connection refused
* Failed to connect to host.containers.internal port 9000 after 0 ms: Couldn't connect to server
* Closing connection 0
curl: (7) Failed to connect to host.containers.internal port 9000 after 0 ms: Couldn't connect to server

192.168.[REDACTED] is identical to the (only) inet address of the host's primary network interface (cf. the output of ip address).

I set the following in ~/.config/containers/containers.conf:

[network]
default_rootless_network_cmd = "slirp4netns"

See https://blog.podman.io/2024/03/podman-5-0-breaking-changes-in-detail/ section "Pasta default for rootless networking".

After podman-compose down and podman-compose up I can connect to the host from the container:

curl -vv http://host.containers.internal:9000/test
*   Trying 192.168.[REDACTED]:9000...
* Connected to host.containers.internal (192.168.[REDACTED]) port 9000 (#0)
> GET /test HTTP/1.1
> Host: host.containers.internal:9000
> User-Agent: curl/7.88.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 404 File not found
< Server: SimpleHTTP/0.6 Python/3.12.3
< Date: Thu, 23 May 2024 13:51:44 GMT
< Connection: close
< Content-Type: text/html;charset=utf-8
< Content-Length: 335
<
<!DOCTYPE HTML>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <title>Error response</title>
    </head>
    <body>
        <h1>Error response</h1>
        <p>Error code: 404</p>
        <p>Message: File not found.</p>
        <p>Error code explanation: 404 - Nothing matches the given URI.</p>
    </body>
</html>
* Closing connection 0

Instead of reverting to slirp4netns, setting the following in ~/.config/containers/containers.conf also works, as mentioned in the article linked above:

[network]
pasta_options = ["--address", "10.0.2.0", "--netmask", "24", "--gateway", "10.0.2.2", "--dns-forward", "10.0.2.3"]

This appears to work independently of the IP address and network mask used by the container.

My system:

❯ grep PLATFORM /etc/os-release
PLATFORM_ID="platform:f40"

❯ podman-compose --version
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 5.0.3
podman-compose version 1.0.6
podman --version
podman version 5.0.3
exit code: 0

I came here from #22724.
