
Alternate port_handler that keeps the source ip for user-defined rootless networks #8193

Open
usury opened this issue Oct 29, 2020 · 33 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. network Networking related issue or feature pasta pasta(1) bugs or features rootless slirp4netns Bug is in slirp4netns


usury commented Oct 29, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Alternate "port_handler=slirt4netns" provided in PR 6965 (#6965) for "only 127.0.0.1 within containers" not implemented for user-defined rootless cni networks

  • specifying "port_handler=slirp4netns" when connecting a container to an already-created user-defined rootless CNI network does NOT error out and does NOT implement the behavior provided in PR 6965
  • as a result, it is not possible to see the accurate remote address within containers connected to a user-defined rootless CNI network (only "127.0.0.1")

Steps to reproduce the issue:

normaluser@containerhost $> podman network create myCNI

normaluser@containerhost $> podman run --name myNginx1 --publish 8081:80 --network=myCNI -d nginx:alpine
normaluser@containerhost $> podman run --name myNginx2 --publish 8082:80 --network=myCNI:port_handler=slirp4netns -d nginx:alpine
normaluser@containerhost $> podman run --name myNginx3 --publish 8083:80 --network=slirp4netns:port_handler=slirp4netns -d nginx:alpine

normaluser@containerhost $>  podman inspect myNginx1 | grep -i ipaddress
                "IPAddress": "10.89.0.2",
normaluser@containerhost $>  podman inspect myNginx2 | grep -i ipaddress
                "IPAddress": "",
normaluser@containerhost $>  podman inspect myNginx3 | grep -i ipaddress
                "IPAddress": "",

normaluser@containerhost $> podman exec myNginx1 ifconfig
                ## shows loopback and **eth0**, as expected
normaluser@containerhost $> podman exec myNginx2 ifconfig
                ## absolutely blank, not expected
normaluser@containerhost $> podman exec myNginx3 ifconfig
                ## shows loopback and **tap0**, as expected

otherhost $> curl --head http://_containerhost_:8081
otherhost $> curl --head http://_containerhost_:8082
        curl: (7) Failed to connect to _containerhost_ port 8082: Connection refused
otherhost $> curl --head http://_containerhost_:8083

normaluser@containerhost $> podman logs myNginx1 | grep HEAD
        127.0.0.1 - - [29/Oct/2020:15:51:25 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"
normaluser@containerhost $> podman logs myNginx2 | grep HEAD
        ## nothing to see, no connection succeeded
normaluser@containerhost $> podman logs myNginx3 | grep HEAD
        192.168.0.116 - - [29/Oct/2020:15:51:31 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"

Describe the results you received:
"myNginx1" container has only 127.0.0.1 as remote address (confusing), though explained in PR 6965
"myNginx2" container launches without error however port not opened and no ip address assigned
"myNginx3" container - no problem - everything makes sense

Describe the results you expected:
"myNginx1:" expected to see correct remote address wihin container (though explainded in PR 6965)
"myNginx2:" expected successful ip assignment and port to open using "port_handler=slirp4netns", or contaier launch to fail
"myNginx3" container - no problem - everything makes sense

Additional information you deem important (e.g. issue happens only occasionally):
happens consistently and reproducibly

Output of podman version:

        Version:      2.1.1
        API Version:  2.0.0
        Go Version:   go1.14
        Built:        Wed Dec 31 16:00:00 1969
        OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: '
  cpus: 4
  distribution:
    distribution: debian
    version: "10"
  eventLogger: journald
  hostname: arachne
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 2000
      size: 1
    - container_id: 1
      host_id: 100001
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 2000
      size: 1
    - container_id: 1
      host_id: 100001
      size: 65536
  kernel: 4.19.0-10-amd64
  linkmode: dynamic
  memFree: 369057792
  memTotal: 2091732992
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0~rc6+dfsg1
      commit: 1.0.0~rc6+dfsg1-3
      spec: 1.0.1
  os: linux
  remoteSocket:
    path: /run/user/2000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.4
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
  swapFree: 746057728
  swapTotal: 1073737728
  uptime: 184h 14m 26.29s (Approximately 7.67 days)
registries:
  search:
  - docker.io
  - quay.io
version:
  APIVersion: 2.0.0
  Built: 0
  BuiltTime: Wed Dec 31 16:00:00 1969
  GitCommit: ""
  GoVersion: go1.14
  OsArch: linux/amd64
  Version: 2.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown,now 2.1.1~2 amd64 [installed]
podman/unknown 2.1.1~2 arm64
podman/unknown 2.1.1~2 armhf
podman/unknown 2.1.1~2 ppc64el

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
Container host is a VirtualBox VM running on Fedora 32
podman packages installed from OpenSuse repo
$> cat "/etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"

## "Kubic" repo from "OpenSuse" for "podman" packages since they aren't in Debian 10 (buster) repos
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Debian_10/ /

$> cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"

openshift-ci-robot added the kind/bug label Oct 29, 2020
usury (Author) commented Oct 29, 2020

Since the "port_handler=slirp4netns" solution from PR 6965 was implemented to address the "only 127.0.0.1" problem within rootless containers ("port_handler=rootlesskit" by default), and since that solution is not implemented for user-defined rooteless cni networks, perhaps revisiting the "only 127.0.0.1" problem with "port_handler=rootlesskit" is in order before simply extending the solution from PR 6965 to user-defined rootless cni networks.

Furthermore, the default behavior of "only 127.0.0.1" within rootless containers, regardless of user-defined rootless cni networking, is confusing for users who are new to containers/podman but not new to webserver/application configuration, like myself.

  • For hours, I chased what I thought was a self-inflicted configuration problem brought about by my own container/podman ignorance. Instead, "127.0.0.1" in my container logs was a consequence of an implementation decision for rootless podman to use "port_handler=rootlesskit" by default, not my inexperience with containers/podman.

Additionally, the default behavior of "only 127.0.0.1" within rootless containers, regardless of user-defined rootless CNI networking, breaks potential remote-address-based authentication mechanisms a developer could devise for rootless container services.

Background as I know it
PR 6965 provided a mechanism for specifying "port_handler=[rootlesskit|slirp4netns]" as an extension to the "--network" command-line arg when launching a rootless container. That option provides a work-around for the issue of seeing only "127.0.0.1" as the remote address for requests reaching rootless containers via published host ports.

PR 6965 allows users to specify an alternate mechanism for rootless port binding which successfully preserves the remote address for packets reaching rootless containers.

PR 6965 did not fix the "only 127.0.0.1" issue stemming from the use of "port_handler=rootlesskit". Instead, PR 6965 made it possible to select "port_handler=slirp4netns" which does not exhibit the "only 127.0.0.1" behavior of "port_handler=rootlesskit" (the default).
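
Concretely, the two forms look like this ("port_handler=rootlesskit" is the default; the same syntax appears in the repro steps above):

    ## default rootless forwarder: fast, but the container sees only 127.0.0.1 as the remote address
    podman run --publish 8080:80 --network=slirp4netns:port_handler=rootlesskit -d nginx:alpine
    ## alternate forwarder from PR 6965: preserves the real remote address
    podman run --publish 8080:80 --network=slirp4netns:port_handler=slirp4netns -d nginx:alpine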

Since the alternate "port_handler" work-around for "only 127.0.0.1" provided in PR 6965 is not implemented for user-defined rootless cni networks, perhaps a more general or clearer way to handle "rootlesskit" vs "slirp4netns" port_handler is required.

That clearer way to handle the "rootlesskit" vs "slirp4netns" port_handler ought to support "slirp4netns" network options when creating and/or connecting to user-defined rootless CNI networks (whichever is more appropriate) to address the issues identified at the top of this post.

There may be future options required for current/future underlying network subsystems. A non-convoluted mechanism to easily support such subsystems/options could be useful or help to avoid some future headaches.

  Idea ONLY - not an actual working mechanism
  A separate "--netopt" commandline arg supported by "podman run" and/or "podman network create"
  I read a similar suggestion when learning about PR 6965

    ## NO user-defined rootless cni network
    ## specify --netopt when CONTAINER is created
    ## container uses default network and "port_handler=slirp4netns"
    "podman run --name myApp0 --netopt port_handler=slirp4netns"

    ## WITH user-defined rootless cni network
    ## specify --netopt when NETWORK is created
    ## container automatically uses "port_handler=slirp4netns" by virtue of joining "myCNI1"
    ## !! There may be very good reasons preventing this per-NETWORK approach !!
    ## !! Maybe each container on a network needs/should support an independent "port_handler" !!
    "podman network create --name myCNI1 --netopt port_handler=slirp4netns"
    "podman run --name myApp1 --network myCNI1"

    ## WITH user-defined rootless cni network
    ## specify --netopt when CONTAINER is created
    ## container uses "myCNI2" network and "port_handler=slirp4netns"
    ## !! There may be very good reasons preventing this per-CONTAINER approach when joining a network !!
    ## !! Maybe all containers on a network need the same "port_handler" !!
    "podman network create --name myCNI2"
    "podman run --name myApp2 --network myCNI2 --netopt port_handler=slirp4netns"


***** User-defined rootless network limitations *****


## Setup
    normaluser@containerhost $> podman rm -fa       # new set of examples
    normaluser@containerhost $> podman network create myCNI
    normaluser@containerhost $> podman run --name myNginx1 --publish 8081:80 --network=myCNI -d nginx:alpine
    normaluser@containerhost $> podman run --name myNginx2 --publish 8082:80 --network=myCNI:port_handler=slirp4netns -d nginx:alpine

## only "myNginx1" (port 8081) succeeded in opening a port
## Listening Ports
    normaluser@containerhost $> netstat -tnlp | grep 808[0-9]
    ## -t tcp ports, -n numeric addresses, -l listening, -p process
        tcp6       0      0 :::8081                 :::*                    LISTEN      26135/containers-ro 

## still, both containers started successfully (expected "myNginx2" to fail since it did not actually open port 8082)
## Containers
    normaluser@containerhost $> podman ps
        CONTAINER ID  IMAGE                             COMMAND               CREATED             STATUS                 PORTS                 NAMES
        844616df3c71  docker.io/library/nginx:alpine    nginx -g daemon o...  About a minute ago  Up About a minute ago  0.0.0.0:8082->80/tcp  myNginx2
        964801ccf46d  docker.io/library/nginx:alpine    nginx -g daemon o...  About a minute ago  Up About a minute ago  0.0.0.0:8081->80/tcp  myNginx1
        0aaa632fe8cc  quay.io/libpod/rootless-cni-infra sleep infinity        About a minute ago  Up About a minute ago                        rootless-cni-infra

## "127.0.0.1" behavior consistent with examples to follow
    normaluser@otherhost     $> curl --head http://containerhost:8081
    normaluser@containerhost $> curl --head http://containerhost:8081
    normaluser@containerhost $> podman logs myNginx1 | grep HEAD
        127.0.0.1 - - [29/Oct/2020:13:12:32 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"
        127.0.0.1 - - [29/Oct/2020:13:12:50 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"

## cannot connect on port 8082 (it was not successfully published - as confirmed by netstat, above)
    normaluser@otherhost     $> curl --head http://containerhost:8082
    normaluser@containerhost $> curl --head http://containerhost:8082
        curl: (7) Failed to connect to 192.168.0.22 port 8082: Connection refused
    normaluser@containerhost $> podman logs myNginx2 | grep HEAD
        <nothing to see>

    normaluser@containerhost $> podman inspect myNginx1 | grep -i ipaddress
                        "IPAddress": "10.89.0.2",
    normaluser@containerhost $> podman inspect myNginx2 | grep -i ipaddress
                "IPAddress": "",

    normaluser@containerhost $> podman exec myNginx2 ifconfig
        ## absolutely blank, not even loopback - not expected
    normaluser@containerhost $> podman exec myNginx2 ip addr
        1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1000

    normaluser@containerhost $> podman exec myNginx2 netstat -tln
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      
        tcp        0      0 :::80                   :::*                    LISTEN      

Oddly:
"myNginx2" launches without error but does not get an ip address on "myCNI" (but "myNginx1" does as expected)
"myNginx2" launches without error but does not open a port
"myNginx2" thinks it is listening internally on its port 80 even though ifconfig within container reports nothing
expected "podman run --name myNginx2 ..." to either
- fail for not opening the port,
- fail for not having an ip address
- successfully receive ip address and open the port using "port_handlier=slirp4netns" network option
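
(A quick cross-check, not part of the original run: compare the port mapping podman recorded with what is actually bound on the host.)

    normaluser@containerhost $> podman port myNginx2
    ## expected to report the 80/tcp -> 8082 mapping podman recorded,
    ## even though netstat on the host (above) shows nothing listening on 8082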

@usury
Copy link
Author

usury commented Oct 29, 2020


***** potentially informative "port_handler=rootlesskit" behavior *****


Unexpected IP6 address binding behavior of "port_handler=rootlesskit"

Alternate "port_handler" provided in PR 6965 helps to reveal ...

  • rootless "podman run" with "--network=slirp4netns:port_handler=rootlesskit" (default)
    -- binds to the IPv6 wildcard address (::) unless a non-zero IP4 address is specified in the "--publish" arg
    --- container sees only "127.0.0.1" as the remote address (regardless of the actual remote address)
    -- binds to the requested non-zero IP4 address when specified in the "--publish" arg
    --- container still sees only "127.0.0.1" as the remote address (regardless of the actual remote address)

  • rootless "podman run" with "--network=slirp4netns:port_handler=slirp4netns"
    -- binds to IP4 address
    -- container sees correct remote address
    -- these are good things

## Setup
    normaluser@containerhost $> podman rm -fa       # new set of examples
    normaluser@containerhost $> podman run --name myNginx1 --publish 8081:80 -d nginx:alpine                # no IP addr
    normaluser@containerhost $> podman run --name myNginx2 --publish 0.0.0.0:8082:80 -d nginx:alpine        # "any" IP4 addr
    normaluser@containerhost $> podman run --name myNginx3 --publish 192.168.0.22:8083:80 -d nginx:alpine   # container host IP4 addr
    normaluser@containerhost $> podman run --name myNginx4 --publish 8084:80 --network=slirp4netns:port_handler=slirp4netns -d nginx:alpine     # port_handler=slirp4netns

## Container logs always show "127.0.0.1" via "port_handler=rootlesskit" (default) "--publish 8081:80" (no ip addr specified)
    normaluser@otherhost     $> curl --head http://containerhost:8081
    normaluser@containerhost $> curl --head http://containerhost:8081
    normaluser@containerhost $> podman logs myNginx1 | grep HEAD
        127.0.0.1 - - [29/Oct/2020:10:18:25 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"
        127.0.0.1 - - [29/Oct/2020:10:18:45 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"

## Container logs always show "127.0.0.1" via "port_handler=rootlesskit" (default) "--publish 0.0.0.0:8082:80" (the "any" ip4 addr)
    normaluser@otherhost     $> curl --head http://containerhost:8082
    normaluser@containerhost $> curl --head http://containerhost:8082
    normaluser@containerhost $> podman logs myNginx2 | grep HEAD
        127.0.0.1 - - [29/Oct/2020:10:18:26 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"
        127.0.0.1 - - [29/Oct/2020:10:18:46 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"

## Container logs always show "127.0.0.1" via "port_handler=rootlesskit" (default) "--publish 192.168.0.22:8083:80" (container host ip4 addr)
    normaluser@otherhost     $> curl --head http://containerhost:8083
    normaluser@containerhost $> curl --head http://containerhost:8083
    normaluser@containerhost $> podman logs myNginx3 | grep HEAD
        127.0.0.1 - - [29/Oct/2020:10:18:27 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"
        127.0.0.1 - - [29/Oct/2020:10:18:47 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"

## Correct remote address via "port_handler=slirp4netns"
    normaluser@otherhost     $> curl --head http://containerhost:8084
    normaluser@containerhost $> curl --head http://containerhost:8084
    normaluser@containerhost $> podman logs myNginx4
        192.168.0.116 - - [29/Oct/2020:10:18:30 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.69.1" "-"
        192.168.0.22 - - [29/Oct/2020:10:18:52 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"

## Listening Ports
    normaluser@containerhost $> netstat -tnlp | grep 808[0-9]
    ## -t tcp ports, -n numeric addresses, -l listening, -p process
        tcp        0      0 192.168.0.22:8083       0.0.0.0:*               LISTEN      26387/containers-ro 
        tcp        0      0 0.0.0.0:8084            0.0.0.0:*               LISTEN      26470/slirp4netns   
        tcp6       0      0 :::8081                 :::*                    LISTEN      26135/containers-ro 
        tcp6       0      0 :::8082                 :::*                    LISTEN      26221/containers-ro 

Rather Odd Observations:

  • When no IP address, or "0.0.0.0", is specified in the "--publish" arg, "port_handler=rootlesskit" (default) binds to the IPv6 wildcard address (the sockets on ports 8081 and 8082)
  • When a non-zero IP4 address is specified in the "--publish" arg (the container host's address in the examples above), "port_handler=rootlesskit" (default) binds to the specified IP4 address (the socket on port 8083)
  • Regardless of IP4 vs IP6 address binding, with "port_handler=rootlesskit" only 127.0.0.1 is visible within the container
  • The container using "port_handler=slirp4netns" binds to the "any" IP4 address (the socket on port 8084)

I do not know what underlying IPv4/IPv6 Linux mechanism allows connections like "curl http://<IP4 address>:8081" (and 8082) to reach the IPv6 sockets on ports 8081 (and 8082) that "rootlesskit" opened. Perhaps that is an important mechanism in the behavior observed.
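
(The mechanism is most likely the kernel's dual-stack socket behavior: a socket bound to the IPv6 wildcard "::" also accepts IPv4 connections, delivered as IPv4-mapped IPv6 addresses such as ::ffff:192.168.0.22, unless IPV6_V6ONLY is set on the socket. A quick check of the system-wide default, assuming a stock kernel:)

    $> sysctl net.ipv6.bindv6only
    ## 0 (the default) means IPv6 wildcard sockets are dual-stack and also accept IPv4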

Still, the IP4 vs IP6 address bindings provide a clue regarding why "127.0.0.1" is the source address within containers launched with "port_handler=rootlesskit" (default) network option

Perhaps "rootlesskit" must recreate each packet it receives?

  • When bound to a host IP6 address, does "rootlesskit" recreate each packet for the container's destination IP4 address?
  • If that is the case, it makes sense for the source address to be "127.0.0.1", because the packet reaching the container was generated on localhost (by "rootlesskit") from the IP6 packets it somehow received
    -- still, it is unhelpful to see "127.0.0.1" within the destination container by default
  • However, that does not explain the "myNginx3" example, which is bound to IP4 192.168.0.22:8083
    -- only "127.0.0.1" within that container, as well

Work-arounds on other forums suggest disabling IPv6 on the container host. A better approach may be to actually have "rootlesskit" bind to the container host IPv4 address, assuming that is actually part of the problem. There still remains the problem of seeing only "127.0.0.1" regardless of IPv4 vs IPv6 address binding (the "myNginx2" and "myNginx3" examples).
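
(For reference, that forum work-around amounts to something like the following; a blunt instrument that affects the whole host, and untested here:)

    ## disable IPv6 system-wide on the container host (requires root)
    $> sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1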

rhatdan (Member) commented Oct 31, 2020

@AkihiroSuda WDYT?

AkihiroSuda (Collaborator) commented:

> --netopt

SGTM

> Perhaps "rootlesskit" must recreate each packet it receives?

If we do that, it will be slowed down as in slirp4netns.

Probably we can just spoof the srcIP using an IP_TRANSPARENT socket, but I'm not familiar with IP_TRANSPARENT.

usury (Author) commented Nov 1, 2020

> SGTM
>
> > Perhaps "rootlesskit" must recreate each packet it receives?
>
> If we do that, it will be slowed down as in slirp4netns.

My apologies for the confusion - it wasn't a suggestion for rootlesskit to recreate packets. It was a hypothesis as to why 127.0.0.1 is the remote address within a container when the packet has passed through "rootlesskit". I should have said "Perhaps rootlesskit recreates each packet it receives?" The subpoints under that statement in the original comment above make more sense in that clarified context.

usury (Author) commented Nov 13, 2020

I know I muddied the topic by hypothesizing. However, the initial bug remains. It is not possible to specify "port_handler=slirp4netns" for user-defined rootless CNI networks, which means it is not currently possible to determine the correct remote address for containers using such a network. The step-by-step section at the beginning of this thread describes how to reproduce the behavior.

AkihiroSuda added the kind/feature, rootless, and slirp4netns labels and removed the kind/bug label Nov 13, 2020
github-actions commented:

A friendly reminder that this issue had no activity for 30 days.

rhatdan (Member) commented Dec 15, 2020

I take it this is still an issue.

mheon (Member) commented Dec 15, 2020

Yes. Did we ever add the required fields to containers.conf? I can't recall us doing so.

rhatdan (Member) commented Dec 15, 2020

I did not.

daiaji commented Jan 10, 2021

When I used v2ray and tproxy in podman, I encountered a similar problem.

2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/tcp: listening TCP on 127.0.0.1:15490
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/tcp: listening TCP on 127.0.0.1:12345
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/udp: listening UDP on 127.0.0.1:12345
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/tcp: listening TCP on [::1]:12345
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 1
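
(Side note: per ip(7), setting IP_TRANSPARENT requires CAP_NET_ADMIN in the socket's network namespace, so the "operation not permitted" above may simply reflect the missing capability. A hedged, untested sketch; whether this suffices inside a rootless user namespace is unclear:)

    ## hypothetical: grant NET_ADMIN to the container (image placeholder below)
    $> podman run --cap-add NET_ADMIN ... <v2ray image> ...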

mheon (Member) commented Jan 10, 2021 via email

daiaji commented Jan 11, 2021

@mheon ok


HidingCherry commented:
As a friendly reminder - this would be a useful feature.
Is this on some short- or long-term TODO list?


vrothberg (Member) commented:
@Luap99 shall we move the issue to netavark?

Luap99 (Member) commented Jul 13, 2022

No, this is a podman issue.

umohnani8 added the Good First Issue and priority/medium labels Jul 13, 2022

MatsG23 commented Jan 14, 2023

Is there a way to use port_handler=slirp4netns and still connect to a different container somehow, or is that blocked due to this issue?

E314c commented Mar 7, 2023

Is there any priority assigned to looking into or resolving this?
This issue means I'm going to have to change some hosts to run docker so that the applications on top of them will run correctly.

My scenario:
I'm running a docker-compose setup with a few containers, one of which needs the source_address of the packets it receives to be the valid external IP address.

I came across #5138 (comment), specifically stating that this was intentional and for "speed".
Reading through that and following various links, I found that a workaround is in place:
CLI flag: --network slirp4netns:port_handler=slirp4netns
or containers.conf:

[engine]
network_cmd_options = ['port_handler=slirp4netns']
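
(A quick way to check which forwarder actually took a published port; slirp4netns shows up by name in the process column, while the rootlesskit forwarder appears as the truncated "containers-ro..." entry, as in the netstat output earlier in this thread:)

    $> ss -tnlp | grep 8080    ## adjust to whichever host port you published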

Testing it, I saw it works when running individual containers, but I didn't immediately recognise that this still wouldn't work with docker-compose, as compose creates a new network, and "port_handler=slirp4netns (...) cannot be used for user-defined networks".

My use case doesn't need the 7Gbps -> 28 Gbps increase linked as the reason for moving away from slirp4netns.

Luap99 (Member) commented Mar 7, 2023

I am planning to fix this soon, but not with slirp4netns.
I'd like to replace slirp4netns with pasta for the rootless netns; the pasta port forwarding correctly handles the source IP, so it should work. I will add a config option so users/distros can select between pasta and slirp.

E314c commented Mar 7, 2023

Sounds good, but to clarify: the current rootless configuration doesn't use slirp4netns; it defaults to rootlesskit. So are you actually intending to replace rootlesskit with pasta?

Luap99 (Member) commented Mar 7, 2023

> Sounds good, but to clarify: the current rootless configuration doesn't use slirp4netns; it defaults to rootlesskit. So are you actually intending to replace rootlesskit with pasta?

No, pasta is a slirp4netns replacement. The rootlesskit forwarder is not needed with pasta because its native port forwarding is already very fast and also keeps the correct source IP.
Podman v4.4 already supports pasta with --network pasta if you want to test it out.
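
For example, a minimal smoke test (a sketch, assuming Podman >= 4.4 with pasta/passt installed; container name and ports are arbitrary):

    $> podman run --name myNginxPasta --network pasta --publish 8085:80 -d nginx:alpine
    $> curl --head http://containerhost:8085      ## from another host
    $> podman logs myNginxPasta | grep HEAD
    ## the log line should show the real client IP rather than 127.0.0.1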

sbrivio-rh (Collaborator) commented:

@Luap99 as far as I remember this was complicated by the fact that pasta doesn't support multiple containers as source/destination for port forwarding entries. That's still work in progress.

However, automatic UDP port forwarding with periodic port scanning, which as far as I understand should make this doable (albeit not elegantly), is now implemented and available in passt version 2023_11_19.4f1709d, as well as Fedora's passt-0^20231119.g4f1709d-1.fc39 and passt-0^20231119.g4f1709d-1.fc40. I still have to prepare a Debian package update.

bingoct commented Apr 4, 2024

Has there been any progress on this issue recently? In pasta mode, the container can indeed see the expected remote IP.

However, it seems that the issue still exists because the IPAddress field of the container is empty.

podman inspect traefik | grep IPAddress
          "IPAddress": "",

This is disastrous for traefik because its service auto-discovery relies on this field.

time="2024-04-04T08:49:19Z" level=error msg="service \"traefik-traefik\" error: unable to find the IP address for the container \"/traefik\": the server is ignored"
podman version
Client:       Podman Engine
Version:      4.9.3
API Version:  4.9.3
Go Version:   go1.21.6
Built:        Thu Jan  1 08:00:00 1970
OS/Arch:      linux/amd64

Luap99 (Member) commented Apr 4, 2024

> Has there been any progress on this issue recently? In pasta mode, the container can indeed see the expected remote IP.
>
> However, it seems that the issue still exists because the IPAddress field of the container is empty.
>
>     podman inspect traefik | grep IPAddress
>               "IPAddress": "",
>
> This is disastrous for traefik because its service auto-discovery relies on this field.
>
>     time="2024-04-04T08:49:19Z" level=error msg="service \"traefik-traefik\" error: unable to find the IP address for the container \"/traefik\": the server is ignored"

This is not related to this issue; it makes zero sense to ever add the IP address to podman inspect for the slirp4netns or pasta mode, because that IP will not be reachable from the host or other containers.

You would need --network bridge for that, and this is what this issue is about: in bridge mode we always use the rootlessport forwarder, even with podman 5.0 and pasta. Doing proper port forwarding via pasta is planned but much more involved and complicated to implement.
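
In other words, a sketch of the current state (not a fix; image and port arbitrary):

    ## bridge mode today: published ports still go through rootlessport,
    ## so the container sees 127.0.0.1 as the remote address
    $> podman run --network bridge --publish 8080:80 -d nginx:alpine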

urbenlegend commented:

I just ran into this issue while migrating my rootful Nextcloud and Nginx Proxy Manager containers over to rootless. This bug basically breaks reverse proxies: Nginx Proxy Manager can't tell Nextcloud where requests are actually coming from, which means Nextcloud can't do things like brute-force protection based on remote IPs.

I've been wracking my brain and googling all over on how to use the slirp4netns port handler with docker-compose files and I honestly can't figure it out.

If I define a network within my docker-compose file to contain Nextcloud and NPM, does this mean that I cannot use the slirp4netns port handler at all? Is there really no way around this? If that's the case, I think I might have to revert back to rootful, which is a shame.

Whatever solution ultimately comes up, I hope it's compatible with docker-compose.

Luap99 (Member) commented Apr 22, 2024

Well, you can set network_mode: pasta or network_mode: slirp4netns:port_handler=slirp4netns, but this is incompatible with named (user-defined) networks, so it is an either-or situation for now.

Fixing this is not trivial at all; sure, I'd love to implement this, but given other priorities I can't promise any timeline.
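
In a compose file, that would look something like this (a sketch with a hypothetical service; it assumes no user-defined networks are declared, which is exactly the limitation this issue tracks):

    services:
      web:
        image: nginx:alpine
        network_mode: pasta
        ports:
          - "8080:80"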

Luap99 removed the Good First Issue label Apr 22, 2024
Luap99 changed the title from "Alternate "port_handler=slirp4netns" not implemented for user-defined rootless cni networks" to "Alternate port_handler that keeps the source ip for user-defined rootless networks" Apr 22, 2024