unable to use a pure IPv6 server in a rootless container #14709

Open
Hendrik-H opened this issue Jun 23, 2022 · 9 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. network Networking related issue or feature

Comments

@Hendrik-H

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

Currently all traffic is forwarded into a rootless container through the rootlessport proxy. The proxy forwards all traffic to the container's IPv4 address if it has one and only picks the IPv6 address if there is no IPv4 address. The relevant code is:

func getRootlessPortChildIP(c *Container, netStatus map[string]types.StatusBlock) string {
    if c.config.NetMode.IsSlirp4netns() {
        slirp4netnsIP, err := GetSlirp4netnsIP(c.slirp4netnsSubnet)
        if err != nil {
            return ""
        }
        return slirp4netnsIP.String()
    }
    var ipv6 net.IP
    for _, status := range netStatus {
        for _, netInt := range status.Interfaces {
            for _, netAddress := range netInt.Subnets {
                ipv4 := netAddress.IPNet.IP.To4()
                if ipv4 != nil {
                    return ipv4.String()
                }
                ipv6 = netAddress.IPNet.IP
            }
        }
    }
    if ipv6 != nil {
        return ipv6.String()
    }
    return ""
}

So when I start an IPv6-only service, for example a Go listener on tcp6, it does not receive any traffic. I would expect traffic that reaches the host on an IPv4 address to be forwarded to the container's IPv4 address, and traffic that reaches the host on an IPv6 address to be forwarded to the container's IPv6 address. This would also match what happens in a rootful container.
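
For reference, a minimal sketch of such a tcp6-only service in Go (the port matches the published 8080 below; the response text is just illustrative):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // "tcp6" restricts the listener to IPv6; with "tcp" it would accept both families.
        ln, err := net.Listen("tcp6", "[::]:8080")
        if err != nil {
            panic(err)
        }
        defer ln.Close()
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            fmt.Fprintln(conn, "hello over IPv6")
            conn.Close()
        }
    }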

Steps to reproduce the issue:

  1. create a container image that contains a service that only listens on :: and therefore only accepts IPv6 requests

  2. start the container in rootless mode: podman run -it --rm --name test -p 8080:8080 test-container

  3. try to connect to the container using IPv4 and IPv6 (a small client check is sketched after this list)
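
A small Go client for step 3 (the loopback addresses and the 2-second timeout are illustrative assumptions; curl against 127.0.0.1:8080 and [::1]:8080 is an equivalent check):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        probes := []struct{ network, addr string }{
            {"tcp4", "127.0.0.1:8080"},
            {"tcp6", "[::1]:8080"},
        }
        for _, p := range probes {
            // Dial each address family separately to see which one reaches the container.
            conn, err := net.DialTimeout(p.network, p.addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s %s: failed (%v)\n", p.network, p.addr, err)
                continue
            }
            fmt.Printf("%s %s: connected\n", p.network, p.addr)
            conn.Close()
        }
    }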

Describe the results you received:
When the service in the container only listens on tcp6, no communication is possible. However, when the service is changed to listen on tcp4, requests via both IPv4 and IPv6 work.

Describe the results you expected:
I would expect IPv4 requests to be forwarded as IPv4 and IPv6 requests as IPv6, basically as happens in rootful mode. So when I start the service using tcp6, only an IPv6 connection should work; if I start it with tcp4, only IPv4 should work; and when I start it with tcp, both should work.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.1.0
API Version:  4.1.0
Go Version:   go1.18.2
Built:        Mon May 30 16:03:28 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.26.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpuUtilization:
    idlePercent: 99.93
    systemPercent: 0.04
    userPercent: 0.03
  cpus: 2
  distribution:
    distribution: fedora
    variant: cloud
    version: "36"
  eventLogger: journald
  hostname: server-1.pok.stglabs.ibm.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.17.1-300.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 2260959232
  memTotal: 4109443072
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.5-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.5
      commit: c381048530aa750495cf502ddb7181f2ded5b400
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 4109365248
  swapTotal: 4109365248
  uptime: 194h 50m 21.28s (Approximately 8.08 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/fedora/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/fedora/.local/share/containers/storage
  graphRootAllocated: 42314215424
  graphRootUsed: 1022480384
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  volumePath: /home/fedora/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.0
  Built: 1653926608
  BuiltTime: Mon May 30 16:03:28 2022
  GitCommit: ""
  GoVersion: go1.18.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.0

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.1.0-8.fc36.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

No, but I asked on the mailing list.

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 23, 2022
@Luap99 Luap99 added the network Networking related issue or feature label Jun 23, 2022
@Luap99
Member

Luap99 commented Jun 23, 2022

The mailing list post: https://lists.podman.io/archives/list/podman@lists.podman.io/thread/A7LNHHG24IRR7EHEI4TPBNE3LG6JKE4F/

I think this makes sense; in theory it is not complicated, but there is a corner case:
What if there is no IPv6 address in the container? Should we still send IPv6 connections to the IPv4 port (and vice versa, when there is no IPv4 address in the container)?
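
Just to illustrate the idea (this is not the actual rootlessport code; the helper and its input are made-up for the sketch), a family-preserving setup would need one child IP per family, and the open question is what to do when one of them comes back nil:

    package main

    import (
        "fmt"
        "net"
    )

    // childIPs picks the first IPv4 and the first IPv6 address from the container's
    // addresses. Either result can be nil when that family is missing, which is
    // exactly the corner case above: fall back to the other family, or refuse?
    func childIPs(addrs []net.IP) (ipv4, ipv6 net.IP) {
        for _, ip := range addrs {
            if v4 := ip.To4(); v4 != nil {
                if ipv4 == nil {
                    ipv4 = v4
                }
            } else if ipv6 == nil {
                ipv6 = ip
            }
        }
        return ipv4, ipv6
    }

    func main() {
        // Example addresses are made up; a container could have both families assigned.
        v4, v6 := childIPs([]net.IP{net.ParseIP("10.88.0.5"), net.ParseIP("fd00::5")})
        fmt.Println("IPv4 child:", v4, "IPv6 child:", v6)
    }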

@Hendrik-H
Author

In my opinion you should not, just as this does not happen in a rootful container. But one could of course add a couple of options to allow this in all sorts of ways. To me the current behavior is unexpected, and I would actually call it a bug.

@Luap99
Member

Luap99 commented Jun 23, 2022

The current behaviour is definitely not a bug, and changing it so that IPv6 connections are no longer forwarded to IPv4 can break existing workflows. It is somewhat common that the container has no IPv6 address but IPv6 forwarding from the host is still expected to work. For example: #14491

@Hendrik-H
Author

Regarding #14491: any idea how Docker forwards the traffic if the container has an IPv6 address? At least in the docker-registry case it should work if the traffic is forwarded to the container's IPv6 address, if it has one, since the docker-registry should accept both; at least it does so in my setup. But I understand the desire to behave the same as Docker by default.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Jul 25, 2022

@Luap99 Any update on this?

@Luap99
Member

Luap99 commented Jul 26, 2022

no

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@github-actions

A friendly reminder that this issue had no activity for 30 days.
