Podman play kube does not bind ports to localhost #5610

Closed
christianh814 opened this issue Mar 25, 2020 · 24 comments
Labels
Good First Issue This issue would be a good issue for a first time contributor to undertake. kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. stale-issue

Comments

@christianh814

/kind bug

Given the following pod manifest...

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    command: ['nginx', '-g', 'daemon off;']
    ports:
    - containerPort: 80
  - name: python-web-container
    image: quay.io/redhatworkshops/simple-python-web:latest
    command: [ '/usr/bin/python3', '-m', 'http.server', '8080']
    ports:
    - containerPort: 80

produces the following output when running podman play kube mypod.yaml

ERRO[0007] error starting some container dependencies   
ERRO[0007] "failed to expose ports via rootlessport: \"unknown proto: \\\"\\\"\\n\"" 
Error: error starting some containers: internal libpod error

Here is my version

$ podman version
Version:            1.8.1
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

Here is my os

$ cat /etc/os-release 
NAME=Fedora
VERSION="31 (Workstation Edition)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Workstation Edition)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Workstation Edition"
VARIANT_ID=workstation
$ uname -a
Linux laptop 5.5.10-200.fc31.x86_64 #1 SMP Wed Mar 18 14:21:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Mar 25, 2020
@rhatdan rhatdan added the Good First Issue This issue would be a good issue for a first time contributor to undertake. label Mar 25, 2020
@rhatdan
Member

rhatdan commented Mar 25, 2020

@kunalkushwaha @sujil02 Up for grabs. Or available to anyone else who wants it.

@giuseppe
Member

getPodPorts needs to make sure the Protocol is correctly set. It also does not seem to handle hostPort being set to 0.

@tylarb
Contributor

tylarb commented Mar 25, 2020

@rhatdan I'll take a stab at this

@rhatdan
Member

rhatdan commented Mar 25, 2020

@tylarb you got it.

@tylarb
Contributor

tylarb commented Mar 26, 2020

@giuseppe @rhatdan I've got a few questions.

First, the kubernetes API dictates that "hostPort" need not be defined: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#containerport-v1-core

I would guess this is a bug in how rootlessport is invoked: not setting a hostPort (which rootlessport interprets as ParentPort) causes it to default to 0, which rootlessport clearly rejects as invalid:

From https://github.com/rootless-containers/rootlesskit/blob/c53ae22db16dd8963cc5ade4c91e2b2fc3fddf1e/pkg/port/portutil/portutil.go#L42

// ValidatePortSpec validates *port.Spec.
// existingPorts can be optionally passed for detecting conflicts.
func ValidatePortSpec(spec port.Spec, existingPorts map[int]*port.Status) error {
...
	if spec.ParentPort <= 0 || spec.ParentPort > 65535 {
		return errors.Errorf("invalid ParentPort: %q", spec.ParentPort)
	}

I could separately submit a PR to resolve this, since mapping to a host port should not be necessary just to expose a container port. I wanted to make sure this logic is correct - if so, I can open a quick PR to rootlesskit.

Second, as @giuseppe said, the protocol isn't set, which can be fixed, but I wondered about the implementation you might prefer.

For this fix: because ctr.config.PortMappings is a slice of PortMapping values, rather than a slice of pointers to PortMapping, one fix is to replace each PortMapping with a "validated" copy - i.e. defaulting the protocol to TCP if it is undefined.

diff --git a/libpod/networking_linux.go b/libpod/networking_linux.go
index f1bf79ce7..01c455117 100644
--- a/libpod/networking_linux.go
+++ b/libpod/networking_linux.go
@@ -342,13 +342,23 @@ func (r *Runtime) setupRootlessPortMapping(ctr *Container, netnsPath string) (er
 		}
 	}
 
-	cfg := rootlessport.Config{
-		Mappings:  ctr.config.PortMappings,
+	portMappings := []ocicni.PortMapping{}
+
+	for _, m := range ctr.config.PortMappings {
+		if m.Protocol == "" {
+			m.Protocol = "tcp"
+		}
+		portMappings = append(portMappings, m)
+	}
+
+	cfg := &rootlessport.Config{
+		Mappings:  portMappings,
 		NetNSPath: netnsPath,
 		ExitFD:    3,
 		ReadyFD:   4,
 		TmpDir:    ctr.runtime.config.TmpDir,
 	}
+
 	cfgJSON, err := json.Marshal(cfg)
 	if err != nil {
 		return err

This could be cleaner locally, but it has impacts "globally" - mainly, the PortMappings for the ctr are now edited, rather than just the local JSON which is passed to rootlessport.

But this could be desired - as noted, the kubernetes spec defines that if a protocol is undefined, TCP should be the default. So if it's not defined in the spec, it should be set to TCP for the ctr instead of being checked each time. But I wanted to ensure this behavior is at least something you'd expect as well, as it is more of a refactor than a fix for this specific issue only.

go playground of the behavior I'm describing

@giuseppe
Member

@tylarb thanks for working on it.

I think your patch is fine and we could do it in place modifying the existing slice without creating a new one.

About the other issue, we need to map these concepts to Podman. What happens as root if the hostPort is not defined?

@mheon
Member

mheon commented Mar 26, 2020

We have a method for randomly assigning ports for use with --publish-all - I wonder if a missing host port should act similarly?

@tylarb
Contributor

tylarb commented Mar 26, 2020

@giuseppe I forgot about modifying the existing slice using the index. It's still not as clean as using a pointer but should be sufficient.

As root:

libpod$ cat playkube.yml 
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    #    command: ['nginx', '-g', 'daemon off;']
    ports:
    - containerPort: 80
      #Protocol: UDP
  - name: python-web-container
    image: quay.io/redhatworkshops/simple-python-web:latest
    command: [ '/usr/bin/python3', '-m', 'http.server', '8080']
    ports:
    - containerPort: 80



ERRO[0002] Error adding network: failed to parse config: Invalid host port number: 0 
ERRO[0002] Error while adding pod to CNI network "podman": failed to parse config: Invalid host port number: 0 
ERRO[0002] error starting some container dependencies   
ERRO[0002] "error configuring network namespace for container ca3e31b0a6aa175cf3e9564b78abc573577b9b8a1b29fe84bf7706d3fe49b917: failed to parse config: Invalid host port number: 0" 
Error: error starting some containers: internal libpod error

This also looks incorrect to me - containerPort should be equivalent to an expose [port] in a Dockerfile, correct? It should not need a container -> hostPort mapping.

@mheon in the current design, if no hostport is specified, the json unmarshal will define it as the default value of that type - in this case, default value of int is 0, so undefined (in the yaml) hostPort == 0

@mheon
Member

mheon commented Mar 26, 2020

I think we can't leave it as 0... If containerPort is set, setting hostPort to the same thing seems appropriate?

@tylarb
Contributor

tylarb commented Mar 26, 2020

@mheon I don't think that's correct, from a kubernetes pod or podman pod perspective.

In the Kubernetes API, if hostPort is unset, the container does not get a port on the host. This behavior should be the same here.

I should be able to define this spec:

libpod$ cat playkube.yml 
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-pod
spec:
  containers:
  - name: my-web-app
    image: my-web-app:latest
    command: ['nginx', '-g', 'daemon off;']
    ports:
    - containerPort: 80
      hostPort: 8080
  - name: my-app-database
    image: postgres:latest
    command: ['postgres-server', 'start']
    ports:
    - containerPort: 5432
  - name: my-app-helper
    image: my-app-helper:latest
    command: ['collect-data']
    ports:
    - containerPort: 90

In this case, my app (and any helper apps) can access the DB at 5432, and serve data on the host at 8080 (internally inside the pod at port 80), where "my-app-helper" processes data and returns it, again internally, on port 90. Neither postgres nor app-helper is bound to a host port, for security and to limit unnecessary port use.

@mheon
Member

mheon commented Mar 26, 2020

In that case, I don't know if we should be adding these ports to the Podman pod spec - Podman ports are exclusively host:container mappings. The expectation would be that apps connect to each other over localhost using the containerPort-only mappings, which does not require any intervention on our part.

@tylarb
Contributor

tylarb commented Mar 26, 2020

I would expect that, in the context of Podman, "containerPort" should function like an additional "expose" flag, and "hostPort" should work like a "publish" flag. The above example, not adding to pods but just the containers:

podman run --expose 80 -p 8080:80 --name my-web-app my-web-app:latest nginx -g daemon off
podman run --expose 5432 --name my-app-database postgres:latest postgres-server start

etc...

@tylarb
Contributor

tylarb commented Mar 26, 2020

The expose should be a no-op if the same port is already exposed in the Dockerfile though.

@mheon
Member

mheon commented Mar 26, 2020

For reference, --expose does nothing without --publish-all and we do not support --publish-all with pods (as of yet, at least)

@tylarb
Contributor

tylarb commented Mar 26, 2020

Ah. So this is different behavior from docker? As I recall, you must expose a port for a server to bind/be accessible there, even inside the container engine.

This is not the same in podman? I don't need to expose a port for it to be accessible by a service inside the same pod?

@mheon
Member

mheon commented Mar 26, 2020

Pods are sharing their network namespace, so they can access anything bound by another service in the pod via ports bound to localhost without any --expose flag being used. We use --expose for podman commit (adds a port to the committed image) and podman run --publish-all (make all ports in --expose and the image public, at randomly-assigned port numbers on the host.)

@tylarb
Contributor

tylarb commented Mar 26, 2020

@mheon thanks so much for the clarification.

In my mind then, the behavior I would expect is as follows:

- A defined containerPort with no defined hostPort does nothing in a podman context, or is informational.
- A defined hostPort with no defined containerPort acts like a publish [hostPort]:[hostPort].
- A defined containerPort and defined hostPort works like it does in kubernetes, as in a publish [hostPort]:[containerPort].

I think this will still require a change to some other networking code to get the above logic correct.

Does this proposal seem reasonable?

@mheon
Member

mheon commented Mar 26, 2020

Agree on all points - this sounds correct

tylarb added a commit to tylarb/libpod that referenced this issue Mar 27, 2020
The logic used in parsing the ports to be utilized in a kubernetes API
defined pod did not fully adhere to the kubernetes spec, nor did it map
well to a podman context. This fix sanitizes the input of container
ports to meet the following rules:

- A defined containerPort with no defined hostPort does nothing in a
podman context, or is informational. This is in line with [usage in
Kubernetes.](kubernetes/kubernetes#4332)

- A defined hostPort with no defined containerPort acts like a
publish [hostPort]:[hostPort]

- A defined containerPort and defined hostPort works like it does in
kubernetes, as in a publish [hostPort]:[containerPort]

Addresses containers#5610

Signed-off-by: Tyler Ramer <tyaramer@gmail.com>
@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented Apr 27, 2020

@tylarb @mheon Has this issue been fixed?

@tylarb
Contributor

tylarb commented Apr 27, 2020 via email

@mheon mheon closed this as completed Apr 27, 2020
snj33v pushed a commit to snj33v/libpod that referenced this issue May 31, 2020
@lsunsi

lsunsi commented Aug 21, 2020

@tylarb
Sorry for resurrecting this closed issue, but what was the outcome of this? It wasn't clear to me upon reading it.
I came across this issue looking for direction, especially on this feature.

containerPort + hostPort works like it does in kubernetes, as in a publish [hostPort]:[containerPort]

Is it supposed to work like that? Because it makes sense to me, but I'm having difficulty making it work.
It seems to be that when both container and host port are present, the containerPort is completely ignored.

@mbach04

mbach04 commented Dec 5, 2022

Anyone ever solve this? The simple nginx examples published on the RH blogs, followed to the letter, produce the same result. The workload is not bound to the host port, thus losing its utility.

@mbach04

mbach04 commented Dec 5, 2022

Setting both the containerPort and hostPort did the trick for me:

#abbreviated 
ports:
- name: nginx 
  containerPort: 80
  hostPort: 8080

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 8, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 8, 2023