
Only include ports in one container in Kube YAML #3417

Merged
merged 6 commits into containers:master on Jun 25, 2019

Conversation

@mheon (Member) commented Jun 24, 2019

This likely broke when we made containers able to detect that they share a network namespace and grab ports from the dependency container. Prior to that, we could grab ports without concern for conflict, since only the infra container had them. Now all containers in a pod will return the same ports, so we have to work around this.

Fixes #3408

Still needs a test, writing one now.
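The workaround can be sketched roughly like this (the types and function names are illustrative, not Podman's actual code): the pod's shared port list is emitted on the first container only, so the generated YAML lists each port exactly once.

```go
package main

import "fmt"

// Illustrative types only; not Podman's actual data model.
type port struct{ Host, Container int }

type container struct {
	Name  string
	Ports []port
}

// assignPortsToFirst gives the pod's shared port list to the first
// container and clears it from the rest, so each port appears once in
// the generated output.
func assignPortsToFirst(ctrs []container, shared []port) {
	for i := range ctrs {
		if i == 0 {
			ctrs[i].Ports = shared
		} else {
			ctrs[i].Ports = nil
		}
	}
}

func main() {
	ctrs := []container{{Name: "a"}, {Name: "b"}}
	assignPortsToFirst(ctrs, []port{{1234, 1234}, {4567, 4567}})
	fmt.Println(len(ctrs[0].Ports), len(ctrs[1].Ports)) // prints: 2 0
}
```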

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@openshift-ci-robot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mheon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved and size/XS labels on Jun 24, 2019
@rhatdan (Member) commented Jun 24, 2019

Nice easy fix.
LGTM

@Fodoj (Contributor) commented Jun 24, 2019

Don't want to be picky, but doesn't this break portability of the generated YAML to a Kubernetes environment? Assuming there is no infra container on k8s, the pod YAML won't expose any ports.

@mheon (Member, Author) commented Jun 24, 2019

While writing a test, I realized that all the generate kube tests are verifying the YAML using yaml.Marshal instead of yaml.Unmarshal(), which is really not good. I'll see about fixing that as part of this.

@mheon (Member, Author) commented Jun 24, 2019

@Fodoj We actually take the ports from the infra container and stick them on the first container in the generated output, so even if there is no infra container, we should be good.

@Fodoj (Contributor) commented Jun 24, 2019

@mheon correct me if I am wrong, but say I have two containers A and B: A has something running on port 1234, B on 4567. The YAML then exposes both ports on container A and none on container B, meaning B:4567 can't be reached when connecting to that port, and A:4567 is useless, as nothing is running on that port inside A. Of course, if 4567 is "exposed" in the container image behind B, it probably won't be an issue, but the generated YAML is still misleading: A exposes two ports, while B exposes none via the YAML definition but one as part of its image.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@mheon (Member, Author) commented Jun 24, 2019

Within Podman alone, if you're using play kube to play the YAML, all ports properly belong to the infra container only, and apply to the pod as a whole (unless you explicitly opt out of sharing the network namespace when creating a new container). So this will work as expected there: the ports are detected, and added to the infra container once more on container creation.

If you move up to Kube and generate a service object, the ports are again forwarded to the pod, so it should work there too.

The question would be importing the YAML into Kube without a service.

(The ability for containers to request individual ports within a pod is coming, hopefully soon, but they'll still all be managed by the infra container - it'll just be taking the requested ports of other containers in the pod into account).

@Fodoj (Contributor) commented Jun 24, 2019

> If moving up to Kube, if you generate a service object, the ports are again forwarded to the pod, so it should work there too.

My concern is more that one container (the one that exposes only 1234) would have this config:

    ports:
    - containerPort: 1234
      hostPort: 1234
      protocol: TCP
    - containerPort: 4567
      hostPort: 4567
      protocol: TCP

And another one (that exposes 4567) won't have any port configuration at all.
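A mapping that reflected what each container actually listens on would instead look something like the following (container names are hypothetical, and this is a sketch of the concern, not output Podman produces):

```yaml
containers:
- name: a            # runs the service on 1234
  ports:
  - containerPort: 1234
    hostPort: 1234
    protocol: TCP
- name: b            # runs the service on 4567
  ports:
  - containerPort: 4567
    hostPort: 4567
    protocol: TCP
```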

According to Kubernetes container spec:

List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network.

So even though not specifying a port won't prevent it from being exposed if the container does listen on it, the YAML configuration produced by Podman would still be a bit misleading.

We need to verify that valid YAML was produced - Marshal will
just pack the generated YAML even further.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@baude (Member) commented Jun 24, 2019

@Fodoj can you join us on irc.freenode.net #podman to discuss? We can probably get to an answer quicker.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@mheon (Member, Author) commented Jun 24, 2019

My port-detection logic seems to be failing in the tests. Investigating...

@mheon (Member, Author) commented Jun 24, 2019

Decode is definitely failing - we're seeing 0 containers in the pod.

@mheon (Member, Author) commented Jun 25, 2019

Throwing a hold on here - even if tests go green I want to do some more work on them.
/hold

@openshift-ci-robot added the do-not-merge/hold label on Jun 25, 2019
OutputToString() was mangling newlines, which made YAML parsers
very, very angry. But not angry enough to actually error, that
would be too easy. Just angry enough to silently not decode
anything.

Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@mheon (Member, Author) commented Jun 25, 2019

Alright. Tests are in a better spot. Cancelling hold.
/hold cancel

@openshift-ci-robot removed the do-not-merge/hold label on Jun 25, 2019
Signed-off-by: Matthew Heon <matthew.heon@pm.me>
@baude (Member) commented Jun 25, 2019

/lgtm

@openshift-ci-robot added the lgtm label on Jun 25, 2019
@openshift-merge-robot merged commit a488e19 into containers:master on Jun 25, 2019
@github-actions bot added the locked - please file new issue/PR label on Sep 26, 2023
@github-actions bot locked as resolved and limited conversation to collaborators on Sep 26, 2023
Development

Successfully merging this pull request may close these issues.

podman generate kube results in broken ports configuration
6 participants