Move HostPort port-opening from kubenet to kubelet/cni #31307

Closed
ozdanborne opened this issue Aug 23, 2016 · 92 comments
Labels: area/kubelet, kind/feature, lifecycle/stale, sig/network

Comments

@ozdanborne
Contributor

I'm working on adding hostPort support to a CNI plugin for Kubernetes. I've found that in order for CNI plugins to honor and implement hostPort requests, the opening of the port itself (i.e. the reservation) must be made by a long-running daemon process (discussed here by thockin). Currently this is taken care of in kubenet code, but that forces CNI plugins to manage the port reservation themselves. Because third-party CNI plugins are one-shot executables and do not necessarily run as daemon processes, it is difficult for them to cleanly manage these ports.

Therefore, the opening and holding of ports should be moved from kubenet code into kubelet’s CNI code, allowing CNI plugins to rely on kubernetes to take care of hostport opening / reservation. This will likely need to be done anyway as part of moving kubenet into its own CNI plugin.
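
For illustration only, here is a minimal sketch (in Go, not kubelet's or kubenet's actual code) of what "opening and holding" a hostPort from a long-running daemon amounts to; the portHolder type and its method names are hypothetical:

package hostport

import (
    "fmt"
    "net"
    "sync"
)

// portHolder keeps host ports reserved by holding open sockets on them.
type portHolder struct {
    mu      sync.Mutex
    sockets map[int]net.Listener // hostPort -> held socket
}

func newPortHolder() *portHolder {
    return &portHolder{sockets: map[int]net.Listener{}}
}

// Hold binds the TCP hostPort and keeps the listener open so that no other
// process on the node can take it; it fails if the port is already in use.
// (UDP hostPorts would need net.ListenPacket instead.)
func (p *portHolder) Hold(port int) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
    if err != nil {
        return fmt.Errorf("hostPort %d is unavailable: %v", port, err)
    }
    p.sockets[port] = l
    return nil
}

// Release closes the held socket when the pod is torn down.
func (p *portHolder) Release(port int) {
    p.mu.Lock()
    defer p.mu.Unlock()
    if l, ok := p.sockets[port]; ok {
        l.Close()
        delete(p.sockets, port)
    }
}

Because the listener has to stay open for the lifetime of the pod, a one-shot CNI binary has nowhere to keep it, which is why the proposal is to keep this piece in the kubelet.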

@caseydavenport
Member

caseydavenport commented Aug 23, 2016

@thockin @dcbw @kubernetes/sig-network

@caseydavenport
Member

The idea here is that there are two parts to implementing the hostPort.

  1. Ensuring the port isn't already taken by someone else, and that it won't be taken by someone else in the future.

  2. Implementing the actual mapping (via iptables, or something else).

The first part should be done by Kubernetes and be common across all CNI plugins. The second part should probably be up to the specific CNI plugin that is in use. Part 1 is tangential to part 2; the latter is the subject of the ongoing discussion in the CNI community about how port mapping should be handled in CNI.

@thockin
Member

thockin commented Aug 23, 2016

I want to agree with this, but I still see a problem. Maybe it's edge-case enough that we don't care. What if the second part requires ACTUALLY using the port, which the first part would prevent?

Consider a hypothetical CNI plugin that needs to do some userspace packet-processing for hostports. Maybe it needs to do some extended auth check or something. Not beyond the realm of possibility. It needs some agent on the machine to receive packets and forward them.

Ideally it would just bind() to the actual hostport. But if we are holding that port, it can't, so it has to bind to a random port and install some iptables DNAT. So we've made what will already be sort of slow even slower, and used conntrack when it wasn't strictly required. Now you need the forwarding agent to rectify against iptables - if it crashes it will get a new random port.

I don't see a clean answer short of making plugins enumerate their own capabilities and adapting the kubelet to handle both cases.

Someone tell me it will be OK and that this is not a thing...

@caseydavenport
Member

What if the second part requires ACTUALLY using the port, which the first part would prevent?

:( Yeah, I had considered this, but I think I wishfully magic'd it away with "so long as the CNI spec says it's the responsibility of the orchestrator" or some such silly thing.

I don't see a clean answer short of making plugins enumerate their own capabilities and adapting the kubelet to handle both cases.

Is there a reason this couldn't be a configuration option on the kubelet? Ideally the kubelet doesn't need to change configuration based on the networking choice, but if we decide this is sufficiently edge-case-y then it seems like an "actually, don't reserve ports" type of configuration option might suffice.

Someone tell me it will be OK and that this is not a thing...

Wish I could confidently... I'm not aware of anyone doing this but I'm certainly not omniscient...

@thockin
Member

thockin commented Aug 24, 2016

It might be OK to just do it and wait for someone to scream... I like the idea that we actually make it part of the specification of CNI, and the two-halves model works otherwise.

@errordeveloper
Member

We currently fail very silently. I've just spent hours trying to understand what had gone wrong, only to arrive at the conclusion that HostPort won't work with CNI, as it relies on Docker setting up the NAT... Anyhow, is there a meta-issue where missing features like this are listed? If this cannot be considered important enough to fix in 1.4, can we at least consider adding a clear warning instead of being completely silent?

@luxas
Member

luxas commented Sep 4, 2016

/cc

@thockin
Member

thockin commented Sep 5, 2016

OK, I can get behind the 2-halves model. What do you think @freehan?

@errordeveloper what condition can we detect for an error message? In kube as it stands (using kubenet, anyway), we should not start the pod at all if we can't get the hostport.

@errordeveloper
Member

errordeveloper commented Sep 5, 2016

@thockin I think it'd be sufficient to have something like this:

kubeCfg.NetworkPluginName == "cni" && anyContainerWantsHostPort(pod.Spec)
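
For context, a minimal sketch of that check (anyContainerWantsHostPort is a hypothetical helper, not an existing kubelet function, and today's k8s.io/api/core/v1 types are assumed):

package hostportcheck

import v1 "k8s.io/api/core/v1"

// anyContainerWantsHostPort reports whether any container in the pod spec
// declares a hostPort; that is the case in which a warning or error about
// CNI not implementing hostPort would fire.
func anyContainerWantsHostPort(spec v1.PodSpec) bool {
    for _, c := range spec.Containers {
        for _, p := range c.Ports {
            if p.HostPort != 0 {
                return true
            }
        }
    }
    return false
}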

@thockin
Member

thockin commented Sep 6, 2016

OHHH, you mean an error message NOW until a fix is in for CNI? Yeah, that I could see.

@errordeveloper
Member

Yes.

@freehan
Contributor

freehan commented Sep 6, 2016

With CRI, from the kubelet's point of view, the two halves will become one piece. The CRI implementation, CNI plugin, and runtime will need to handle host ports together.

@caseydavenport
Member

@dcbw suggested that the kubelet be capable of handling both of these roles, with a way of enabling or disabling this behavior.

I like this since it puts less responsibility on plugin writers to make something that is "kubernetes specific". I'd vote for the kubelet reserving ports and programming iptables by default for all CNI plugins, with maybe an annotation on the node to disable?

@errordeveloper
Member

I vote for that too.

@dcbw
Member

dcbw commented Sep 23, 2016

@caseydavenport I was thinking of some way for plugin developers to opt out of the behavior for CNI plugins, rather than an annotation, for the time being. Since you already have to specify the network plugin for kubelet (either through --network-plugin or the CNI conf file), and since the network plugin determines whether or not it should opt out of the hostport stuff, I think this decision should be tied to how you tell kubelet about the network plugin.

My thoughts on that were:

  1. A command-line flag for kubelet, like the existing --network-plugin flag; the downside is that since you're just specifying --network-plugin=cni and not the real CNI plugin, the two aren't really co-located. But this one is easy.
  2. A comment in the CNI network config file that kubelet reads and interprets; the downside of this is that we're encoding special kube-specific functionality into that file.

@dcbw
Member

dcbw commented Sep 23, 2016

@caseydavenport unrelated to this, but related to how CNI plugins interact with kubelet, we also need some way for CNI plugins to disable the kubelet bandwidth shaping functionality.

@caseydavenport
Member

since the network plugin determines whether or not it should opt-out of the hostport stuff

Yep, I can buy that :)

I'd rather not stick more command line flags on the kubelet. Ideally this behavior (and others) can be configured at runtime rather than cluster init. Maybe we could make use of the "args" section of the CNI config in some way?

@dcbw
Member

dcbw commented Sep 23, 2016

@caseydavenport yeah, that's another option

@bboreham
Contributor

bboreham commented Oct 4, 2016

I'm working on adding hostPort support to a CNI plugin for kubernetes.

@djosborne can I ask how your work is coming along?

Also can I ask whether you are targeting a generic CNI plugin that could be used with any network implementation?

@dcbw
Member

dcbw commented Oct 5, 2016

OK, I can get behind the 2-halves model.

@thockin @caseydavenport @bboreham I keep trying to think of ways to spin hostports/shaping/etc out to CNI plugins, and there are certainly ways to do this, but every single thing we spin out now requires information that only Kubernetes has. Even if we go with a 2-halves hostport model the CNI plugin side still needs HostPort/ContainerPort/Protocol. Shaping still needs the bandwidth annotations.

The problem is that these things are container-dependent so cannot be represented as a static file on-disk that's installed beforehand and never changes, like CNI network config files usually are.

We can do these things in plugins by talking to the api-server, but that's kubernetes specific.
We can do these things by passing the Pod and Spec to plugins, but that's kubernetes specific.

Here's a wacky idea... what if we had a kubernetes-specific CNI network config template directory. When about to call a CNI plugin for a container, kubelet's CNI driver would read the CNI template file, replace well-known tags like "@(k8s.io/pod/meta/Annotations/kubernetes.io/ingress-bandwidth)" or "@(k8s.io/pod/spec/Containers/Ports)" with the corresponding value(s), write the new CNI config to a temp directory, and run the pod setup with that file.

This way the actual CNI config file passed to the CNI plugin would not be kubernetes specific in any way, and runtime container information derived from the pod+spec or other sources could still be delivered on a per-pod basis.

Then we create CNI plugins for 2nd-half HostPort setup (or even the whole thing) or bandwidth shaping, and their configuration blocks in the CNI network config file would be filled in by the templating process, but not be kubernetes specific.

@thockin
Member

thockin commented Oct 6, 2016

A few things:

  • Plugin "capability" discovery is ugly at best. I'd really rather take a
    stand and say this is part of the plugin's responsibilities
  • This implies that the CNI spec needs to grow a way to specify hostport
    mappings which get passed in at interface setup time.
  • who configures these hypothetical templates?

@dcbw
Member

dcbw commented Oct 6, 2016

  • Plugin "capability" discovery is ugly at best. I'd really rather take a stand and say this is part of the plugin's responsibilities

@thockin I would agree, though could you elaborate on what you mean by capabilities? E.g., that the plugin can handle hostports, or that it cannot? I would expect at a minimum that if a pod requests shaping or hostports and the plugin cannot provide them, the CNI driver or the network plugin would raise an error. Better error reporting/indication is desirable, however.

  • This implies that the CNI spec needs to grow a way to specify hostport mappings which get passed in at interface setup time.

Yep, I was proposing adding hostport and shaping plugins to CNI along with appropriate configuration JSON for these plugins, in the same CNI net config file as you would currently configure the main plugin and IPAM.

  • who configures these hypothetical templates?

The cluster admin who determines what network plugin the cluster should use. Currently kubelet makes that decision for the cluster admin by providing hostport/shaping in all installs. With the wacky idea proposal, the cluster admin would now create a template like:

"type": "bridge",
"name": "my rockin' network",
"bridge": "cni0",
"ipam" {
    "type": "host-local",
    "subnet": "$(k8s.io/Node/Spec/PodCIDR)"
},
"hostports": {
    "hostport": "$(k8s.io/Pod/Spec/Containers/HostPort)",
    "sandboxport": "$(k8s.io/Pod/Spec/Containers/ContainerPort)",
}
"shaping": {
    "egress-bandwidth" : "$(k8s.io/Pod/Meta/Annotations/kubernetes.io/ingress-bandwidth)",
}

or something like that. The k8s CNI driver would recognize these and fill them in on a per-pod basis before passing to the actual CNI plugin. But once filled in, there would be nothing kubernetes-specific about the configuration, and if you wanted a static config, you could certainly write that without making it a template.

Just a thought...
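
To make the wacky idea a bit more concrete, here is a rough sketch (purely hypothetical; nothing like this exists in kubelet) of the tag substitution the CNI driver could do before invoking the plugin; the function name and the example tag values are illustrative:

package cnitemplate

import (
    "os"
    "path/filepath"
    "strings"
)

// expandCNITemplate replaces well-known $(k8s.io/...) tags in a CNI config
// template with per-pod values and writes the result to a temp directory,
// returning the path that would be handed to the CNI plugin.
func expandCNITemplate(templatePath, tmpDir string, tags map[string]string) (string, error) {
    raw, err := os.ReadFile(templatePath)
    if err != nil {
        return "", err
    }
    conf := string(raw)
    for tag, value := range tags {
        conf = strings.ReplaceAll(conf, "$("+tag+")", value)
    }
    out := filepath.Join(tmpDir, filepath.Base(templatePath))
    if err := os.WriteFile(out, []byte(conf), 0o600); err != nil {
        return "", err
    }
    return out, nil
}

// Example per-pod values the kubelet could derive from the Pod and Node
// objects; the keys mirror the tags used in the template above.
var exampleTags = map[string]string{
    "k8s.io/Node/Spec/PodCIDR":                 "10.244.1.0/24",
    "k8s.io/Pod/Spec/Containers/HostPort":      "8080",
    "k8s.io/Pod/Spec/Containers/ContainerPort": "80",
}

Once expanded, the file contains only plain CNI configuration, which is the point: the plugin never sees anything Kubernetes-specific.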

@caseydavenport
Member

I think the template-based proposal doesn't actually buy us what we want, given the extra complexity. Plugins would still need to support the "shaping: {}", "hostports: {}", etc. network config sections, which aren't part of the CNI spec, so we're essentially making CNI plugins carry k8s-specific code anyway. If we want this to not be k8s specific, then it feels like it needs to be agreed upon and added to the CNI spec first.

I don't feel strongly against adding k8s-specific fields to the CNI "args" field (see here). Plugins are free to ignore that section, which means you can still plug any old CNI plugin into k8s; it just won't necessarily support all of the k8s-specific features out of the box. That's certainly no worse than it is today, and it at least gives plugin writers the information they need without a round trip to the API.

I think having the kubelet do as much of this as possible in a plugin-agnostic way is also useful, with a way to opt in / opt out. One thing a CNI plugin just won't be able to do in any sensible manner is reserve a port via a long-running process, so the kubelet will probably need to get involved here.

@jojimt

jojimt commented Oct 6, 2016

One thing a CNI plugin just won't be able to do in any sensible manner is to reserve a port via a long running process, so kubelet will probably need to get involved here.
Perhaps the plugin could use a companion helper process to achieve this? It does not seem reasonable to complicate the interface just to leverage the long-running nature of the kubelet.

@caseydavenport
Member

@jojimt That's going to require some sort of communication between the CNI plugin and the long-running process, or it needs to spin up a process per hostPort. The CNI plugin needs to fail the create call and return an error if it can't grab the port, and doing this out-of-band from the plugin makes that quite a bit harder. If it's done in the kubelet, we can guarantee that the port is reserved at the time the CNI plugin is called.
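
A minimal sketch of the ordering being argued for here, continuing the hypothetical portHolder sketch from earlier in the thread (cniAddPod stands in for the real CNI ADD call): the port is already held when the plugin runs, and is released again if setup fails.

// setupPodNetwork reserves the hostPort before invoking the CNI plugin and
// releases it if pod network setup fails, so the reservation never leaks.
func setupPodNetwork(holder *portHolder, hostPort int, cniAddPod func() error) error {
    if err := holder.Hold(hostPort); err != nil {
        return err // fail pod creation: the hostPort is already taken
    }
    if err := cniAddPod(); err != nil {
        holder.Release(hostPort) // don't keep the port if setup failed
        return err
    }
    return nil
}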

@cmluciano

I will not be able to work on this in the 1.9 release

@danehans

danehans commented Nov 8, 2017

@luxas do you find it acceptable to list HostPorts as a caveat for IPv6 alpha support in 1.9?

@luxas
Member

luxas commented Nov 8, 2017

@luxas do you find it acceptable to list HostPorts as a caveat for IPv6 alpha support in 1.9?

@danehans As long as it's documented, it's ok to have as a non-goal for alpha.

@danehans

danehans commented Nov 8, 2017

@mmueen please see ^ as this caveat needs to be documented for 1.9

@jethrogb

jethrogb commented Nov 9, 2017

Is this supposed to work now? I'm using kubelet version 1.8.2 and flannel from quay.io/coreos/flannel:v0.9.0-amd64

I'm not seeing any iptables chains with CNI in them.

$ ls /opt/cni/bin
bridge  cnitool  dhcp  flannel  host-local  ipvlan  loopback  macvlan  noop  portmap  ptp  tuning
$ cat /etc/cni/net.d/10-flannel.conflist 
{
  "name": "cbr0",
  "cniVersion": "0.3.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

kubelet cmdline:

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true \
  --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin \
  --cluster-dns=10.245.0.10 --cluster-domain=cluster.local \
  --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki \
  --node-ip=10.199.2.173

From pod spec:

    ports:
    - containerPort: 4443
      hostPort: 4443
      protocol: TCP

@jethrogb

jethrogb commented Nov 10, 2017

Finally got it working. Make sure to get the latest (0.6.0) /opt/cni/bin/portmap AND /opt/cni/bin/flannel from https://github.com/containernetworking/plugins/releases

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2018
@errordeveloper
Member

Sounds like this is going to rot... Would be great to see a comment from @kubernetes/sig-network-feature-requests.

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 12, 2018
@caseydavenport
Member

HostPort can be implemented via the containernetworking/plugins portmap plugin, and a number of CNI implementations make use of it now.

There's still the question of "should k8s hold the port?". The last time we discussed this, we agreed it was probably OK for the kubelet not to hold the port, leaving that up to the implementation.

So, I think this can be closed.

@squeed
Contributor

squeed commented Feb 13, 2018

I believe this issue can be closed; we're working off of #38890

@squeed
Contributor

squeed commented Feb 13, 2018

/close

@errordeveloper
Member

Folks, thanks for resolving this!

@yanhongwang

Hello all,

In my Kubernetes cluster, SlaveIP:80 shows up in netstat and can be accessed from a browser, but MasterIP:80 can't be opened.

I think the portmap plugin has already been enabled on the slave side. But why not on the master side?

My environment:

Ubuntu: 16.04 LTS
Kubernetes: 1.9.3
Docker-ce: 17.03.2ce-0ubuntu-xenial
Weave Net: 2.2.1

I followed the official Kubernetes ingress page.
https://github.com/kubernetes/ingress-nginx/tree/master/deploy

In default "with-rbac.yaml", I appended "hostnetwork=true" and "hostport=80" within
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

/////////////////////////////////////////////////////////////////////////////////////
...
...
spec:
  serviceAccountName: nginx-ingress-serviceaccount
  hostNetwork: true
  containers:
  ...
  ...
    ports:
    - name: http
      hostPort: 80
      containerPort: 80
...
...
/////////////////////////////////////////////////////////////////////////////////////

Is there some parameter or step needed to enable portmap on the master in a Kubernetes cluster?

Thanks very much.

Hong

@squeed
Contributor

squeed commented Mar 13, 2018

@yanhongwang hostNetwork means that no port forwarding is taking place; the application is opening port 80 directly. So this isn't related to port forwarding.

@yanhongwang

Hello @squeed ,

Thanks for your reply.

In https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md
After applying all the yaml, the Ingress controller works without error. But why does the master's port 80 still not show up in netstat, and why can't it be accessed from a browser?

Currently I use Kubernetes 1.9.3; do I still need to create /etc/cni/net.d/10-****.conflist to enable the portmap mechanism? Something like 3016 (comment)

Thanks very much!!!

Hong

@orinciog

Hello!

I am also in the same situation as @yanhongwang. Can someone answer our questions, please?

Thank you.

@yanhongwang

yanhongwang commented Mar 27, 2018

Hello @orinciog

By the way, in my case I use the bare metal approach.
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md#baremetal

All the pods are running without error, but port 80 doesn't show up in netstat on the master side...
And of course I have already removed 'hostNetwork' from the yaml.

Hong

@orinciog

@yanhongwang Thank you very much for your response. This is also my case: running on bare metal, the ingress works on the service port, but nothing is listening on port 80.

@gmile
Contributor

gmile commented May 22, 2018

@yanhongwang @orinciog were you guys able to solve the problem with port 80 on the master?

I'm curious about this as well.

I think I lack some fundamental understanding of how ingress works. In this particular setup, I do not have a bare metal load balancer standing in front of my cluster; I only have a 2-VM Kubernetes setup. I'm working to understand how to properly expose my services to the outside world without a load balancer.

Right now I have the traefik service running in NodePort mode on both nodes, so I get a traefik ingress controller exposed on 95.96.97.98:32167 and 95.96.97.98:32168. But I need traefik to listen on port 80, and I'm puzzled how to do that.

I've managed to work around this by adding an nginx process sitting next to the master kubelet and forwarding packets from port 80 to port 32167. This feels like a dirty hack.

@yanhongwang

@gmile Not yet. In my case I use bare metal.

@orinciog

@gmile No, I didn't resolve it the Kubernetes way.

My solution was just like yours: put an nginx in front and forward all packets from port 80 to the ingress port.

@gmile
Contributor

gmile commented Jun 4, 2018

@yanhongwang @orinciog I'm yet to try this out, but I just figured out it's possible to grant selected containers the NET_BIND_SERVICE Linux capability, which should allow a container to bind to a privileged port.

Learned this from the "Deploy Træfik using a Deployment or DaemonSet" section of the Traefik "Kubernetes Ingress Controller" guidelines.

Here's an actual DeamonSet example: https://github.com/containous/traefik/blob/67a0b4b4b1176f2eec1eca615a3ebe1af41cdff9/examples/k8s/traefik-ds.yaml#L8-L43
