
Automatic port number assignment #390

Closed
yugui opened this issue Jul 10, 2014 · 12 comments
Labels
area/api Indicates an issue on api area. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@yugui
Contributor

yugui commented Jul 10, 2014

IIUC, host port numbers in individual pods are not important for replicated services in Kubernetes.
From the orchestration perspective, what matters is the service's port, not the pod's port.

Manual assignment of host ports can also be troublesome: spawning a container simply fails if the container manifest specifies a port that is already taken by another container or by a Kubernetes daemon.

So I propose the following enhancement.

  • Track taken ports in etcd.
  • Allow omitting the host port in the container manifest.
    • If omitted, the Kubelet automatically chooses an available host port for each exposed container port.
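As a concrete illustration of what "omitting the host port" would look like, here is a minimal sketch in today's Kubernetes YAML (the 2014 v1beta1 container-manifest format differed, and the auto-assignment described in the comment is the proposed behavior, not what existed at the time):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 8000
      # hostPort deliberately omitted: under this proposal the Kubelet
      # would pick a free host port itself instead of failing on conflict.
```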
@thockin
Member

thockin commented Jul 10, 2014

First, the apiserver is supposed to ensure that HostPort conflicts do not happen among pods. We recognize that this is sort of awful, and want to find a better answer :)

Second, I agree that the current Ports arrangement is not ideal. I have an idea that I want to flesh out to make it "better" (in my opinion).

Keep in mind that you really only need to specify a Ports entry if you want a HostPort. You only want a HostPort in GCE if you want an external IP to be able to access it. What we have done with networking is clever, but maybe too much so.

My feeling is that this distinction is not clear, and instead everyone will list all their ports when they do not need to. It should be possible to list your ports and NOT get a HostPort at all, which I think is what you really want.

I am not against doing a random HostPort, but I want to think about it carefully.
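To make that distinction concrete, a hedged sketch of the two cases in today's YAML (not the 2014 API):

```yaml
ports:
- containerPort: 8080   # purely informational: no host port is claimed
- containerPort: 8443
  hostPort: 443         # binds port 443 on the node; conflicts are possible
```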

On Thu, Jul 10, 2014 at 12:16 AM, Yuki Yugui Sonoda <notifications@github.com> wrote:

> IIUC, host port numbers in individual pods are not important for replicated services in Kubernetes. From the orchestration perspective, what matters is the service's port, not the pod's port.
>
> Manual assignment of host ports can also be troublesome: spawning a container simply fails if the container manifest specifies a port that is already taken by another container or by a Kubernetes daemon.
>
> So I propose the following enhancement.
>
> • Track taken ports in etcd.
> • Allow omitting the host port in the container manifest.
>   • If omitted, the Kubelet automatically chooses an available host port for each exposed container port.

@bgrant0607
Member

I don't understand the motivation for automatically assigned host ports. The motivation for requesting one is so that it can be opened in firewall rules and connected to by external clients, through frontend load balancing, etc. This is not possible with automatically assigned ports.

@yugui
Contributor Author

yugui commented Jul 11, 2014

On Fri Jul 11 2014 at 12:12:24 AM, Tim Hockin <notifications@github.com> wrote:

> First, the apiserver is supposed to ensure that HostPort conflicts do not happen among pods. We recognize that this is sort of awful, and want to find a better answer :)
>
> Second, I agree that the current Ports arrangement is not ideal. I have an idea that I want to flesh out to make it "better" (in my opinion).
>
> Keep in mind that you really only need to specify a Ports entry if you want a HostPort. You only want a HostPort in GCE if you want an external IP to be able to access it. What we have done with networking is clever, but maybe too much so.
>
> My feeling is that this distinction is not clear, and instead everyone will list all their ports when they do not need to. It should be possible to list your ports and NOT get a HostPort at all, which I think is what you really want.

I'm not sure I get your point correctly, but it would be better if I could expose services to an external IP without having a HostPort at all.

What I actually want is the following.

  • I have a system which consists of several services. Some of them are backends, which don't need to be accessible from external IPs. Others are exposed externally.
  • IIUC, exposed services need to have a HostPort so that external clients can access them.
    • But I don't want to carefully manage my list of HostPorts so that ports don't conflict with each other. Why doesn't the Kubelet do that for me?
    • I also want an available HostPort to be assigned without digging into Kubernetes internals, because they might change and can depend on the cloudprovider implementation.
      • e.g., HostPort 5000 is not available to pods because the Kubelet uses it.
      • e.g., HostPort 22 is not available to pods because sshd in GCE uses it.

On Fri Jul 11 2014 at 10:47:02 AM, bgrant0607 <notifications@github.com> wrote:

> I don't understand the motivation for automatically assigned host ports. The motivation for requesting one is so that it can be opened in firewall rules and connected to by external clients, through frontend load balancing, etc. This is not possible with automatically assigned ports.

The automatic assignment I imagine does not mean random assignment per Kubelet; I am thinking of automatic but consistent assignment across Kubelets. I imagine a mechanism like this:

  • Track the ports used on Kubelet nodes, recording the current state in etcd.
  • When the kubemaster sends requests to Kubelets, it allocates an available port according to the record in etcd. All pods of a replicated service consistently use the port allocated by the kubemaster.
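A purely hypothetical sketch of what such a record in etcd might look like (every key and value below is invented for illustration; nothing like this was ever implemented):

```yaml
# Hypothetical etcd contents for cluster-wide host-port allocation.
/registry/hostports/allocated:
  web-frontend: 31080   # chosen once by the master, reused on every node
  api-backend: 31090
/registry/hostports/reserved:
  kubelet: 10250        # daemon ports that pods must not take
  sshd: 22
```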

@bgrant0607
Member

I think we need a more concrete example.

  1. Frontend service running on host port 8080, on hosts assigned externally visible IP addresses, opened in the virtual network firewall, targeted by L3 load balancers, with DNS set up to hit the L3 LB virtual IP. We put this service into a container, which is the only container running in its pod. Kubernetes schedules just one of these pods per host. Yes, you need to avoid using ports that conflict with system services, like sshd. We could give the Kubelet its own address in order to avoid polluting the host port space. We should at least clearly document which ports are in use. Clients of this frontend service, such as web browsers running on people's laptops, aren't going to be using etcd, so dynamically assigned ports are a non-starter. Eventually we'd like to not need to use the host's primary network in this scenario, but we don't have a way of doing that yet.
  2. Multiple backend services. Again, one container per pod. They can all use port 5000 (or even 22) if they want. They shouldn't request host ports. They'll be scheduled to any minion in the cluster with no conflicts. Use etcd, Consul, Eureka, DNS, or whatever other naming/discovery mechanism you like. Or, if they are load-balanced services, use our service abstraction (see the sketch below).
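For scenario 2, a hedged sketch of the service abstraction in today's YAML (names are hypothetical; the 2014 service definition differed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # routes to any pod carrying this label
  ports:
  - port: 5000          # port clients use to reach the service
    targetPort: 5000    # containerPort in the pods; no hostPort needed
```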

Issue #188 describes the problems associated with dynamic port assignment.

@thockin
Member

thockin commented Jul 11, 2014

Hi Yuki,

Some comments inline.

On Fri, Jul 11, 2014 at 12:57 AM, Yuki Yugui Sonoda <notifications@github.com> wrote:

> I'm not sure I get your point correctly, but it would be better if I could expose services to an external IP without having a HostPort at all.

In order for an external IP in GCE to find your service, you have to have a HostPort mapping. If you don't, then we don't bind you to the main interface of the VM.

> What I actually want is the following.
>
> • I have a system which consists of several services. Some of them are backends, which don't need to be accessible from external IPs. Others are exposed externally.
> • IIUC, exposed services need to have a HostPort so that external clients can access them.
>
> But I don't want to carefully manage my list of HostPorts so that ports don't conflict with each other. Why doesn't the Kubelet do that for me?

We can assign you a port, but then nobody knows how to find it. You need to open the GCE firewall for that port. You need to tell your customers what port number it is. We can put it into etcd and make the firewall open up automatically, but that still doesn't fix the fact that there is no port re-writing when routing between external IPs and internal IPs.

> I also want an available HostPort to be assigned without digging into Kubernetes internals, because they might change and can depend on the cloudprovider implementation.
>
> e.g., HostPort 5000 is not available to pods because the Kubelet uses it.
> e.g., HostPort 22 is not available to pods because sshd in GCE uses it.

Yes, this is a problem that I want to find a way to fix. I know how I want it to work, but GCE doesn't support it.

@yugui
Contributor Author

yugui commented Jul 15, 2014

> On Fri, Jul 11, 2014 at 12:57 AM, Yuki Yugui Sonoda <notifications@github.com> wrote:
>
>> I'm not sure I get your point correctly, but it would be better if I could expose services to an external IP without having a HostPort at all.
>
> In order for an external IP in GCE to find your service, you have to have a HostPort mapping. If you don't, then we don't bind you to the main interface of the VM.
>
>> What I actually want is the following.
>>
>> • I have a system which consists of several services. Some of them are backends, which don't need to be accessible from external IPs. Others are exposed externally.
>> • IIUC, exposed services need to have a HostPort so that external clients can access them.
>>
>> But I don't want to carefully manage my list of HostPorts so that ports don't conflict with each other. Why doesn't the Kubelet do that for me?
>
> We can assign you a port, but then nobody knows how to find it. You need to open the GCE firewall for that port. You need to tell your customers what port number it is. We can put it into etcd and make the firewall open up automatically, but that still doesn't fix the fact that there is no port re-writing when routing between external IPs and internal IPs.

It doesn't matter for the firewall or for customers, because kube-proxy proxies from the service port to the automatically determined port. So the GCE firewall only needs to open the service port that kube-proxy serves on, and the customer only needs to know the service port.

I hadn't thought about automatic assignment of service ports.

>> I also want an available HostPort to be assigned without digging into Kubernetes internals, because they might change and can depend on the cloudprovider implementation.
>>
>> e.g., HostPort 5000 is not available to pods because the Kubelet uses it.
>> e.g., HostPort 22 is not available to pods because sshd in GCE uses it.
>
> Yes, this is a problem that I want to find a way to fix. I know how I want it to work, but GCE doesn't support it.

If we manage the list of used ports in etcd, that case can also be covered.

@srobertson

Here's our use case.

We have an app, and we often want to run many different versions of it at the same time: for example, production, staging, and one-off branches that a developer may want to spin up quickly for testing or demo purposes.

The app is lightweight enough that it doesn't matter if multiple versions are assigned to the same host. Choosing ports can be a pain and requires a high degree of coordination among those sharing the cluster.

As a first pass I was planning to:

  • For each flavor of the app, launch a pod with a corresponding environment label set. We would set containerPort to 8000 but not set a host port.
  • Create an nginx (or Go app) pod/service listening on port 80 that maps virtual hosts to Kubernetes API lookups and forwards traffic to the correct pod.
  • Set up DNS to point at the service on port 80, so these names would all point to the same service, which would find the right pods:
    • prod.example.com
    • demo.example.com
    • foo.example.com
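A hedged sketch of one such flavor in today's YAML (label and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-staging
  labels:
    app: myapp
    environment: staging   # hypothetical label distinguishing flavors
spec:
  containers:
  - name: app
    image: example/myapp:staging
    ports:
    - containerPort: 8000  # no hostPort, so flavors can share a host
```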

@bgrant0607
Member

@srobertson Yes, all versions should be able to use the same containerPort. I'd definitely like to set up DNS, for both pods (#146) and services.

@yugui I could see the argument for automatic port allocation for services (rather than for containers), since the port is passed to clients using environment variables, anyway. However, I'd like to move towards an approach where we allocate an IP address for each service and then create DNS mappings for the services.

@thockin
Member

thockin commented Jul 15, 2014

On Tue, Jul 15, 2014 at 10:08 AM, Scott Robertson <notifications@github.com> wrote:

> Here's our use case.
>
> We have an app, and we often want to run many different versions of it at the same time: for example, production, staging, and one-off branches that a developer may want to spin up quickly for testing or demo purposes.
>
> The app is lightweight enough that it doesn't matter if multiple versions are assigned to the same host. Choosing ports can be a pain and requires a high degree of coordination among those sharing the cluster.
>
> As a first pass I was planning to launch a pod per flavor with an environment label and containerPort 8000 but no host port, front them with an nginx (or Go app) service on port 80 that maps virtual hosts to Kubernetes API lookups, and point DNS names (prod.example.com, demo.example.com, foo.example.com) at that service.

My main sentiment is that you should not have to do this - it should just happen for you.

@bgrant0607 bgrant0607 mentioned this issue Jul 25, 2014
@bgrant0607 bgrant0607 added sig/network Categorizes an issue or PR as relevant to SIG Network. area/api Indicates an issue on api area. labels Sep 30, 2014
@bgrant0607 bgrant0607 changed the title [Enhancement] Automatic port number assignment Automatic port number assignment Oct 4, 2014
@bgrant0607 bgrant0607 added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Oct 4, 2014
@ghost

ghost commented Dec 14, 2014

+1

What's the progress on this? We are managing the assignments ourselves too :(

@bgrant0607
Member

@kelonye IP per service has been implemented. Therefore, service ports no longer collide.
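To illustrate why per-service IPs resolve the collisions, a hedged sketch in today's YAML (names hypothetical): each service gets its own virtual IP, so two services can both use port 80.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-a
spec:
  selector:
    app: frontend-a
  ports:
  - port: 80   # lives on frontend-a's own service IP
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-b
spec:
  selector:
    app: frontend-b
  ports:
  - port: 80   # no collision: frontend-b has a different service IP
```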

@ghost

ghost commented Dec 14, 2014

aha, awesome!

@yugui yugui closed this as completed Dec 15, 2014
keontang pushed a commit to keontang/kubernetes that referenced this issue May 14, 2016
keontang pushed a commit to keontang/kubernetes that referenced this issue Jul 1, 2016
harryge00 pushed a commit to harryge00/kubernetes that referenced this issue Aug 11, 2016
mqliang pushed a commit to mqliang/kubernetes that referenced this issue Dec 8, 2016
mqliang pushed a commit to mqliang/kubernetes that referenced this issue Mar 3, 2017
li-ang pushed a commit to li-ang/kubernetes that referenced this issue Dec 16, 2017
li-ang pushed a commit to li-ang/kubernetes that referenced this issue Dec 16, 2017
k8s-github-robot pushed a commit that referenced this issue Jan 3, 2018
seans3 pushed a commit to seans3/kubernetes that referenced this issue Apr 10, 2019
wking pushed a commit to wking/kubernetes that referenced this issue Jul 21, 2020
kinflate: added version command
linxiulei pushed a commit to linxiulei/kubernetes that referenced this issue Jan 18, 2024
Fix build tags manipulation in Makefile