Enable generating random ports with host port mapping #49792
Comments
/sig network
@kubernetes/sig-network-feature-requests
This is a bit of complexity I'd like to avoid in the Kubernetes API if possible, but I need to understand the use-case a bit better to judge whether it can be avoided. @gyliu513 is there a reason you need each pod to be using a hostPort?
@caseydavenport, yes, I want to use hostPort.
+1. In the gaming industry that would be very valuable as well. Our use case: we want to deploy "game room units" into Kubernetes, and they can listen on any protocol, TCP or UDP. For us the best way to expose the game servers to the clients is by using hostNetwork, as we need performance (exchanging a lot of ticks between server and client), and creating a service for each pod would generate too much overhead (for UDP-based game servers).
+1. I have a similar use case where I want to start a Cassandra ring whose nodes are accessible from outside the Kubernetes cluster. I can do this by giving each pod its own hostPort. However, it would be nice if I didn't have to pick ports like this manually, and could just ask for a random available one and have the port number accessible in the pod via an API.
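For context, the manual approach described above looks roughly like this (a sketch only; the pod name, image, and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-0        # illustrative
spec:
  containers:
  - name: cassandra
    image: cassandra:3.11  # illustrative
    ports:
    - containerPort: 9042
      hostPort: 31042      # must be hand-picked per pod today
```

Every pod on the same node needs a distinct hostPort value, which is exactly the bookkeeping this issue asks Kubernetes to do automatically.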
/remove-kind bug
To allocate a random HostPort, it has to be done on the kubelet node, not by the API server. So we would have to pass it down to kubelet with some sentinel value indicating "allocate random", have kubelet allocate it, and then have kubelet write it back to the API server. Not impossible, but not implemented. Kubelet is going to create an iptables rule anyway, just the same as NodePort.
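As a rough illustration of the "allocate random on the node" step: the kernel can already hand out a free port by binding to port 0. This is a sketch, not kubelet code; `allocateRandomPort` is a hypothetical helper, and a real implementation would have to hold the reservation until the iptables rule is installed to avoid races.

```go
package main

import (
	"fmt"
	"net"
)

// allocateRandomPort asks the kernel for a free TCP port by binding to
// port 0, reads the assigned port, then releases the listener.
func allocateRandomPort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := allocateRandomPort()
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // a kernel-chosen ephemeral port
}
```

The tricky part the comment above points at is not choosing the port but plumbing the chosen value back into the API server's view of the pod.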
I've never looked into the Kubernetes source code (maybe this is an excuse to do so). I suppose it depends on what the hostPort data structure looks like, e.g. whether it's stored as a plain integer. We'd also need to make sure that the port information is easily accessible within the pod so that services could make use of it. I'm not sure how this would look if we asked for multiple ports.
That's a documented feature; the scheduler avoids allocating the hostPort twice. That's the scheduler, not the node, IIRC. Why do you have to specify hostPort? It's a sort-of legacy mechanism. A service, or a service + NodePort, should generally be preferred, I think.
I don't understand the use-case for this, given that hostPorts are largely unneeded in favor of services. Mind explaining exactly why you need hostPorts here at all, @gyliu513?
@euank The reason is that I want to do my own load balancing across different Kubernetes clusters via the Calico host port feature. Suppose I have two clusters, each with 3 worker nodes, and I want to start 6 pods on each cluster with Calico host ports. That will fail, as I can create at most 3 pods per cluster. But if we could generate the hostPort dynamically, I would not need to hardcode it in my YAML template, and I could create more pods in one cluster.
@gyliu513 How is that use case not covered by using a NodePort, and not setting any hostPorts at all?
My use case is specifically that I want to be able to access each of the pods in my deployment individually from outside the cluster (i.e. each pod must be able to identify its own specific endpoint). As far as I understand, a NodePort only allows you to load-balance between all the pods in the deployment, but in the case of Cassandra (or Kafka, HDFS, or another similarly clustered app) a client needs to be able to address each of the pods separately. It could be that I'm misunderstanding something (or not communicating my use case effectively). EDIT: some comments in #28660 touch on similar things, like "PetSet controller creates a service per pet".
@euank I do not want to use NodePort for this.
@gyliu513 As mentioned above, you could use a service-per-pod. Then, as long as you know which node the pod is running on (which you need to know anyway for the hostPort approach) and you direct your traffic to that node, the traffic won't get redirected and will behave similarly to hostPorts. Would that work for your case?
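A minimal sketch of the service-per-pod pattern (names and ports are illustrative; this assumes the pods come from a StatefulSet, whose controller labels each pod with `statefulset.kubernetes.io/pod-name`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra-0              # one Service per pod, named after it
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only serve traffic from the pod's own node
  selector:
    statefulset.kubernetes.io/pod-name: cassandra-0
  ports:
  - port: 9042
    targetPort: 9042
```

With `externalTrafficPolicy: Local`, traffic arriving at a node's NodePort is only delivered to pods on that node, which approximates the hostPort behavior.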
+1
@caseydavenport sorry for the late reply. Service-per-pod is one solution, but I do not want to use services: with thousands of services it would introduce quite a lot of iptables rules on my hosts, which hurts performance. So I do hope that host port mapping can help generate some random ports.
Had the same problem as you @gyliu513; our solution was to create https://github.com/topfreegames/maestro, which has logic to manually manage a pool of node ports.
@gyliu513 I'm confused about how you expect this to work:
hostPort mappings are typically also implemented as iptables rules. They would have very similar performance characteristics to services. If your use-case can't handle the overhead of mapping between a hostPort and a different constant containerPort, then your best bet would probably be to use host networking and have your containerized application listen on the host's network directly. Am I missing some detail here?
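The host-networking alternative mentioned above can be sketched as follows (an illustrative fragment; the names and image are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-server-0                 # illustrative
spec:
  hostNetwork: true                   # pod shares the node's network namespace
  containers:
  - name: server
    image: example/game-server:1.0    # illustrative
    # with hostNetwork the application binds host ports directly,
    # so it can pick a free port itself and advertise it
```

This skips the port-mapping iptables rules entirely, at the cost of the application having to manage port conflicts on its own.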
@euank Unlike services, I think the Calico host port feature only creates iptables rules on the node where the pod is running, so we will not have rules on every node for other pods' traffic the way services do. This can help improve performance. Can you please show a detailed example of the solution you proposed? It would be great if you could show some YAML templates, thanks!
I want to create RCs, RSs, and Jobs with hostPort and containerPort. But this requires knowing in advance which ports can be used...
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /remove-lifecycle stale comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
We are prototyping a move of a Docker-based homegrown orchestration system over to k8s, and this functionality is important. The existing orchestration relies on the random port assignment done by Docker to ensure that, even though each of the thousands of containers is essentially identical, they can be addressed individually via a host IP and unique port pair. We would similarly like to run each container as its own pod, managed via a Deployment, and be able to address each individual pod from outside the cluster.
Is NodePort an option here?
Just going to chime in here on this dead issue with my use-case. I use K8s to launch blockchain nodes via StatefulSets. It is a requirement that each unique node/pod has a unique hostPort. There is the caveat, however, that the blockchain nodes have to be configured with the same port they're bound to on the host. It was my hope that I would be able to request a random hostPort and have the assigned value made available to the pod.
I have the same problem. Has this been solved? How did you solve it?
If anyone wants the code, this is how we do random, non-conflicting hostPort management in Agones, since we had the same issue for dedicated multiplayer game servers. It is tied to our CRD, but it could probably be turned into something more generic if someone were so inclined. https://github.com/googleforgames/agones/blob/master/pkg/gameservers/portallocator.go
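In the spirit of the Agones allocator linked above, here is a much-simplified sketch of random, non-conflicting port allocation from a fixed pool. The type and method names are illustrative, not Agones' actual API, and a real controller would also need persistence and per-node tracking.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

// PortAllocator hands out random, non-conflicting ports from a fixed range.
type PortAllocator struct {
	min, max int
	inUse    map[int]bool
}

func NewPortAllocator(min, max int) *PortAllocator {
	return &PortAllocator{min: min, max: max, inUse: make(map[int]bool)}
}

// Allocate returns a random unused port from the range, or an error if
// the pool is exhausted.
func (a *PortAllocator) Allocate() (int, error) {
	free := make([]int, 0, a.max-a.min+1)
	for p := a.min; p <= a.max; p++ {
		if !a.inUse[p] {
			free = append(free, p)
		}
	}
	if len(free) == 0 {
		return 0, errors.New("port pool exhausted")
	}
	port := free[rand.Intn(len(free))]
	a.inUse[port] = true
	return port, nil
}

// Release returns a port to the pool, e.g. when its pod is deleted.
func (a *PortAllocator) Release(port int) {
	delete(a.inUse, port)
}

func main() {
	alloc := NewPortAllocator(7000, 7002)
	p1, _ := alloc.Allocate()
	p2, _ := alloc.Allocate()
	fmt.Println(p1, p2) // two distinct ports in [7000, 7002]
}
```

Releasing ports on pod deletion is the part that makes this workable at scale; without it the pool drains as pods churn.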
I have the same problem. How did you solve the problem of accessing internal services from outside the cluster?
Why was this issue closed? It seems like this is still something that many people are asking for. An easy example is a game server cluster: each server needs its own port. With Terraform you could then take that output of ports and use them.
/reopen
@ThiagoT1: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
With the help of @coderanger I've developed a system that might be able to solve a niche problem. |
one-service-per-pod is not acceptable
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
/kind feature
This is a problem that I found when testing https://github.com/projectcalico/k8s-policy/issues/109. When I use the hostport mapping feature in Calico, I always need to specify the pod's hostPort in the ports section. The problem is that if one node starts two such pods, only one pod can be started and the other will stay Pending forever, because it cannot get the same port again. Enabling host port mapping to generate the host port randomly would mean end users do not need to specify the host port themselves.
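The pod spec elided from the report above presumably resembled the following (an illustrative reconstruction, not the original; names and ports are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod   # illustrative
spec:
  containers:
  - name: app
    image: nginx       # illustrative
    ports:
    - containerPort: 80
      hostPort: 8080   # fixed value: a second such pod on the same node stays Pending
```

Because hostPort is a fixed value in the template, every replica stamped from it contends for the same port on whichever node it lands on.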