VPA hostNetwork #47
Comments
Sorry, but I really don't want to add such a use case to the chart. You should try the post-renderer option available in recent versions of Helm to patch the chart before it's installed.
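For reference, a Helm post-renderer is any executable that receives the fully rendered manifests on stdin and writes the patched manifests to stdout; a common approach wraps kustomize. A minimal sketch of the kustomize side, assuming the webhook Deployment is named `vpa-admission-controller` (the real name may differ in this chart):

```yaml
# kustomization.yaml — layered on top of the rendered chart by a small
# wrapper script that saves Helm's stdin to all.yaml and then runs
# `kubectl kustomize .`, printing the patched manifests to stdout.
resources:
  - all.yaml
patches:
  - target:
      kind: Deployment
      name: vpa-admission-controller  # assumed name; check the chart's output
    patch: |-
      - op: add
        path: /spec/template/spec/hostNetwork
        value: true
```

The wrapper script is then passed to Helm with `--post-renderer ./wrapper.sh` at install or upgrade time, so the chart itself stays unmodified.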
Why? It's just a couple of lines and doesn't break backward compatibility anywhere.
Because I'd prefer to maintain charts with simple use cases that I can test, which is not the case with EKS and a custom CNI. Do you have any chart other than kube-prometheus-stack that you use and that has such an option? I don't understand why the patch you propose is applied to all deployments and not only to the one with the webhook.
I wasn't thorough in investigating where to apply that patch; I just followed the suggested diff in the linked issue. You are probably right that it is only needed for the admission controller. Example charts that have this option:
The more I think about your need, the more problems I see coming. You will soon need configurable ports on the pod to prevent port conflicts on your nodes, and other things like that.
That is a very strange and counterproductive way to think about it. I merely found a missing bit that other charts provide and that VPA needs as well, for the quite common EKS + custom CNI use case. It has no relation whatsoever to port conflicts or any other topic. I just wanted to contribute to make this chart more useful, but it's up to you, really.
If you're using hostNetwork on the pod, which port will be used on the host?
The same one it uses without it. hostNetwork just allows access to the network used by the control plane; it doesn't change where the pod listens.
OK. What happens if another pod configured with hostNetwork uses the same port?
Nothing; pods have names, namespaces, and individual addresses, and they use the same port number across replicas anyway.
Nothing? Can you launch a
And compare it to the IPs listed with
My apologies, you are right: pods with hostNetwork do get node private IPs assigned, and when there isn't a node with the requested port number open, they fail to schedule. Cluster Autoscaler handles this gracefully, though. That shouldn't affect adding this option to the Helm chart, right?
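To illustrate the point settled above (a sketch with placeholder names): with hostNetwork the pod shares the node's network namespace, so its pod IP equals the node IP and every port it listens on is taken directly on the node, which is why two such pods needing the same port cannot land on the same node.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-net-example   # placeholder name
spec:
  hostNetwork: true        # pod IP == node IP; ports bind on the node itself
  containers:
    - name: app
      image: nginx         # placeholder image
      ports:
        - containerPort: 8080   # occupies port 8080 on the node
```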
Can you explain the solution Cluster Autoscaler uses? I checked this chart https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler and this older one https://github.com/helm/charts/tree/master/stable/cluster-autoscaler and can't find it. The main problem for me is that I can't test such a change, now or in the long term, as I'm not an AWS user. As I said before, problems will come with such a network mode if we make only the change you ask for.
Here is the autoscaler: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler I've tested it, and it only works with that change; it doesn't work without it. I have no idea what kind of problems you refer to.
All the problems I've already mentioned: port conflicts mainly, and certainly other things. I'll keep the issue open to see if other people have some bright ideas about this, but I will not apply the change you want now.
Closing this one, as the pod port can now be configured.
Hi,
thank you for the VPA chart! I'm using AWS EKS with the Calico CNI (a common practice to get rid of the ridiculous pod-per-node limits), and this setup needs hostNetwork enabled for trusted services; see the text in the blue box here, or the issue with a solution at the bottom here.
Can you please add an option to the chart values so I don't have to patch the manifests after installing? It should be really easy; this is what I'm currently using to make it work:
And the patch is:
A simple setting in values could drive this:
Example here
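A sketch of what such an option could look like; the `hostNetwork.enabled` key name and the template layout are assumptions, not this chart's actual structure:

```yaml
# values.yaml (proposed addition)
admissionController:
  hostNetwork:
    enabled: false   # set to true on EKS with a custom CNI
---
# admission-controller deployment template, pod spec fragment
spec:
  template:
    spec:
      {{- if .Values.admissionController.hostNetwork.enabled }}
      hostNetwork: true
      {{- end }}
```

Scoping the flag to the admission controller keeps the other VPA deployments on the pod network, matching the earlier observation that only the webhook needs to be reachable from the control plane.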
Thank you in advance