
VPA hostNetwork #47

Closed
TamasSzigeti opened this issue Nov 18, 2020 · 17 comments

Comments

@TamasSzigeti commented Nov 18, 2020

Hi,

thank you for the VPA chart! I'm using AWS EKS with the Calico CNI (a common practice to get rid of the ridiculously low pods-per-node limits), and this setup needs hostNetwork enabled for trusted services; see the text in the blue box here, or the issue with a solution at the bottom here.

Can you please add an option to the chart values so I don't have to patch the manifests after installing? It should be really easy; this is what I'm currently using to make it work:

helm install vpa cowboysysop/vertical-pod-autoscaler \
    --namespace=kube-system \
    --values values.vpa.yaml
kubectl patch deployment vpa-vertical-pod-autoscaler-admission-controller \
    --namespace=kube-system \
    --patch "$(cat hostnetwork.patch.yaml)"
kubectl patch deployment vpa-vertical-pod-autoscaler-recommender \
    --namespace=kube-system \
    --patch "$(cat hostnetwork.patch.yaml)"
kubectl patch deployment vpa-vertical-pod-autoscaler-updater \
    --namespace=kube-system \
    --patch "$(cat hostnetwork.patch.yaml)"

And the patch is:

spec:
    template:
        spec:
            hostNetwork: true
            dnsPolicy: ClusterFirstWithHostNet

A simple setting in values could drive this:

hostNetwork:
    enabled: true

Example here
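For illustration, the flag could be wired into the admission controller deployment template with a guard along these lines (a sketch only, not the chart's actual template):

spec:
    template:
        spec:
            {{- if .Values.hostNetwork.enabled }}
            hostNetwork: true
            dnsPolicy: ClusterFirstWithHostNet
            {{- end }}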

Thank you in advance

@sebastien-prudhomme (Contributor)

Sorry, but I really don't want to add such a use case to the chart.

You should try the new post-renderer option available in the latest versions of Helm to patch the chart before it's installed.
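For reference, a minimal post-renderer could look like this (a sketch assuming kustomize is installed; the script name is arbitrary, and the patch file is the one from above extended with the object identity so kustomize can target it):

# kustomization.yaml
resources:
    - all.yaml
patchesStrategicMerge:
    - hostnetwork.patch.yaml

# hostnetwork.patch.yaml -- same patch as above, plus apiVersion/kind/name
apiVersion: apps/v1
kind: Deployment
metadata:
    name: vpa-vertical-pod-autoscaler-admission-controller
spec:
    template:
        spec:
            hostNetwork: true
            dnsPolicy: ClusterFirstWithHostNet

# kustomize-post-renderer.sh (arbitrary name, must be executable)
#!/bin/sh
# Helm pipes the fully rendered manifests to stdin and reads the patched
# result back from stdout
cat > all.yaml
kustomize build .
rm all.yaml

Then the install becomes a single command, with no kubectl patch afterwards:

helm install vpa cowboysysop/vertical-pod-autoscaler \
    --namespace=kube-system \
    --values values.vpa.yaml \
    --post-renderer ./kustomize-post-renderer.sh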

@TamasSzigeti (Author)

Why? It's just a couple of lines and doesn't break backward compatibility anywhere.

@sebastien-prudhomme (Contributor)

Because I prefer to maintain charts with simple use cases that I can test, which is not the case for EKS with a custom CNI.

Do you use any chart other than kube-prometheus-stack that has such an option? I also don't understand why the patch you propose is applied to all deployments and not only to the one with the webhook.

@TamasSzigeti (Author)

I wasn't thorough in investigating where to apply that patch; I just followed the suggested diff in the linked issue. You are probably right that it is only needed for the admission controller.

Example charts that have this option:

  • stable/metrics-server
  • ingress-nginx/ingress-nginx
  • prometheus-community/kube-prometheus-stack
  • prometheus-community/prometheus-adapter (I've submitted a fix here)

@sebastien-prudhomme (Contributor) commented Nov 18, 2020

The more I think about your need, the more problems I see coming. You will soon need configurable ports on the pod to prevent port conflicts on your nodes, and other things like that.

@TamasSzigeti (Author)

That is a very weird and counter-productive way to think about it. I merely found a missing bit that other charts provide and that VPA needs as well, for the quite common EKS + custom CNI use case. It has no relation whatsoever to port conflicts or any other topic. I just wanted to contribute to make this chart more useful, but it's up to you, really.

@sebastien-prudhomme (Contributor)

If you're using hostNetwork on the pod, which port will be used on the host?

@TamasSzigeti (Author)

Same as the one it uses without it. hostNetwork just allows access to the network used by the control plane; it doesn't change where the pod listens.

@sebastien-prudhomme (Contributor)

OK. What happens if another pod configured with hostNetwork uses the same port?

@TamasSzigeti (Author)

Nothing: pods have names, namespaces, and individual addresses, and they use the same port number across replicas anyway.

@sebastien-prudhomme (Contributor)

Nothing? Can you run kubectl get pod -o wide in a namespace with a pod that has hostNetwork: true?

@sebastien-prudhomme (Contributor)

And compare it to the IPs listed by kubectl get node -o wide.
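Concretely, these are the two commands to compare (output will vary per cluster):

kubectl get pod --namespace=kube-system -o wide   # a hostNetwork pod shows a node IP in its IP column
kubectl get node -o wide                          # compare with the nodes' INTERNAL-IP column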

@TamasSzigeti (Author)

All apologies, you are right: pods with hostNetwork do get node private IPs assigned, and when there isn't a node with the requested port free, they fail to schedule. Cluster Autoscaler handles this gracefully, though. That shouldn't affect adding this option to the Helm chart, right?

@sebastien-prudhomme (Contributor)

Can you explain the solution Cluster Autoscaler uses?

I checked this chart https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler and this older one https://github.com/helm/charts/tree/master/stable/cluster-autoscaler and can't find it.

The main problem for me is that I can't test such a change, now or in the long term, as I'm not an AWS user. As I said before, problems will come with such a network mode if only the change you ask for is made.

@TamasSzigeti (Author)

Here is the autoscaler: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler

I've tested it, and it only works with that change; it doesn't work without it. I have no idea what kind of problems you are referring to.

@sebastien-prudhomme (Contributor)

All the problems I've already mentioned: mainly port conflicts, and certainly other things.

I'll keep the issue open to see if other people have bright ideas about this, but I will not apply the change you want for now.

@sebastien-prudhomme (Contributor)

Closing this one, as the pod port can now be configured.
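For anyone landing here later: the listening port can be set through the chart values; check the chart's values.yaml for the exact key, as the names below are hypothetical and shown only for illustration:

admissionController:
    containerPort: 8000   # hypothetical key name, not necessarily the chart's actual setting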
