vertical-pod-autoscaler 0.3.0 on AWS EKS - admission controller doesn't kick in #1547
Which Kubernetes version?
Kubernetes version is "v1.10.11-eks". I did just that in the meantime. I'm pretty sure it's a wrong EKS config: the in-cluster service URL works fine, but I get no calls from the apiserver when pods are created (checked with tcpdump; nothing at all, so it's not just a lack of log entries). I'm in contact with AWS support and will update this case once I learn more. Currently, there's no way to get control plane logs in EKS :|
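For anyone wanting to reproduce that check, a minimal sketch of verifying whether the apiserver ever reaches the webhook pod. The namespace, label, and container port 8000 are assumptions based on the default VPA manifests, and tcpdump must be available in the container image:

```sh
# Find the admission controller pod (namespace and label are assumptions).
kubectl -n kube-system get pods -l app=vpa-admission-controller

# Capture traffic on the webhook's container port; if the apiserver calls
# the webhook, TLS handshakes should show up here.
kubectl -n kube-system exec -it <admission-controller-pod> -- \
  tcpdump -i any -n port 8000

# In another terminal, create a pod matching the VPA selector and watch
# for incoming connections.
kubectl run hamster-test --image=nginx --restart=Never
```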
I see, that's a bummer :( One thing you can also try to do in the meantime is change the …
/sig aws
Apparently, on AWS EKS the admission controller pod has to listen on 443 itself (no matter whether the service is set up to forward to any other port). It looks like they are using a weird way to resolve the endpoint (maybe they are not using this: https://github.com/kubernetes/kubernetes/blob/release-1.11/staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/config/serviceresolver.go). Applying this https://github.com/kubernetes/autoscaler/pull/1613/files#diff-741c9c09f72b481cf3cb277a6a2ee929 and passing …
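To make that workaround concrete, a sketch of a Service exposing the admission controller directly on 443. The name and selector mirror the default VPA manifests but are assumptions here, as is the container actually listening on 443 via the patch above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vpa-webhook
  namespace: kube-system
spec:
  ports:
    # EKS appears to dial the pod on 443 directly, ignoring the service
    # port mapping, so the container itself must listen on 443.
    - port: 443
      targetPort: 443
  selector:
    app: vpa-admission-controller
```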
@safanaj Thanks for the update! I'll take a look at your PR today, hopefully.
Verify the rules on the security groups you use for the cluster control plane and for the worker nodes. In particular, verify that the control plane security group allows egress to the worker node security group on port …. The default node group template allows port ….
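A sketch of how one might inspect and, if needed, open that egress rule with the AWS CLI. The security group IDs are placeholders, and port 443 is an assumption based on the rest of this thread:

```sh
# Inspect the control plane security group's egress rules
# (sg-control-plane is a placeholder ID).
aws ec2 describe-security-groups --group-ids sg-control-plane \
  --query 'SecurityGroups[0].IpPermissionsEgress'

# Allow the control plane to reach the worker node security group on 443,
# the port the webhook pod listens on per the workaround above.
aws ec2 authorize-security-group-egress --group-id sg-control-plane \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,UserIdGroupPairs=[{GroupId=sg-worker-nodes}]'
```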
Yes, we have checked our security groups; it seems you have to use port 443, as @safanaj mentioned above.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I think this is fixed already, since the change by @safanaj has been released.
@bskiba: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I faced this issue as well and was getting … in the …
Changing the …; this requires the addition of …
@brycecarman Actually you were right: instead of using port … I couldn't use the port ….
Hi!
I'm running VPA on an EKS cluster in AWS. It supports mutating webhooks, as claimed by AWS. Now, I have the following configuration (to test the "hamster" deployment in "Initial" mode):
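The manifest itself was lost from this page; as a stand-in, a sketch of what a VPA object in "Initial" mode for the hamster example might look like on VPA 0.3.x. The v1beta1 API version and label selector are assumptions, not the reporter's actual config:

```yaml
apiVersion: autoscaling.k8s.io/v1beta1
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa
spec:
  # v1beta1 selects pods by label (assumption for this VPA version).
  selector:
    matchLabels:
      app: hamster
  updatePolicy:
    # "Initial": requests are set only at pod admission time, which is
    # exactly the path that goes through the admission controller webhook.
    updateMode: "Initial"
```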
The webhook is registered and seems to be in place:
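(The output was not captured here; a sketch of how one would check the registration, with the configuration name "vpa-webhook-config" being an assumption based on the VPA defaults:)

```sh
# List mutating webhook configurations and dump the VPA one.
kubectl get mutatingwebhookconfigurations
kubectl get mutatingwebhookconfiguration vpa-webhook-config -o yaml
```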
Hamster pods are running, and the VPA object is created and successfully updated by the Recommender:
But the admission controller itself seems to do nothing; the only log lines I get (repeated over and over) are:
When new pods matching the selector are created, their default resources are not changed, nor does anything show up in the logs. How can I investigate this problem?
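A sketch of the checks that turned out to matter later in this thread. The service name, namespace, and port are assumptions based on the default VPA manifests:

```sh
# 1. Confirm the webhook service has endpoints, i.e. the pod is selected.
kubectl -n kube-system get endpoints vpa-webhook

# 2. From a node or debug pod, confirm the webhook answers TLS on the port
#    the apiserver dials; on EKS this turned out to be pod port 443
#    directly, not the service port mapping.
curl -k https://<pod-ip>:443/ -o /dev/null -w '%{http_code}\n'

# 3. Tail the admission controller logs while creating a matching pod.
kubectl -n kube-system logs deploy/vpa-admission-controller -f
```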