Set DefaultHeapsterPort to a more sensible default #58289
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: itskingori. Assign the PR to them by writing the assign command in a comment when ready.

No associated issue. Update the pull-request body to add a reference to an issue, or get approval with the no-issue approve command. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS files. You can indicate your approval by writing the approve command in a comment.
Before we change this, we need to know whether this used to work (empty port defaulted to the first port), in which case this was a regression in the service proxy and we need to either fix that or document that it was deliberately changed.
@smarterclayton Sure. Based on my examples in the description, it wasn't/isn't working for me. I tried to look into the commit/PR (5df9fe6/#28844) that introduced that code for more context on the "use the first exposed port on the service" comment, but couldn't find anything useful. It doesn't seem like it's been changed or reviewed since. Admittedly, I'm also not sure hard-coding the port is the right solution. I just figured a PR with an idea is better than filing an issue, and I might be hiding the symptoms of a larger issue. @DirectXMan12 I saw you in some heapster/kubectl-top issues ... and you're in the podautoscaling OWNERS. I hope you don't mind weighing in on this, and maybe on any possible effects on this line in ...
/assign @smarterclayton
/cc @DirectXMan12
ok, so. This is a common misconception, compounded by the fact that the default behavior only makes sense if you tilt your head ever-so-slightly to the left. What the service proxy actually does is use the first port with no name on the service, and it has done that for as long as I can remember (I've hit this before when attempting to actually give my Heapster ports names). Now you might think to yourself:
To which one of your other selves might reply:
Upon viewing that, your first and second selves are satisfied, but a third self questions:
Alas, nobody quite knows that answer, but a fourth self conjectures:
Now, having enlightened several of your selves, you may lament aloud in frustration
At which point your eyes shall fall upon PR #56206, and behold API discovery mechanisms, and see that they solve many such problems.
So, if I had to hazard a guess, you've set a port name on your Heapster service, and that's what's causing the issue. I'm also guessing that your HPA doesn't work. If either of those isn't true, then we have an actual bug. Otherwise, this is more-or-less known behavior (not amazing behavior, but somewhat known nonetheless). We can't really change the default behavior, because we can't assume that people weren't relying on using an unnamed port with a different port number, and if we did change it, we'd need to change the legacy HPA client at the same time. (EDIT P.S. the above comment is not intended to be snarky, I'm just in a strange mood)
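To make the "first port with no name" behavior concrete, here is a minimal sketch of the two Service shapes being discussed, written against the client-go API types; the namespace, selector, and port numbers are illustrative rather than taken from this PR:

```go
package heapsterexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// heapsterServiceUnnamedPort is the shape the empty-port default expects: with no
// port name, the apiserver service proxy picks this first unnamed port.
var heapsterServiceUnnamedPort = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "heapster", Namespace: "kube-system"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"k8s-app": "heapster"},
		Ports: []corev1.ServicePort{
			{Port: 80, TargetPort: intstr.FromInt(8082)}, // Name deliberately left empty
		},
	},
}

// heapsterServiceNamedPort is the same Service with a named port, the situation the
// comment above suggests breaks the empty-port proxy lookup.
var heapsterServiceNamedPort = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "heapster", Namespace: "kube-system"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"k8s-app": "heapster"},
		Ports: []corev1.ServicePort{
			{Name: "http", Port: 80, TargetPort: intstr.FromInt(8082)},
		},
	},
}
```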
@DirectXMan12 amazing explanation in #58289 (comment)! Thanks a lot!
Yes! And I couldn't figure out why 😓 ... setting the flag did make it work, so I poked around the code to understand why. The comment in the code doesn't explain the full story. It would have if it were "use the first exposed port on the service that's not named" 😅
Yes!! And I don't quite know why (the dashboard works, though). I suspected that it might be because of this, so I'll remove the named ports on heapster and see if my problems go away.
I 💯% agree.
Actually, I wasn't quite sold on this solution because it hard-codes a port, which forces everyone to put heapster on port 80. My goal was to explain the issue better and spur a discussion (a PR is better than filing an issue) ... to which I'd say this was a success.
It didn't come off snarky. Makes sense. I ... thought my PR title was snarky 🙈 ... implicitly calling the code author not-quite-sensible, so I understand. 😅
@DirectXMan12 so, glad to report that ...

$ kubectl autoscale deployment ingress-nginx-controller --cpu-percent=50 --min=3 --max=12 -n kube-system
$ kubectl describe hpa/ingress-nginx-controller -n kube-system
Name: ingress-nginx-controller
Namespace: kube-system
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 17 Jan 2018 08:59:50 +0300
Reference: Deployment/ingress-nginx-controller
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 3
Max replicas: 12
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 1s (x2 over 31s) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics)
Warning FailedComputeMetricsReplicas 1s (x2 over 31s) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics)
@DirectXMan12 HPA issue is unrelated and resolved. PR submitted in kubernetes-sigs/metrics-server#32. Thanks for your guidance, you set me on the right path. |
What this PR does / why we need it:
Because it seems kubectl top is broken. The code comments (removed by this PR) indicate that it should "use the first exposed port on the service", but this does not seem to be the case.

Below is what I get when trying to use it on a node:
Below is what I get when trying to use it on a pod:
Poking around the API server logs (i.e. every time the error happens) ... I find a trace that looks like this.
Which led me to believe that the port was missing (notice the http:heapster:), and it does seem like the default is to use DefaultHeapsterPort, which is set to an empty string. As you've seen, adding the port fixes the issue.
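For context, here is a rough sketch of how the empty default ends up producing that http:heapster: segment. The constants mirror the defaults described above, and ProxyGet is the client-go call for going through the apiserver service proxy (signatures as of the client-go of that era, before DoRaw took a context; the metrics path and overall wiring are illustrative, not the exact code this PR touches):

```go
package heapsterproxy

import (
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
)

const (
	DefaultHeapsterNamespace = "kube-system"
	DefaultHeapsterScheme    = "http"
	DefaultHeapsterService   = "heapster"
	// Empty today, which yields a proxied resource name of "http:heapster:" and leaves
	// port selection to the apiserver (first unnamed port); this PR proposes "80" instead.
	DefaultHeapsterPort = ""
)

// fetchNodeMetrics sketches how a metrics client reaches Heapster through the apiserver
// service proxy, i.e. GET .../namespaces/kube-system/services/http:heapster:/proxy/<path>.
func fetchNodeMetrics(core corev1client.CoreV1Interface) ([]byte, error) {
	return core.Services(DefaultHeapsterNamespace).
		ProxyGet(DefaultHeapsterScheme, DefaultHeapsterService, DefaultHeapsterPort,
			"/apis/metrics/v1alpha1/nodes", nil).
		DoRaw()
}
```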
Which issue(s) this PR fixes:

This improves the defaults so that kubectl top works out of the box. No more need to set the --heapster-port flag whenever you're using it (assuming one has Heapster running on port 80).

Might be related to these issues:
Special notes for your reviewer:
Official heapster documentation/examples all use port 80 for heapster, so it seems to me to be a good default. See:
Even other non-heapster-official (but still kubernetes-org) examples set it up on port 80:
I also noticed that the legacy metrics client picks up this value.
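A hedged sketch of what that could mean in practice, assuming the legacy HPA-side Heapster client is wired from the same set of defaults; the type and constructor below are hypothetical stand-ins for illustration, not the actual upstream code:

```go
package legacysketch

// These mirror the defaults discussed above (assumption: the legacy client shares them).
const (
	defaultHeapsterNamespace = "kube-system"
	defaultHeapsterScheme    = "http"
	defaultHeapsterService   = "heapster"
	defaultHeapsterPort      = "" // if kubectl's default moves to "80", this would need to follow
)

// heapsterClientConfig is a hypothetical stand-in for how the legacy HPA metrics
// client might carry its Heapster connection settings.
type heapsterClientConfig struct {
	Namespace, Scheme, Service, Port string
}

// newDefaultHeapsterClientConfig shows the coupling: changing kubectl's default port
// without touching this side would leave the two clients pointing at different ports.
func newDefaultHeapsterClientConfig() heapsterClientConfig {
	return heapsterClientConfig{
		Namespace: defaultHeapsterNamespace,
		Scheme:    defaultHeapsterScheme,
		Service:   defaultHeapsterService,
		Port:      defaultHeapsterPort,
	}
}
```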
Release note: