"kubectl top" no longer works when --read-only-port=0 is set on kubelet #64027

Closed
jeroenjacobs1205 opened this Issue May 18, 2018 · 8 comments


jeroenjacobs1205 commented May 18, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

After setting --read-only-port=0 on all kubelets in my cluster, "kubectl top" no longer works:

kubectl top node
error: metrics not available yet

kubectl top pods --all-namespaces
(no output is displayed)

What you expected to happen:

I expected the same output as before the change:

kubectl top nodes
NAME            CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
ip-10-9-0-244   63m          7%        1061Mi          64%
ip-10-9-0-80    57m          7%        1008Mi          61%
ip-10-9-1-231   56m          7%        1141Mi          69%
kubectl top pods --all-namespaces                                                                                                 
NAMESPACE       NAME                                         CPU(cores)   MEMORY(bytes)
alb-heartbeat   alb-heartbeat-app-64778c56fc-82tmf           0m           8Mi
alb-heartbeat   alb-heartbeat-app-64778c56fc-xtg9m           0m           16Mi
conduit         controller-5b576d647b-xwpv7                  1m           87Mi
conduit         grafana-7c85b54866-wnhjl                     3m           43Mi
conduit         prometheus-5d8f9cd8b8-r5mgx                  2m           117Mi
conduit         web-5bf454788f-5nnzt                         0m           37Mi
kube-system     heapster-5d758f7c9d-89s6p                    1m           44Mi
kube-system     kube-apiserver-ip-10-9-0-244                 13m          311Mi
kube-system     kube-controller-manager-ip-10-9-0-244        11m          81Mi
kube-system     kube-dns-66d5bc69b8-hcvrn                    0m           40Mi
kube-system     kube-proxy-n4t85                             1m           26Mi
kube-system     kube-proxy-t9tcm                             1m           27Mi
kube-system     kube-proxy-tx7hb                             1m           24Mi
kube-system     kube-scheduler-ip-10-9-0-244                 3m           23Mi
kube-system     kubernetes-dashboard-5bd6f767c7-nw8h2        0m           25Mi
kube-system     nfs-client-provisioner-67995fb67c-nn7p5      0m           13Mi
kube-system     nginx-dummy-ip-10-9-0-80                     0m           2Mi
kube-system     nginx-dummy-ip-10-9-1-231                    0m           3Mi
kube-system     nginx-master-lb-ip-10-9-0-80                 0m           2Mi
kube-system     nginx-master-lb-ip-10-9-1-231                0m           4Mi
kube-system     traefik-ingress-controller-55bb447c4-9czrg   1m           24Mi
kube-system     weave-net-kzdfw                              0m           55Mi
kube-system     weave-net-lfb59                              0m           64Mi
kube-system     weave-net-qprdl                              0m           54Mi

How to reproduce it (as minimally and precisely as possible):

Add --read-only-port=0 to kubelet.
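
For reference, on kubelets that read their settings from a --config file instead of command-line flags, the equivalent setting is the readOnlyPort field of KubeletConfiguration. A minimal sketch (other fields omitted; keep whatever your existing config already sets):

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# 0 disables the unauthenticated read-only port (10255 by default)
readOnlyPort: 0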

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.9.7
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CentOS
  • Kernel (e.g. uname -a): Linux ip-10-9-0-244.eu-west-1.compute.internal 4.16.0-1.el7.elrepo.x86_64 #1 SMP Sun Apr 1 20:13:35 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: custom ansible scripts
  • Others:

jeroenjacobs1205 commented May 18, 2018

/sig node

k8s-ci-robot added sig/node and removed needs-sig labels May 18, 2018

wgliang (Member) commented May 21, 2018

@jeroenjacobs1205

Setting it to 0 disables the read-only port; this is the documented behavior:

fs.Int32Var(&c.ReadOnlyPort, "read-only-port", c.ReadOnlyPort, "The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable)")


jeroenjacobs1205 commented Jun 19, 2018

The only fix I have found so far is editing the heapster ClusterRole and adding "nodes/stats" to the allowed resources.

Probably a duplicate of this one: kubernetes-retired/heapster#1936
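
A rough sketch of that ClusterRole edit, assuming the stock system:heapster role shipped with the upstream heapster RBAC manifests (the role name and pre-existing rule list may differ in your deployment):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:heapster
rules:
- apiGroups: [""]
  resources:
  - events
  - namespaces
  - nodes
  - pods
  # added: lets heapster pull stats through the authenticated kubelet port (10250)
  - nodes/stats
  verbs:
  - get
  - list
  - watch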


binumn commented Jul 30, 2018

Try the following as well, then restart the kubelet and Docker on the masters:

kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
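
That manifest essentially binds the heapster ServiceAccount in kube-system to the system:heapster ClusterRole; a minimal sketch of what it contains (object names assumed from the upstream deploy files):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system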


fejta-bot commented Oct 28, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


fejta-bot commented Nov 27, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten


fejta-bot commented Dec 27, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented Dec 27, 2018

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
