
Error from server (ServiceUnavailable): the server is currently unable to handle the request #188

Closed
alanh0vx opened this issue Dec 14, 2018 · 16 comments

Comments

@alanh0vx

Executing the command kubectl top nodes, it seems to freeze.

Executing the command kubectl get --raw /apis/metrics.k8s.io/v1beta1
returns Error from server (ServiceUnavailable): the server is currently unable to handle the request

Looking at the logs from metrics-server:

http: TLS handshake error from 192.168.133.64:51926:EOF

192.168.133.64 is the internal IP address of the api-server.

From the api-server log:

E1214 02:06:35.625504       1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1214 02:07:05.573958       1 available_controller.go:311] v1beta1.metrics.k8s.io failed with: Get https://10.109.179.236:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1214 02:07:05.658841       1 memcache.go:134] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request

kubelet version is 1.12.3, metrics-server is 0.3.1

I have another cluster with the same version and configuration where metrics-server works just fine.

api-server parameters

 - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=10.100.1.2
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
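
(For context: the ServiceUnavailable error means the metrics.k8s.io APIService is registered but the api-server cannot reach the backing metrics-server Service; the 10.109.179.236 address in the log above is most likely that Service's ClusterIP. A cluster-agnostic diagnostic sketch using standard kubectl:)

# Show the aggregated API registration and its Available condition
kubectl get apiservice v1beta1.metrics.k8s.io -o yaml

# Check that the metrics-server Service has endpoints backed by a running pod
kubectl -n kube-system get svc,endpoints metrics-server
kubectl -n kube-system get pods -l k8s-app=metrics-server   # label as in the stock 0.3.x manifests
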
@garenwen

Have you solved the problem?

@serathius
Contributor

Have you seen a log line showing that metrics-server successfully generated its certificate?
I have had cases where it took 2-3 minutes before metrics-server was able to serve TLS.

@alanh0vx
Author

No, it still gives the same errors.

Are there any other places I can check?

@serathius
Contributor

I would increase verbosity in metrics-server and try to find something in the logs.
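
(metrics-server uses the standard klog flags, so verbosity can be raised by adding a -v flag to the container's command in metrics-server-deployment.yaml; a minimal sketch, not a complete container spec:)

  containers:
  - name: metrics-server
    command:
    - /metrics-server
    - --v=4    # klog verbosity; higher values such as 10 are much noisier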

@abdennour

Has anyone solved the problem?

@alanh0vx
Author

I just keep getting logs like these:

I1220 09:39:33.000928       1 logs.go:49] http: TLS handshake error from 192.168.133.64:45734: EOF
I1220 09:40:02.985989       1 logs.go:49] http: TLS handshake error from 192.168.133.64:45794: EOF
I1220 09:40:10.416586       1 manager.go:95] Scraping metrics from 5 sources
I1220 09:40:10.419951       1 manager.go:120] Querying source: kubelet_summary:k83.xxxx.xxx
I1220 09:40:10.424855       1 manager.go:120] Querying source: kubelet_summary:k82.xxxx.xxx
I1220 09:40:10.427763       1 manager.go:120] Querying source: kubelet_summary:k84.xxxx.xxx
I1220 09:40:10.435885       1 manager.go:120] Querying source: kubelet_summary:k85.xxxx.xxx
I1220 09:40:10.446844       1 manager.go:120] Querying source: kubelet_summary:k81.xxxx.xxx
I1220 09:40:16.078000       1 manager.go:150] ScrapeMetrics: time: 5.661316126s, nodes: 5, pods: 46

@d0o0bz

d0o0bz commented Dec 31, 2018

Try editing the metrics-server-deployment.yaml file and adding these command parameters:

  containers:
  - name: metrics-server
    command:
    - /metrics-server
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP
    volumeMounts:
    - name: tmp-dir
      mountPath: /tmp
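
(After re-applying the deployment it can take a minute or two before the aggregated API becomes available again; a quick check, assuming the stock object names:)

kubectl apply -f metrics-server-deployment.yaml
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes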

@alanh0vx
Author

Thanks, it turned out to be an MTU issue.

In the Calico config the MTU was 1500, while the interface MTU is 1450.

Finally I ran kubectl edit configmap calico-config -n kube-system and changed the MTU value from 1500 to 1430.
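
(For reference, in recent Calico manifests the MTU value lives under the veth_mtu key of that ConfigMap; the key name may differ in other Calico versions. A hypothetical excerpt after the edit:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  veth_mtu: "1430"   # was "1500"; the node interface MTU is 1450, so leave headroom for the overlay

The calico-node pods may need to be restarted before the new MTU takes effect.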

@Darren-wh

You can add this:

command:
- /metrics-server
- --metric-resolution=30s
- --requestheader-allowed-names=aggregator
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP

It will work.
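
(If editing the manifest file is not convenient, the same flags can be appended in place; a sketch using a JSON patch, assuming the container already defines a command list:)

kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--kubelet-insecure-tls"}]'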

@kevinsingapore

Reinstall metrics-server again; following that approach, I solved the error.

@NEWgaofeng

Does InternalIP need to be the local host's IP?

@NEWgaofeng

NEWgaofeng commented Dec 18, 2019

My YAML:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"

On the node I ran:
kubectl apply -f cpu-request-limit.yaml
then
kubectl top pod cpu-demo --namespace=cpu-example

but it errors with:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

@anrajme

anrajme commented Feb 16, 2020

Adding hostNetwork: true in the spec fixed my issue!

hostNetwork:
  enabled: true

Duplicate of #157
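
(The enabled: true form above looks like the Helm chart's values layout; with the raw manifests the equivalent is setting the field directly on the pod template, roughly:)

spec:
  template:
    spec:
      hostNetwork: true   # run metrics-server in the node's network namespace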

@abdennour

abdennour commented Mar 24, 2020

@aneesh121 you saved me 2 days. Thanks!

Anyone who provisioned their cluster with kubeadm, please start by checking the answer in #188 (comment).

For EKS users, it requires only the arg --kubelet-preferred-address-types=InternalIP, as I explained here.
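
(That is, roughly this in the metrics-server container spec:)

      containers:
      - name: metrics-server
        args:
        - --kubelet-preferred-address-types=InternalIP   # per the comment above, this alone is enough on EKS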

@fardin01

> Adding hostNetwork: true in the spec fixed my issue!
>
> hostNetwork:
>   enabled: true
>
> Duplicate of #157

This is the solution only when metrics-server runs on EKS.

@x1wins

x1wins commented Feb 24, 2021

Took the original components.yaml and added --kubelet-insecure-tls:

kubectl create -f https://raw.githubusercontent.com/x1wins/CW-OVP/master/k8s-manifests/components.yaml
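
(A quick way to confirm afterwards, using standard kubectl:)

kubectl -n kube-system rollout status deployment metrics-server
kubectl top nodes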
