GET nodes.metrics.k8s.io fails #1968

Closed
edenreich opened this issue Jun 28, 2020 · 17 comments

Comments

@edenreich

edenreich commented Jun 28, 2020

Hardware
Raspberry Pi 4 8GB RAM (Buster Lite OS)

Version: v1.18.4+k3s1

K3S arguments
Server: --docker --no-deploy=traefik
Agent: --docker

Describe the bug

Fresh installation of k3s; running kubectl top nodes returns a 503 Service Unavailable error from the API.

To Reproduce

Install k3s using k3s-ansible with the specified version

Expected behavior

I expected to see node metrics.

Actual behavior

Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

Additional context / logs

eden@eden ~> env KUBECONFIG=/home/eden/.kube/production kubectl -v5 top nodes
I0628 23:15:25.094137   17258 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server is currently unable to handle the request (get nodes.metrics.k8s.io)",
  "reason": "ServiceUnavailable",
  "details": {
    "group": "metrics.k8s.io",
    "kind": "nodes",
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "service unavailable"
      }
    ]
  },
  "code": 503
}]
F0628 23:15:25.094215   17258 helpers.go:115] Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

Systemd
Server:

k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-06-28 12:09:19 BST; 10h ago
     Docs: https://k3s.io
  Process: 614 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 621 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 622 (k3s-server)
    Tasks: 28
   Memory: 420.3M
   CGroup: /system.slice/k3s.service
           └─622 /usr/local/bin/k3s server --docker --no-deploy traefik

Jun 28 22:51:10 k8s-master k3s[622]: E0628 22:51:10.173721     622 resource_quota_controller.go:408] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jun 28 22:51:10 k8s-master k3s[622]: time="2020-06-28T22:51:10.276193535+01:00" level=error msg="node password not set"
Jun 28 22:51:10 k8s-master k3s[622]: time="2020-06-28T22:51:10.276651752+01:00" level=error msg="https://127.0.0.1:6443/v1-k3s/serving-kubelet.crt: 500 Internal Server Error"
Jun 28 22:51:10 k8s-master k3s[622]: E0628 22:51:10.759934     622 available_controller.go:420] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.37.24:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.37.24:443/apis/metrics.k8s.io/v1beta1: net/htt
Jun 28 22:51:10 k8s-master k3s[622]: time="2020-06-28T22:51:10.765030642+01:00" level=info msg="Waiting for master node  startup: resource name may not be empty"
Jun 28 22:51:11 k8s-master k3s[622]: time="2020-06-28T22:51:11.765333066+01:00" level=info msg="Waiting for master node  startup: resource name may not be empty"
Jun 28 22:51:12 k8s-master k3s[622]: time="2020-06-28T22:51:12.765718119+01:00" level=info msg="Waiting for master node  startup: resource name may not be empty"
Jun 28 22:51:13 k8s-master k3s[622]: time="2020-06-28T22:51:13.766206430+01:00" level=info msg="Waiting for master node  startup: resource name may not be empty"
Jun 28 22:51:14 k8s-master k3s[622]: time="2020-06-28T22:51:14.768944604+01:00" level=info msg="Waiting for master node  startup: resource name may not be empty"
Jun 28 22:51:15 k8s-master k3s[622]: http: TLS handshake error from 127.0.0.1:43918: remote error: tls: bad certificate
@edenreich changed the title from GET nodes.metrics.k8s.io to GET nodes.metrics.k8s.io Fails on Jun 28, 2020
@edenreich changed the title from GET nodes.metrics.k8s.io Fails to GET nodes.metrics.k8s.io fails on Jun 28, 2020
@BlueCrescent

I have the same problem with a fresh install of k3s version v1.17.4+k3s1 running on k3OS v0.10.0 (also on a Raspberry Pi 4, with the server argument --flannel-backend=ipsec).

I tried the modifications to /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml described here (and restarted with kubectl -n kube-system rollout restart deployment metrics-server), but the problem persists.

This is my YAML file now (maybe I added something in the wrong place?):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      hostNetwork:
        enabled: true
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: rancher/metrics-server:v0.3.6
        command:
        - /metrics-server
        - --metrics-resolution=30s
        - --requestheader-allowed-names=aggregator
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

This also results in the kubernetes-dashboard pod not starting.
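
(In case it's useful to anyone else debugging this, the state of the rollout and the pod logs can be checked with something like the following - just a sketch, using the k8s-app=metrics-server label from the manifest above:)

kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl -n kube-system logs deployment/metrics-server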

@jdmarshall

jdmarshall commented Jul 30, 2020

I don't understand why k3s is contacting random internet servers by default. When apps phone home it tends to make people unpleasant to talk to. If you're lucky they just uninstall it and move on.

I just got here and "leave?" is already on my TODO list.

@BlueCrescent

What do you mean?
The metrics-server is not "on the internet". It's a service that typically runs on a Kubernetes cluster, collecting RAM and CPU usage statistics and whatnot about the cluster's nodes (as far as I understand). It is not exclusive to k3s, and neither, it seems, is the ServiceUnavailable problem.

@brandond
Contributor

brandond commented Jul 30, 2020

@jdmarshall It sounds like you're under the impression that https://10.43.37.24:443/apis/metrics.k8s.io/v1beta1 is a server on the internet. All 10.x.x.x addresses, like 192.168.x.x and 172.16.x.x-172.31.x.x, are reserved for private networks that you will not (or at least should not) find on the internet at large. See: https://tools.ietf.org/html/rfc1918

In this case, 10.43.x.x is used for Kubernetes services running within your cluster, while 10.42.x.x is used for Kubernetes pods. None of this is k3s specific; it's core to how Kubernetes works. If you're having an issue with your k3s cluster, please open a new issue - but perhaps try to avoid jumping to any conclusions about what k3s is or is not doing.
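
If you want to confirm where that address comes from, it is just the ClusterIP of the metrics-server Service inside the cluster; something like this shows it (a sketch, using the default name and namespace from the bundled manifest):

kubectl -n kube-system get svc metrics-server
kubectl -n kube-system get endpoints metrics-server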

@jdmarshall

I'm not seeing IP addresses. I'm seeing repeated errors trying to connect to FQDNs, like v1beta1.metrics.k8s.io

Why use a subdomain of a registered internet domain for RFC1918 traffic? That doesn't telegraph 'local address lookup', let alone local/vlan traffic.

@brandond
Contributor

brandond commented Jul 31, 2020

@jdmarshall are you talking about errors like:
Jun 28 22:51:10 k8s-master k3s[622]: E0628 22:51:10.759934 622 available_controller.go:420] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.37.24:443/apis/metrics.k8s.io/

That's the APIService name. APIService names are all namespaced like that as part of the Kubernetes standard; it's no more a hostname than a Java class name containing java.sun.com is a hostname. See: https://github.com/kubernetes-sigs/metrics-server/blob/master/manifests/base/apiservice.yaml#L5

The error indicates that it's failing to access a resource with that API group from a cluster API server endpoint.
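
If you want to poke at it, that object can be inspected directly (a sketch):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io

The Conditions in the describe output carry the same "failing or missing response" message you see in the k3s log when the backing service is unreachable.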

@rlabrecque

rlabrecque commented Aug 18, 2020

Has anyone made any progress on this?

It's the last issue I'm running into with my new little AWS-based k3s cluster.

My issue is exactly as described.

I've tried various forms of:

  • Opening up all of my SGs for all access across 10.x.x.x.
  • Setting the following on the metrics-server deployment:

      hostNetwork:
        enabled: true

        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP

  • Checking my DNS settings.
  • Raising the resource limits for metrics-server.

Still the same issue. :(

Edit:

I have re-rolled my cluster and updated to using curl -sfL https://get.k3s.io | K3S_TOKEN="redacted" INSTALL_K3S_EXEC="--tls-san redacted.elb.us-west-2.amazonaws.com --disable traefik" sh -

I was previously using the setup script from here (with the k3s version updated!):

https://github.com/sgdan/k3s-test/blob/master/templates/server.j2#L20

I think that removing --disable-agent or adding --tls-san may have solved it for me!

@ViBiOh

ViBiOh commented Aug 25, 2020

I disabled the metrics-server a long time ago because it just didn't work. I gave it another try tonight and finally made it work!

My configuration:

  • 1 Pi3 as a master on v1.18.8+k3s1
  • 2 Pi4 as nodes on v1.18.8+k3s1

First of all, after a careful reading of the Kubernetes documentation I ended up adding the enable-aggregator-routing=true flag on the api-server. Here's my master's configuration (beware, I've also enabled Pod Security Policy, you might not want it :p )

k3s server --disable-agent --disable traefik --disable metrics-server --kube-apiserver-arg enable-admission-plugins=PodSecurityPolicy,NodeRestriction --kube-apiserver-arg enable-aggregator-routing=true

I've disabled the provided metrics-server in order to use the "official" one.

So, starting from the official deployment, I added the following args:

- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
- --v=2

--v=2 seems important, because once I added it, I got interesting logs from the pod:

I0825 19:08:29.470550       1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0825 19:08:32.327548       1 manager.go:95] Scraping metrics from 0 sources
I0825 19:08:32.327915       1 manager.go:148] ScrapeMetrics: time: 2.982µs, nodes: 0, pods: 0
I0825 19:08:32.371075       1 secure_serving.go:116] Serving securely on 0.0.0.0:4443

And finally, I added hostNetwork: true on the deployment, and after 2 minutes I had kubectl top pods working!
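
For reference, the relevant part of the Deployment ends up looking roughly like this (a trimmed sketch based on the official v0.3.x manifest; everything not shown is unchanged):

spec:
  template:
    spec:
      hostNetwork: true                                  # added
      containers:
      - name: metrics-server
        # image and the remaining container fields stay as in the official manifest
        args:
        - --cert-dir=/tmp                                # from the official manifest
        - --secure-port=4443                             # from the official manifest
        - --kubelet-insecure-tls                         # added
        - --kubelet-preferred-address-types=InternalIP   # added
        - --v=2                                          # added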

@edenreich
Author

edenreich commented Aug 25, 2020

Cool, thanks for sharing the workaround. Going to give it a try this weekend.

@edenreich
Author

@ViBiOh awesome, your solution works, I can finally get some pods and nodes output :)) thanks!!

[screenshot: kubectl top output showing node and pod metrics]

Ping on the issue - the question remains why it does not work out of the box when installing the latest k3s version. Any ideas? I think this deserves further investigation.
Perhaps we can pass these flags to the default metrics-server?
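
In the meantime, the working alternative from above boils down to something like this (just a sketch; metrics-server-patched.yaml is a placeholder name for a patched copy of the official manifest):

# run the server without the bundled metrics-server
k3s server --docker --no-deploy=traefik --disable metrics-server
# then deploy the patched official metrics-server
kubectl apply -f metrics-server-patched.yaml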

@gloomytrousers

Alas, I still can't get it to work (I've just been trying some more). The only thing I haven't yet done from @ViBiOh's instructions is swapping out the default k3s deployment of metrics-server for the official one. Can anyone explain why this makes a difference?

I agree with @edenreich - this should work out of the box in k3s. From a lot of reading around and trying things, I can't see that it ever would: the k3s processes are making the request from the node's network (192.168.x.x in my case) but can't access the cluster's network (10.42.x.x or 10.43.x.x). Has ANYONE actually had success with the default k3s configuration?

@martensson

I am getting this issue as well with k3s on Hetzner Cloud. The metrics-server works great on node1 but times out constantly on the other two nodes.

The workaround from @ViBiOh works though, so there is something weird with the initial setup.

@martensson

Just wanted to add that I managed to fix this finally. It was a host network issue, where the floating IP that was set for some reason conflicted with the host IP of the node. Using Ubuntu 20.04 and Netplan, I had to set the host IP BEFORE the floating IP to avoid some kind of internal routing issue within Kubernetes/k3s. Never managed to figure out why, but this simple solution fixed all my problems.
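
In Netplan terms, the ordering I mean is roughly this (a sketch with placeholder addresses, not my real config):

network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.0.2.10/24     # placeholder for the node's own host IP - listed first
        - 203.0.113.5/32    # placeholder for the floating IP - listed second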

@erikwilson
Contributor

Please avoid using --disable-agent; it will probably cause more problems than it fixes.
The order in which network interfaces come up may be important, especially since k8s uses iptables.
If you have multiple network interfaces, please ensure that --flannel-iface points to the interface where nodes have shared networking. For something like ipsec there may be a lower-level networking issue that needs to be resolved.
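
For example (a sketch; eth1 stands in for whatever interface carries the shared node network, and the server URL and token are placeholders):

# on the server
k3s server --flannel-iface eth1
# on each agent
k3s agent --flannel-iface eth1 --server https://<server-ip>:6443 --token <node-token>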

@stale

stale bot commented Jul 30, 2021

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Jul 30, 2021
@stale stale bot closed this as completed Aug 13, 2021
@aaftab

aaftab commented Nov 7, 2021

Just wanted to add that I managed to fix this finally. It was a host network issue, where the floating IP that was set for some reason conflicted with the host IP of the node. Using Ubuntu 20.04 and Netplan, I had to set the host IP BEFORE the floating IP to avoid some kind of internal routing issue within Kubernetes/k3s. Never managed to figure out why, but this simple solution fixed all my problems.

Thank you, finally got it fixed after a couple of days of headache.
