Incompatibility between kubectl and Kubernetes cluster version results in misleading error #426

Closed
exarkun opened this Issue Jan 24, 2018 · 1 comment

Comments

exarkun (Contributor) commented Jan 24, 2018

I recently (mostly accidentally) tried using kubectl 1.9.x against a 1.6.x cluster. Kubernetes doesn't seem to promise anything about what will happen in this situation (at least, I haven't found clear compatibility guidelines). What did happen is that kubectl frequently failed with:

Error from server (NotFound): the server could not find the requested resource

This made it look like there was some application logic error in telepresence, where it attempted to operate on a non-existent resource. I couldn't figure out how this could happen. Eventually, I added -v=13 to the kubectl invocation and was rewarded:

   5.2 7 | I0118 17:30:03.887649   17027 round_trippers.go:436] GET https://35.184.12.241/swagger-2.0.0.pb-v1 404 Not Found in 187 milliseconds
   5.2 7 | I0118 17:30:03.887711   17027 round_trippers.go:442] Response Headers:
   5.2 7 | I0118 17:30:03.887719   17027 round_trippers.go:445]     Content-Type: application/json
   5.2 7 | I0118 17:30:03.887724   17027 round_trippers.go:445]     Content-Length: 1137
   5.2 7 | I0118 17:30:03.887729   17027 round_trippers.go:445]     Date: Thu, 18 Jan 2018 17:30:03 GMT
   5.2 7 | I0118 17:30:03.890236   17027 request.go:873] Response Body: {
   5.2 7 |   "paths": [
   5.2 7 |     "/api",
   5.2 7 |     "/api/v1",
   5.2 7 |     "/apis",
   5.2 7 |     "/apis/apps",
   5.2 7 |     "/apis/apps/v1beta1",
   5.2 7 |     "/apis/authorization.k8s.io",
   5.2 7 |     "/apis/authorization.k8s.io/v1",
   5.2 7 |     "/apis/authorization.k8s.io/v1beta1",
   5.2 7 |     "/apis/autoscaling",
   5.2 7 |     "/apis/autoscaling/v1",
   5.2 7 |     "/apis/autoscaling/v2alpha1",
   5.2 7 |     "/apis/batch",
   5.2 7 |     "/apis/batch/v1",
   5.2 7 |     "/apis/batch/v2alpha1",
   5.2 7 |     "/apis/certificates.k8s.io",
   5.2 7 |     "/apis/certificates.k8s.io/v1beta1",
   5.2 7 |     "/apis/extensions",
   5.2 7 |     "/apis/extensions/v1beta1",
   5.2 7 |     "/apis/policy",
   5.2 7 |     "/apis/policy/v1beta1",
   5.2 7 |     "/apis/rbac.authorization.k8s.io",
   5.2 7 |     "/apis/rbac.authorization.k8s.io/v1alpha1",
   5.2 7 |     "/apis/rbac.authorization.k8s.io/v1beta1",
   5.2 7 |     "/apis/storage.k8s.io",
   5.2 7 |     "/apis/storage.k8s.io/v1",
   5.2 7 |     "/apis/storage.k8s.io/v1beta1",
   5.2 7 |     "/healthz",
   5.2 7 |     "/healthz/SSH Tunnel Check",
   5.2 7 |     "/healthz/ping",
   5.2 7 |     "/healthz/poststarthook/bootstrap-controller",
   5.2 7 |     "/healthz/poststarthook/ca-registration",
   5.2 7 |     "/healthz/poststarthook/extensions/third-party-resources",
   5.2 7 |     "/healthz/poststarthook/rbac/bootstrap-roles",
   5.2 7 |     "/logs",
   5.2 7 |     "/metrics",
   5.2 7 |     "/swaggerapi/",
   5.2 7 |     "/ui/",
   5.2 7 |     "/version"
   5.2 7 |   ]
   5.2 7 | }
   5.2 7 | I0118 17:30:03.892342   17027 helpers.go:201] server response object: [{
   5.2 7 |   "metadata": {},
   5.2 7 |   "status": "Failure",
   5.2 7 |   "message": "the server could not find the requested resource",
   5.2 7 |   "reason": "NotFound",
   5.2 7 |   "details": {
   5.2 7 |     "causes": [
   5.2 7 |       {
   5.2 7 |         "reason": "UnexpectedServerResponse",
   5.2 7 |         "message": "unknown"
   5.2 7 |       }
   5.2 7 |     ]
   5.2 7 |   },
   5.2 7 |   "code": 404
   5.2 7 | }]
   5.2 7 | F0118 17:30:03.892380   17027 helpers.go:119] Error from server (NotFound): the server could not find the requested resource
   5.2 TL | [7] exit 255.

This makes it clear that the missing resource has nothing to do with telepresence's logic; the problem is basic communication between kubectl and the cluster (here, kubectl getting a 404 when it fetches the server's swagger schema).

Ideally, I would like telepresence to tell me that my kubectl is incapable of talking to my Kubernetes cluster. I'm not sure how this situation can be reliably detected (an incompatibility can presumably take any form and therefore manifest as any kind of failure). Perhaps reporting more information in the telepresence log is the best we can actually do.
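
For example (just a rough sketch to illustrate the idea, not telepresence's actual code, and run_kubectl is a made-up helper): when a kubectl invocation fails, re-run it with high verbosity and include that output in the error, the same way -v=13 exposed the problem above:

    import subprocess

    def run_kubectl(args):
        """Run kubectl, surfacing kubectl's own error output on failure."""
        cmd = ["kubectl"] + list(args)
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        # Re-run with high verbosity (as done manually above with -v=13) so
        # the telepresence log shows the underlying kubectl/cluster failure.
        verbose = subprocess.run(cmd + ["-v=13"], capture_output=True, text=True)
        raise RuntimeError(
            "kubectl failed; it may be unable to talk to your cluster:\n"
            + result.stderr
            + "\n--- verbose kubectl output ---\n"
            + verbose.stderr
        )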

I encountered this during development of #418, where I blindly set up CI with the latest kubectl without considering which server version the tests would interact with. The misconfiguration was exacerbated by the fact that, by default, I had no interactive access to the environment and made no other use of kubectl, so I couldn't easily notice that kubectl itself was failing to talk to the cluster.

ark3 (Contributor) commented Jan 24, 2018

We already run kubectl version --short for usage tracking, so we could warn when the client and server version numbers diverge widely. However, it may be difficult to choose the circumstances under which a warning would be useful.
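
Roughly like this (a sketch only; it assumes kubectl version --short output of the form "Client Version: v1.9.1" / "Server Version: v1.6.13-gke.1", assumes both are 1.x, and the one-minor-version threshold is a guess):

    import re
    import subprocess

    def warn_on_version_skew():
        """Warn when kubectl and the cluster differ by more than one minor version."""
        out = subprocess.run(
            ["kubectl", "version", "--short"], capture_output=True, text=True
        ).stdout
        versions = dict(re.findall(r"(Client|Server) Version: v(\d+\.\d+)", out))
        if "Client" not in versions or "Server" not in versions:
            return  # server unreachable or unexpected output; nothing to compare
        client_minor = int(versions["Client"].split(".")[1])
        server_minor = int(versions["Server"].split(".")[1])
        if abs(client_minor - server_minor) > 1:
            print(
                "WARNING: kubectl v{} and cluster v{} are more than one minor "
                "version apart; kubectl may fail in confusing ways.".format(
                    versions["Client"], versions["Server"]
                )
            )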

If we fire off a bunch of kubectl auth can-i commands (#288), we may notice this sort of problem anyhow.
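
Something along these lines (a sketch of that idea; the particular verbs and resources probed are just examples): a probe that errors out, instead of answering yes or no, suggests kubectl and the cluster aren't communicating properly rather than an RBAC denial:

    import subprocess

    def preflight_auth_checks(namespace="default"):
        """Probe a few permissions; an error (rather than yes/no) means trouble."""
        probes = [("get", "pods"), ("create", "deployments"), ("create", "services")]
        for verb, resource in probes:
            result = subprocess.run(
                ["kubectl", "auth", "can-i", verb, resource, "--namespace", namespace],
                capture_output=True, text=True,
            )
            answer = result.stdout.strip()
            if answer not in ("yes", "no"):
                # kubectl couldn't even get an answer -- likely a communication
                # or version-compatibility problem rather than a permissions one.
                raise RuntimeError(
                    "kubectl auth can-i {} {} failed: {}".format(
                        verb, resource, result.stderr.strip() or answer
                    )
                )
            if answer == "no":
                print("WARNING: current credentials may not {} {}".format(verb, resource))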

plombardi89 added this to UX in Roadmap Feb 21, 2018

richarddli added this to Reliability in T Roadmap (v2) Feb 21, 2018

rhs added this to Error Feedback in Buckets Mar 8, 2018

ark3 added this to Output in Blobs Apr 9, 2018

ark3 closed this in #872 Dec 14, 2018

Roadmap automation moved this from UX to Completed Dec 14, 2018
