
Addon Manager returns "error retrieving RESTMappings to prune" #43755

Closed
jsloyer opened this issue Mar 28, 2017 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@jsloyer

jsloyer commented Mar 28, 2017

BUG REPORT

Kubernetes version (use kubectl version):

# kubectl version
Client Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.3-1+11d2fe73285bf7-dirty", GitCommit:"11d2fe73285bf763bbb3976c99510e1dddcf043f", GitTreeState:"dirty", BuildDate:"2017-03-16T14:56:53Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.3-1+11d2fe73285bf7-dirty", GitCommit:"11d2fe73285bf763bbb3976c99510e1dddcf043f", GitTreeState:"dirty", BuildDate:"2017-03-16T14:56:53Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
    IBM
  • OS (e.g. from /etc/os-release):
    Ubuntu 16.04.1 LTS
  • Kernel (e.g. uname -a):
    Linux kube-dal10-cr1ae13de8f8ae48349427ad8857dd5069-w2 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What happened:
I am running addon-manager as a pod with this yaml and have placed the kube dashboard yaml from here into /etc/kubernetes/addons. I have modified the yaml to have the addonmanager.kubernetes.io/mode: Reconcile label. See the full yaml here.

Addon Manager runs and successfully creates the deployment and service, but the log contains an error about being unable to retrieve RESTMappings to prune:

INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment "kubernetes-dashboard" created
error: error retrieving RESTMappings to prune: invalid resource apps/v1beta1, Kind=Deployment, Namespaced=true: no matches for apps/, Kind=Deployment
INFO: == Kubernetes addon reconcile completed at 2017-03-28T13:36:10+0000 ==

How to reproduce it (as minimally and precisely as possible):
see above
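For context on the failure mode discussed below: `kubectl apply --prune` resolves a set of prunable GroupVersionKinds against the server's discovery document before pruning. The following is a purely illustrative Python sketch of that lookup (not kubectl's actual code; all names are hypothetical), showing why a client that expects `apps/v1beta1` fails against a server that only advertises `extensions/v1beta1`:

```python
# Sketch (not kubectl's actual implementation) of how a prune pass might
# resolve its prunable GroupVersionKinds against the server's discovery doc.

PRUNE_WHITELIST = [
    ("apps", "v1beta1", "Deployment"),        # group introduced in k8s v1.6
    ("extensions", "v1beta1", "Deployment"),  # present in v1.5 and v1.6
]

def rest_mappings(whitelist, discovered_group_versions):
    """Return (group/version, kind) mappings for prunable kinds, or raise
    if the server does not advertise a group/version the client expects."""
    mappings = []
    for group, version, kind in whitelist:
        gv = f"{group}/{version}"
        if gv not in discovered_group_versions:
            raise LookupError(
                f"error retrieving RESTMappings to prune: "
                f"no matches for {group}/, Kind={kind}"
            )
        mappings.append((gv, kind))
    return mappings

# A v1.5 apiserver advertises extensions/v1beta1 but not apps/v1beta1,
# so the lookup fails much like the error in the log above.
v15_server = {"extensions/v1beta1", "v1"}
try:
    rest_mappings(PRUNE_WHITELIST, v15_server)
except LookupError as e:
    print(e)
```

Note that the failure is driven by the client's expected kinds, not by the manifests being applied, which is consistent with the error appearing even though the deployment itself was created successfully.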

@jayunit100
Member

jayunit100 commented Mar 28, 2017

[Wild guess here] but IIRC maybe something is out of date? I recall having an error like this once, and I think that is when I discovered the /apis/ hierarchy, i.e. it seems like it should be reaching out to something like:

  • /apis/apps/v1beta1/ (as opposed to)
  • /apps/v1beta1 ?

(see the versioning section in https://kubernetes.io/docs/concepts/overview/kubernetes-api/)
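The path distinction the comment points at can be sketched with a tiny helper (hypothetical, for illustration only): named API groups are discovered under `/apis/<group>/<version>`, while the legacy core group lives under `/api/<version>`:

```python
def discovery_path(group: str, version: str) -> str:
    """Build the discovery URL path for a Kubernetes API group/version.
    The legacy core group ("") lives under /api; named groups under /apis."""
    if group == "":
        return f"/api/{version}"
    return f"/apis/{group}/{version}"

print(discovery_path("apps", "v1beta1"))  # /apis/apps/v1beta1
print(discovery_path("", "v1"))           # /api/v1
```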

@MrHohn
Member

MrHohn commented Mar 28, 2017

Hi @jsloyer , you were using addon-manager v6.4-beta.1 (which is built with v1.6 kubectl binary) with v1.5+ apiserver. The specific error you hit is because there is no apps/v1beta1,Deployment resource in v1.5 k8s, instead there is only extensions/v1beta1,Deployment. The new group was introduced by #39683 for v1.6.

So it seems like kubectl apply --prune in v1.6 is not compatible with k8s v1.5.

On the addon-manager side, I'd recommend using addon-manager v6.1 with k8s v1.5 (CHANGELOG).

cc @kubernetes/sig-cli-bugs

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@MrHohn
Member

MrHohn commented Jun 5, 2017

Since kubectl apply --prune is still in alpha, we do not guarantee its compatibility across version skew.

Closing this issue for now.

/sig cli

/close

@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Jun 5, 2017
@joshuawilson

@smarterclayton should this be closed?

@aslakknutsen

@joshuawilson @smarterclayton Note: this was tested with 3.6.0 RC.0; retesting with 3.6.0.

@aslakknutsen

@joshuawilson @smarterclayton Confirmed same behavior in 3.6.0

@smarterclayton smarterclayton reopened this Aug 4, 2017
@smarterclayton
Contributor

kubectl apply should only fail under version skew when the client can't find a resource from the file in the discovery doc, and even if it does fail, it should return a better message. This will only grow more common as we introduce and deprecate new APIs.

@smarterclayton
Contributor

At minimum the CLI needs to indicate to the user that a particular resource in their client file is not recognized by the server, potentially print recognized versions, and instruct the user on what to do.
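The kind of message suggested here could look something like the following sketch (purely illustrative Python, not proposed kubectl code): when a requested group/version is absent from discovery, name it explicitly and list the versions the server does serve for that kind:

```python
def unrecognized_resource_error(group, version, kind, discovered):
    """Format a friendlier error for a resource the server does not serve.
    `discovered` maps "group/version" -> set of kinds served there."""
    requested = f"{group}/{version}"
    known = sorted(gv for gv, kinds in discovered.items() if kind in kinds)
    msg = f"the server does not recognize {requested}, Kind={kind}"
    if known:
        msg += (f"; {kind} is served as: {', '.join(known)}. "
                f"Consider updating the manifest to one of these apiVersions.")
    return msg

# Against a v1.5-style discovery doc, an apps/v1beta1 Deployment request
# would point the user at extensions/v1beta1.
discovered = {"extensions/v1beta1": {"Deployment", "Ingress"}, "v1": {"Pod"}}
print(unrecognized_resource_error("apps", "v1beta1", "Deployment", discovered))
```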

@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Aug 4, 2017
@adohe-zz

adohe-zz commented Aug 6, 2017

At minimum the CLI needs to indicate to the user that a particular resource in their client file is not recognized by the server, potentially print recognized versions, and instruct the user on what to do.

SGTM.

@seh
Contributor

seh commented Oct 10, 2017

I ran into this same problem running kubectl apply --prune with the following versions in play:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-09-29T05:56:06Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

None of the manifests in the files submitted to kubectl apply --prune are of kind Deployment, nor are any of the objects in the cluster selected by the label selector I passed via the --selector flag.

@MrHohn
Member

MrHohn commented Oct 10, 2017

Unassigning as I'm not currently working on fixing this issue.

/unassign

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 8, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 10, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
