Addon Manager returns "error retrieving RESTMappings to prune" #43755
[Wild guess here] but IIRC maybe something is out of date? I recall hitting an error like this once, and I think that's when I discovered the apis/ hierarchy, i.e. it seems like it should be reaching out to something like:
(see the versioning section in https://kubernetes.io/docs/concepts/overview/kubernetes-api/)
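For reference, a tiny sketch of how group/version pairs map onto the apis/ hierarchy mentioned above (the function name is mine, not from any Kubernetes client library):

```python
def discovery_path(group: str, version: str) -> str:
    """Return the discovery URL path for a group/version.

    The core ("legacy") group has an empty name and is served under /api;
    every other group is served under /apis/<group>/<version>.
    """
    return f"/api/{version}" if group == "" else f"/apis/{group}/{version}"

print(discovery_path("", "v1"))           # /api/v1
print(discovery_path("apps", "v1beta1"))  # /apis/apps/v1beta1
```

If a manifest's apiVersion points at a group/version the server does not serve at one of these paths, the client cannot build a RESTMapping for it.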
Hi @jsloyer, you were using addon-manager v6.4-beta.1 (which is built with the v1.6 kubectl binary) against a v1.5+ apiserver. The specific error you hit is because the server does not serve a group/version that the client's prune whitelist expects, so the RESTMapping lookup fails. On the addon-manager side, I'd recommend using addon-manager v6.1 with k8s v1.5 (CHANGELOG). cc @kubernetes/sig-cli-bugs
Since kubectl apply --prune is still alpha, we do not guarantee its compatibility across version skew. Closing this issue for now.
/sig cli
/close
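The skew failure mode can be sketched in a few lines of Python. This is my own simplification, not kubectl source: the whitelist below is an illustrative subset of the real one, and the error string only imitates the reported message. The key point is that a newer client resolves a RESTMapping for every type in its prune whitelist up front, so a single group/version the older server does not serve fails the whole operation:

```python
# Illustrative subset of a prune whitelist; kubectl's actual list differs.
PRUNE_WHITELIST = [
    ("", "v1", "ConfigMap"),
    ("", "v1", "Service"),
    ("apps", "v1beta1", "Deployment"),  # not served by a v1.5 apiserver
]

def resolve_prune_mappings(server_group_versions):
    """Try to resolve a mapping for every whitelisted type, collecting misses."""
    errors = []
    for group, version, kind in PRUNE_WHITELIST:
        if (group, version) not in server_group_versions:
            gv = f"{group}/{version}" if group else version
            errors.append(f"error retrieving RESTMappings to prune: "
                          f"no matches for {gv}, Kind={kind}")
    return errors

# A v1.5-era server that serves core v1 and extensions/v1beta1 only:
errs = resolve_prune_mappings({("", "v1"), ("extensions", "v1beta1")})
```

Under this sketch, `errs` contains one entry for the Deployment mapping even though no Deployment manifest was involved.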
@smarterclayton should this be closed?
@joshuawilson @smarterclayton Note: this was tested with 3.6.0 RC.0; retesting with 3.6.0
@joshuawilson @smarterclayton Confirmed same behavior in 3.6.0
At minimum the CLI needs to indicate to the user that a particular resource in their client file is not recognized by the server, potentially print the recognized versions, and instruct the user on what to do.
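A hypothetical shape for that message (the function name and wording are mine, not an actual kubectl patch):

```python
def unrecognized_resource_error(kind, requested_gv, recognized_gvs):
    """Build the kind of error message proposed above: name the resource
    the server does not recognize, list the versions it does recognize,
    and tell the user what to do next."""
    return (
        f"the server does not recognize {kind} in {requested_gv}; "
        f"recognized versions: {', '.join(sorted(recognized_gvs))}. "
        f"Update the apiVersion in your manifest or upgrade the server."
    )

msg = unrecognized_resource_error(
    "Deployment", "apps/v1beta1", {"extensions/v1beta1"})
```

Compared to the bare RESTMapping error, this tells the user which resource broke, what the server will accept, and a concrete remediation.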
SGTM.
I ran into this same problem running kubectl apply --prune with the following versions in play:
None of the manifests in the files submitted to kubectl apply --prune are of kind Deployment, nor are any of the objects in the cluster selected by the label selector I passed via the
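To make the surprise concrete, here is a rough two-phase sketch (my own simplification, not kubectl source) of how prune behaves: mappings are resolved for the whole whitelist before any selection happens, which is why the command can fail on Deployment even when neither the manifests nor the selected objects involve that kind.

```python
def apply_prune(cluster_objects, applied_uids, selector, whitelist_errors):
    """Sketch of --prune: fail fast on mapping errors, then select victims."""
    # Phase 1: RESTMappings for every whitelisted type must resolve,
    # regardless of what the manifests or the selector actually touch.
    if whitelist_errors:
        raise RuntimeError(whitelist_errors[0])
    # Phase 2: delete objects matching the selector that were not applied.
    return [o for o in cluster_objects
            if selector.items() <= o["labels"].items()
            and o["uid"] not in applied_uids]

objs = [{"uid": "a", "labels": {"app": "web"}},
        {"uid": "b", "labels": {"app": "web"}}]
pruned = apply_prune(objs, {"a"}, {"app": "web"}, [])  # only "b" is pruned
```

Phase 1 failing before phase 2 even runs is the behavior reported above.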
Unassigning, as I'm not currently working on fixing this issue.
/unassign
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
BUG REPORT
Kubernetes version (use kubectl version):
Environment:
- Cloud provider or hardware configuration: IBM
- OS: Ubuntu 16.04.1 LTS
- Kernel (uname -a): Linux kube-dal10-cr1ae13de8f8ae48349427ad8857dd5069-w2 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
What happened:
I am running addon-manager as a pod with this yaml, and have placed the kube dashboard yaml from here into /etc/kubernetes/addons. I have modified the yaml to have the addonmanager.kubernetes.io/mode: Reconcile label; see the full yaml here. Addon Manager runs and successfully creates the deployment and service, but I have an error in the log about being unable to prune RESTMappings.
How to reproduce it (as minimally and precisely as possible):
see above