
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1" #1866

Closed
longwuyuan opened this issue Sep 6, 2018 · 12 comments

@longwuyuan

What did you do?

 minikube start --kubernetes-version=v1.11.1 --memory=8192 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0

git clone https://github.com/coreos/prometheus-operator.git

cd prometheus-operator/contrib/kube-prometheus 

kubectl apply -f manifests/

What did you expect to see?

Expected to see all manifests get applied

What did you see instead? Under which circumstances?


namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
secret/grafana-datasources created
configmap/grafana-dashboard-k8s-cluster-rsrc-use created
configmap/grafana-dashboard-k8s-node-rsrc-use created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-pods created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
unable to recognize "manifests/0prometheus-operator-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/alertmanager-alertmanager.yaml": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/alertmanager-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/kube-state-metrics-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/node-exporter-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-prometheus.yaml": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-rules.yaml": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-serviceMonitorApiserver.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-serviceMonitorCoreDNS.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-serviceMonitorKubeControllerManager.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-serviceMonitorKubeScheduler.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "manifests/prometheus-serviceMonitorKubelet.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Environment

Ubuntu 18 Desktop [updated as of today] with Docker 17.03.2-ce [changed apt sources.list to install it, and version-locked to 17.03.2-ce]

  • Prometheus Operator version:
commit ce4ab08d6791161267204d9a61588e64f1b57e05 (HEAD -> master, origin/master, origin/HEAD)
Merge: 153142fb 70d9c8fc
Author: Frederic Branczyk <fbranczyk@gmail.com>
Date:   Thu Sep 6 14:16:35 2018 +0200

    Merge pull request #1855 from metalmatze/go-1.11

    
    *: Update to Go 1.11
  • Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:
 minikube start --kubernetes-version=v1.11.1 --memory=8192 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0

  • Manifests:
... /prometheus-operator/contrib/kube-prometheus/manifests/
  • Prometheus Operator Logs:
ts=2018-09-06T17:32:44.045703298Z caller=main.go:130 msg="Starting Prometheus Operator version '0.23.2'."
level=info ts=2018-09-06T17:32:44.150518151Z caller=operator.go:176 component=alertmanageroperator msg="connection established" cluster-version=v1.11.1
level=info ts=2018-09-06T17:32:44.15142829Z caller=operator.go:320 component=prometheusoperator msg="connection established" cluster-version=v1.11.1
level=info ts=2018-09-06T17:32:44.753413957Z caller=operator.go:566 component=alertmanageroperator msg="CRD updated" crd=Alertmanager
level=info ts=2018-09-06T17:32:44.79461856Z caller=operator.go:1358 component=prometheusoperator msg="CRD updated" crd=Prometheus
level=info ts=2018-09-06T17:32:44.811324279Z caller=operator.go:1358 component=prometheusoperator msg="CRD updated" crd=ServiceMonitor
level=info ts=2018-09-06T17:32:44.856717992Z caller=operator.go:1358 component=prometheusoperator msg="CRD updated" crd=PrometheusRule
level=info ts=2018-09-06T17:32:47.758113754Z caller=operator.go:192 component=alertmanageroperator msg="CRD API endpoints ready"
level=info ts=2018-09-06T17:32:54.050773783Z caller=operator.go:336 component=prometheusoperator msg="CRD API endpoints ready"


@brancz
Contributor

brancz commented Sep 6, 2018

As the Quickstart mentions, there is a race in Kubernetes: CRD creation can finish before the corresponding API is actually available. You just have to run the command once again.
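For example, a minimal sketch (assuming a kubectl new enough to support kubectl wait, and the CRD names from the apply output above) that waits for the CRDs to become established before re-applying:

# Wait for each CRD created by the first apply to report Established,
# then re-run the apply so the custom resources go through.
for crd in alertmanagers prometheuses prometheusrules servicemonitors; do
  kubectl wait --for condition=Established --timeout=60s \
    "crd/${crd}.monitoring.coreos.com"
done
kubectl apply -f manifests/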

@longwuyuan
Author

My bad... didn't read that. All manifests applied after the second run. Thanks and apologies; closing.

@jolson490
Contributor

Background:

Since yesterday, when deploying kube-prometheus for the first time to a newly created K8s cluster (running in AWS), running the (create manifests) command a second time is no longer doing the trick. (I don't know why it suddenly stopped working for me.)

Here's the output from the 1st time running the command:

14:43:52 customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
14:43:52 customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
14:43:52 customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
14:43:52 customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
14:43:52 clusterrole.rbac.authorization.k8s.io/prometheus-operator created
14:43:52 clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
14:43:52 deployment.apps/prometheus-operator created
14:43:52 service/prometheus-operator created
14:43:52 serviceaccount/prometheus-operator created
14:43:53 secret/alertmanager-main created
14:43:53 service/alertmanager-main created
14:43:53 serviceaccount/alertmanager-main created
14:43:53 secret/grafana-config created
14:43:53 secret/grafana-datasources created
14:43:53 configmap/grafana-dashboard-etcd created
14:43:53 configmap/grafana-dashboard-k8s-cluster-rsrc-use created
14:43:53 configmap/grafana-dashboard-k8s-node-rsrc-use created
14:43:53 configmap/grafana-dashboard-k8s-resources-cluster created
14:43:53 configmap/grafana-dashboard-k8s-resources-namespace created
14:43:53 configmap/grafana-dashboard-k8s-resources-pod created
14:43:53 configmap/grafana-dashboard-nodes created
14:43:53 configmap/grafana-dashboard-pods created
14:43:53 configmap/grafana-dashboard-statefulset created
14:43:53 configmap/grafana-dashboards created
14:43:53 deployment.apps/grafana created
14:43:53 service/grafana created
14:43:53 serviceaccount/grafana created
14:43:53 clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
14:43:53 clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
14:43:53 deployment.apps/kube-state-metrics created
14:43:53 role.rbac.authorization.k8s.io/kube-state-metrics created
14:43:53 rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
14:43:53 service/kube-state-metrics created
14:43:53 serviceaccount/kube-state-metrics created
14:43:53 clusterrole.rbac.authorization.k8s.io/node-exporter created
14:43:53 clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
14:43:53 daemonset.apps/node-exporter created
14:43:53 service/node-exporter created
14:43:53 serviceaccount/node-exporter created
14:43:53 clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 endpoints/etcd created
14:43:53 service/kube-controller-manager-prometheus-discovery created
14:43:53 service/kube-dns-prometheus-discovery created
14:43:53 service/kube-scheduler-prometheus-discovery created
14:43:53 rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
14:43:53 rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 role.rbac.authorization.k8s.io/prometheus-k8s-config created
14:43:53 role.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 role.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 role.rbac.authorization.k8s.io/prometheus-k8s created
14:43:53 secret/kube-etcd-client-certs created
14:43:53 service/prometheus-k8s created
14:43:53 serviceaccount/prometheus-k8s created
14:43:53 service/etcd created
14:43:53 unable to recognize "manifests/0prometheus-operator-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/alertmanager-alertmanager.yaml": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/alertmanager-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/kube-state-metrics-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/node-exporter-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-prometheus.yaml": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-rules.yaml": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitorApiserver.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitorCoreDNS.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitorEtcd.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitorKubeControllerManager.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitorKubeScheduler.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
14:43:53 unable to recognize "manifests/prometheus-serviceMonitorKubelet.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

And from the 2nd time running the create manifests command (minus the errors about "already exists" & "already allocated"):

14:43:55 [unable to recognize "manifests/0prometheus-operator-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/alertmanager-alertmanager.yaml": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1", unable to recognize "manifests/alertmanager-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/kube-state-metrics-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/node-exporter-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-prometheus.yaml": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-rules.yaml": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitorApiserver.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitorCoreDNS.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitorEtcd.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitorKubeControllerManager.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitorKubeScheduler.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1", unable to recognize "manifests/prometheus-serviceMonitorKubelet.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]

Then I started over (i.e. kubectl delete -f manifests/), and while running the create manifests command (twice, back-to-back) I was also running the following in a separate terminal: until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done

I found that after the 1st run of create manifests, the get servicemonitors command had an exit code of 1 for a few seconds and printed: error: the server doesn't have a resource type "servicemonitors"
After those few seconds, the get servicemonitors command exited successfully and printed: No resources found.

But by the time those few seconds had passed, the 2nd run of the create manifests command had already finished.

At that point I had this (neither the alertmanager-main nor prometheus-k8s pod(s) were created):

# kubectl get pods -n monitoring
NAME                                   READY     STATUS    RESTARTS   AGE
grafana-57f4f86ff7-wwm96               1/1       Running   0          27m
kube-state-metrics-f884b88b-7m6xv      4/4       Running   0          27m
node-exporter-bl2k7                    2/2       Running   0          27m
node-exporter-jx6z7                    2/2       Running   0          27m
node-exporter-jzrnd                    2/2       Running   0          27m
node-exporter-smbkn                    2/2       Running   0          27m
prometheus-operator-6b574898c9-qwblg   1/1       Running   0          27m

Now that get servicemonitors was showing that the resource type was ready/available, I was able to rectify things by running create manifests a 3rd time - it printed (again excluding the benign "already" errors):

servicemonitor.monitoring.coreos.com/prometheus-operator created
alertmanager.monitoring.coreos.com/main created
servicemonitor.monitoring.coreos.com/alertmanager created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
servicemonitor.monitoring.coreos.com/node-exporter created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/etcd created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created

After running it the 3rd time, I had a fully happy kube-prometheus deployment (in my case I set replicas to 1 for both Alertmanager and Prometheus):

# kubectl get pods -n monitoring
NAME                                   READY     STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2       Running   0          5m
grafana-57f4f86ff7-wwm96               1/1       Running   0          36m
kube-state-metrics-f884b88b-7m6xv      4/4       Running   0          36m
node-exporter-bl2k7                    2/2       Running   0          36m
node-exporter-jx6z7                    2/2       Running   0          36m
node-exporter-jzrnd                    2/2       Running   0          36m
node-exporter-smbkn                    2/2       Running   0          36m
prometheus-k8s-0                       3/3       Running   1          5m
prometheus-operator-6b574898c9-qwblg   1/1       Running   0          36m

So my point in all of this is: simply running create manifests twice turns out not to be a guaranteed, foolproof way of getting kube-prometheus deployed successfully.
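A sketch of a more reliable sequence (an illustration on my part, not necessarily how we ended up fixing it): after the first create, poll until the apiserver actually serves the servicemonitors type, then create once more.

# First pass registers the CRDs; the custom resources may fail with "unable to recognize".
kubectl create -f manifests/ || true
# Poll until the servicemonitors resource type is actually served.
until kubectl get servicemonitors --all-namespaces >/dev/null 2>&1; do
  echo "waiting for the servicemonitors API..."
  sleep 1
done
# Second pass creates the custom resources; "already exists" errors are benign.
kubectl create -f manifests/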

Heads up to @brancz, I've created PR #2006 to help prevent anyone else from getting tripped up by this issue I encountered.

@mxinden
Contributor

mxinden commented Oct 18, 2018

Thanks for the very detailed issue report and the pull request, @jolson490. We used to have a script that properly waited for all CRDs to be registered before continuing, but we chose the simplicity of the kubectl apply -f manifests one-liner.

If this keeps coming up I am happy to think about alternatives.
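For anyone who wants that behaviour back, a rough sketch of such a wait (an assumption on the editor's part, not the original script):

# Apply once (the custom resources may fail because of the CRD race), wait for
# every monitoring.coreos.com CRD to be Established, then apply again.
kubectl apply -f manifests/ || true
for crd in $(kubectl get crd -o name | grep monitoring.coreos.com); do
  kubectl wait --for condition=Established --timeout=120s "$crd"
done
kubectl apply -f manifests/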

@mxinden
Contributor

mxinden commented Oct 18, 2018

@jolson490 out of curiosity, how many API servers were you running? One race that is present: one API server gets the create request, forwards it to etcd, and registers the HTTP endpoints, but the other API servers need some time to catch up.

@jolson490
Contributor

Hi @mxinden, sorry I forgot to get back to you sooner.
We have 3 master nodes in this K8s cluster of ours (running in AWS), so a total of 3 kube-apiserver pods in the cluster.

The change I made in the PR has been doing the trick for us.
By the way, on previous GH issues (e.g. the one I referenced in my comment above), Frederic (@brancz) and I discussed the pros and cons of the deploy and teardown scripts that used to exist. Maybe some of that is different now that kube-prometheus has been refactored to use jsonnet; since then the team I'm on no longer needs to maintain a forked copy of this entire prometheus-operator repo (which is amazingly nice). We still have our own script that bridges the (relatively small) gap from kube-prometheus: it fills in variables (e.g. the name of the cluster we're running the script for, e.g. in the Alertmanager YAML config file) in preparation for calling build.sh, and then runs the kubectl create -f manifests/ commands from the Quickstart.
Anyway, if a need arises for a more reliable way of deploying kube-prometheus than what's currently in the Quickstart, I'm sure one of us will come up with something, but for now I'm good.
(That script of ours does a few other handy things: it can do a teardown followed by a deploy, and in between it verifies/waits for the kube-prometheus pods/nodeports/etc. to actually be deleted. A hypothetical outline is sketched below.)
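A hypothetical outline of such a wrapper (every name and path below is an assumption, not our actual script):

# Regenerate the manifests, tear down the old deployment, wait for it to be
# gone, then deploy with the two-pass create to work around the CRD race.
./build.sh example.jsonnet
kubectl delete -f manifests/ --ignore-not-found
kubectl wait --for=delete pods --all -n monitoring --timeout=300s || true  # tolerate "no matching resources"
kubectl create -f manifests/ || true
until kubectl get servicemonitors -n monitoring >/dev/null 2>&1; do sleep 1; done
kubectl create -f manifests/   # "already exists" errors here are benign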

@angelcos

angelcos commented Jul 31, 2019

Yes, this happens again using kops 1.12.2 and k8s/kubectl 1.12.10... But just running the script again deploys what is missing (in my case the ServiceMonitors).

@ringerc

ringerc commented Sep 20, 2021

I'm seeing similar behaviour here on a current kube-prometheus and a kind cluster. It's easily reproduced.

I reported the underlying kubectl issue here: kubernetes/kubectl#1117

@LinTechSo

Hi all, I hit the same issue. Any update?
Using loki-stack (Loki version 2.4.1)
on Kubernetes versions 1.18 and 1.23.

@hbarajas

Hi all, I am getting the same issue. Is there any update on this, or a workaround?

@fpetkovski
Contributor

The latest version of the operator is resilient enough to keep retrying until all CRDs are available in the apiserver.

If you are trying to apply Prometheus or Alertmanager custom resources before CRDs are available, there is nothing we can do to solve this problem. It is how Kubernetes works.
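One way to guarantee that ordering, assuming a kube-prometheus checkout that splits the CRDs into manifests/setup (newer releases do; verify against your version):

# Apply the CRDs (and namespace) first, wait until they are established,
# then apply everything else, including the ServiceMonitor/Prometheus objects.
kubectl apply --server-side -f manifests/setup
kubectl wait --for condition=Established --timeout=120s \
  crd/servicemonitors.monitoring.coreos.com \
  crd/prometheuses.monitoring.coreos.com
kubectl apply -f manifests/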

@sysnasri

sysnasri commented Apr 25, 2022

Hi all, I hit the same issue. Any update? Using loki-stack (Loki version 2.4.1) on Kubernetes versions 1.18 and 1.23.

Having the same issue; did you figure it out?
