Install fails: service "prometheus-operator-operator" not found #660

Closed
luebken opened this issue Sep 1, 2020 · 7 comments

Comments


luebken commented Sep 1, 2020

What happened?

I followed the quickstart at https://github.com/prometheus-operator/kube-prometheus#quickstart to install kube-prometheus, but it failed with:

... 
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
Error from server (InternalError): error when creating "manifests/prometheus-rules.yaml": Internal error occurred: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": Post https://prometheus-operator-operator.prometheus-operator.svc:443/admission-prometheusrules/mutate?timeout=30s: service "prometheus-operator-operator" not found

There is a service called prometheus-operator, but no prometheus-operator-operator.
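A quick way to confirm the mismatch is to compare the services that actually exist with the one the failing webhook calls. This is only a sketch; the namespace below is taken from the error URL and may differ on other clusters:

# Services present in the namespace the webhook points at (from the error URL)
kubectl get svc -n prometheus-operator

# All admission webhook configurations on the cluster; one of them references
# the missing prometheus-operator-operator service
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations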

Environment

  • Prometheus Operator version:
  • Kubernetes version information: v1.18.6
  • Kubernetes cluster kind: DigitalOcean

simonpasquier (Contributor) commented:

AFAICT configuring webhooks isn't part of the quickstart instructions. Since it's a cluster-wide setting, it looks like something/someone had already configured a mutating webhook for Prometheus rules. You can try to find out more with the kubectl get MutatingWebhookConfiguration command.
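For reference, something along these lines prints which service each mutating webhook targets, so a stale entry stands out (a sketch; the column labels are arbitrary):

kubectl get mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,SERVICE:.webhooks[*].clientConfig.service.name,NAMESPACE:.webhooks[*].clientConfig.service.namespace'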

luebken (Author) commented Sep 1, 2020

There was in fact a MutatingWebhookConfiguration.

I deleted all kube-prometheus resources with kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup, which ran without any errors, and I also deleted the MutatingWebhookConfiguration.

Re-running kubectl create -f manifests/ gives me the same error:

servicemonitor.monitoring.coreos.com/kubelet created
Error from server (InternalError): error when creating "manifests/prometheus-rules.yaml": Internal error occurred: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": Post https://prometheus-operator-operator.prometheus-operator.svc:443/admission-prometheusrules/validate?timeout=30s: service "prometheus-operator-operator" not found
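Note that this second failure hits the /validate path, which suggests a leftover ValidatingWebhookConfiguration may also be present in addition to the mutating one. A sketch of checking and removing both, where <leftover-name> is a placeholder for whatever the listings show:

kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations

# delete any entry that still references the missing service
kubectl delete validatingwebhookconfiguration <leftover-name>
kubectl delete mutatingwebhookconfiguration <leftover-name>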

simonpasquier (Contributor) commented:

I can't find any occurrence of prometheus-operator-operator in the kube-prometheus repository. Are you sure that your working directory hasn't been modified?

luebken (Author) commented Sep 4, 2020

The cluster might have been; the local directory was not.

Unfortunately I had to move on and start from scratch, so I can't reproduce this issue anymore. Therefore I'm closing this issue. Thanks for your help.

luebken closed this as completed Sep 4, 2020
AlliotTech commented:

I had the same problem when following the quickstart and running this command:

kubectl apply -f manifests/

alertmanager.monitoring.coreos.com/main unchanged
secret/alertmanager-main configured
service/alertmanager-main unchanged
serviceaccount/alertmanager-main unchanged
servicemonitor.monitoring.coreos.com/alertmanager unchanged
secret/grafana-datasources unchanged
configmap/grafana-dashboard-apiserver unchanged
configmap/grafana-dashboard-cluster-total unchanged
configmap/grafana-dashboard-controller-manager unchanged
configmap/grafana-dashboard-k8s-resources-cluster unchanged
configmap/grafana-dashboard-k8s-resources-namespace unchanged
configmap/grafana-dashboard-k8s-resources-node unchanged
configmap/grafana-dashboard-k8s-resources-pod unchanged
configmap/grafana-dashboard-k8s-resources-workload unchanged
configmap/grafana-dashboard-k8s-resources-workloads-namespace unchanged
configmap/grafana-dashboard-kubelet unchanged
configmap/grafana-dashboard-namespace-by-pod unchanged
configmap/grafana-dashboard-namespace-by-workload unchanged
configmap/grafana-dashboard-node-cluster-rsrc-use unchanged
configmap/grafana-dashboard-node-rsrc-use unchanged
configmap/grafana-dashboard-nodes unchanged
configmap/grafana-dashboard-persistentvolumesusage unchanged
configmap/grafana-dashboard-pod-total unchanged
configmap/grafana-dashboard-prometheus-remote-write unchanged
configmap/grafana-dashboard-prometheus unchanged
configmap/grafana-dashboard-proxy unchanged
configmap/grafana-dashboard-scheduler unchanged
configmap/grafana-dashboard-statefulset unchanged
configmap/grafana-dashboard-workload-total unchanged
configmap/grafana-dashboards unchanged
deployment.apps/grafana configured
service/grafana unchanged
serviceaccount/grafana unchanged
servicemonitor.monitoring.coreos.com/grafana unchanged
clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
deployment.apps/kube-state-metrics unchanged
service/kube-state-metrics unchanged
serviceaccount/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics unchanged
clusterrole.rbac.authorization.k8s.io/node-exporter unchanged
clusterrolebinding.rbac.authorization.k8s.io/node-exporter unchanged
daemonset.apps/node-exporter configured
service/node-exporter unchanged
serviceaccount/node-exporter unchanged
servicemonitor.monitoring.coreos.com/node-exporter unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator unchanged
prometheus.monitoring.coreos.com/k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
service/prometheus-k8s unchanged
serviceaccount/prometheus-k8s unchanged
servicemonitor.monitoring.coreos.com/prometheus unchanged
servicemonitor.monitoring.coreos.com/kube-apiserver unchanged
servicemonitor.monitoring.coreos.com/coredns unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
servicemonitor.monitoring.coreos.com/kubelet unchanged
Error from server (InternalError): error when creating "manifests/prometheus-rules.yaml": Internal error occurred: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": Post https://demo-prometheus-operator-operator.prom-test.svc:443/admission-prometheusrules/mutate?timeout=30s: service "demo-prometheus-operator-operator" not found

AlliotTech commented:

 kubectl get MutatingWebhookConfiguration
NAME                                   WEBHOOKS   AGE
demo-prometheus-operator-admission     1          47h
istio-sidecar-injector                 1          4d1h
ks-events-admission-mutate             1          4d1h
logsidecar-injector-admission-mutate   1          4d1h
mutating-webhook-configuration         1          4d1h
prometheus-operator-admission          1          2d
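To confirm which of these still points at a missing service, a single entry can be inspected directly (a sketch; the resource name is taken from the listing above):

kubectl get mutatingwebhookconfiguration demo-prometheus-operator-admission \
  -o jsonpath='{.webhooks[*].clientConfig.service}'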

AlliotTech commented Apr 16, 2021

I solved this problem with the following steps:

# find the webhook configurations that reference demo-prometheus-operator
➜  kube-prometheus git:(heads/v0.6.0) ✗ kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io
NAME                                 WEBHOOKS   AGE
demo-prometheus-operator-admission   1          47h
ingress-nginx-admission              1          4d1h
istio-galley                         2          4d1h
ks-events-admission-validate         1          4d1h
prometheus-operator-admission        1          2d
users.iam.kubesphere.io              1          4d1h
validating-webhook-configuration     3          123m
➜  kube-prometheus git:(heads/v0.6.0) ✗ kubectl get MutatingWebhookConfiguration
NAME                                   WEBHOOKS   AGE
demo-prometheus-operator-admission     1          47h
istio-sidecar-injector                 1          4d1h
ks-events-admission-mutate             1          4d1h
logsidecar-injector-admission-mutate   1          4d1h
mutating-webhook-configuration         1          4d1h
prometheus-operator-admission          1          2d



# and delete them:
➜  kube-prometheus git:(heads/v0.6.0) ✗ kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io  demo-prometheus-operator-admission
validatingwebhookconfiguration.admissionregistration.k8s.io "demo-prometheus-operator-admission" deleted

➜  kube-prometheus git:(heads/v0.6.0) ✗ kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io  demo-prometheus-operator-admission
mutatingwebhookconfiguration.admissionregistration.k8s.io "demo-prometheus-operator-admission" deleted

# and then you can reapply the manifests:
 kubectl apply -f manifests/

alertmanager.monitoring.coreos.com/main unchanged
secret/alertmanager-main configured
service/alertmanager-main unchanged
serviceaccount/alertmanager-main unchanged
servicemonitor.monitoring.coreos.com/alertmanager unchanged
secret/grafana-datasources unchanged
configmap/grafana-dashboard-apiserver unchanged
configmap/grafana-dashboard-cluster-total unchanged
configmap/grafana-dashboard-controller-manager unchanged
configmap/grafana-dashboard-k8s-resources-cluster unchanged
configmap/grafana-dashboard-k8s-resources-namespace unchanged
configmap/grafana-dashboard-k8s-resources-node unchanged
configmap/grafana-dashboard-k8s-resources-pod unchanged
configmap/grafana-dashboard-k8s-resources-workload unchanged
configmap/grafana-dashboard-k8s-resources-workloads-namespace unchanged
configmap/grafana-dashboard-kubelet unchanged
configmap/grafana-dashboard-namespace-by-pod unchanged
configmap/grafana-dashboard-namespace-by-workload unchanged
configmap/grafana-dashboard-node-cluster-rsrc-use unchanged
configmap/grafana-dashboard-node-rsrc-use unchanged
configmap/grafana-dashboard-nodes unchanged
configmap/grafana-dashboard-persistentvolumesusage unchanged
configmap/grafana-dashboard-pod-total unchanged
configmap/grafana-dashboard-prometheus-remote-write unchanged
configmap/grafana-dashboard-prometheus unchanged
configmap/grafana-dashboard-proxy unchanged
configmap/grafana-dashboard-scheduler unchanged
configmap/grafana-dashboard-statefulset unchanged
configmap/grafana-dashboard-workload-total unchanged
configmap/grafana-dashboards unchanged
deployment.apps/grafana configured
service/grafana unchanged
serviceaccount/grafana unchanged
servicemonitor.monitoring.coreos.com/grafana unchanged
clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
deployment.apps/kube-state-metrics unchanged
service/kube-state-metrics unchanged
serviceaccount/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics unchanged
clusterrole.rbac.authorization.k8s.io/node-exporter unchanged
clusterrolebinding.rbac.authorization.k8s.io/node-exporter unchanged
daemonset.apps/node-exporter configured
service/node-exporter unchanged
serviceaccount/node-exporter unchanged
servicemonitor.monitoring.coreos.com/node-exporter unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator unchanged
prometheus.monitoring.coreos.com/k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s unchanged
serviceaccount/prometheus-k8s unchanged
servicemonitor.monitoring.coreos.com/prometheus unchanged
servicemonitor.monitoring.coreos.com/kube-apiserver unchanged
servicemonitor.monitoring.coreos.com/coredns unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
servicemonitor.monitoring.coreos.com/kubelet unchanged

More info about this issue: see helm/charts#21080.
