
prometheus-operator helm install not working in Kube 1.16 #17511

Open
grailsweb opened this issue Sep 28, 2019 · 10 comments

Comments

@grailsweb commented Sep 28, 2019

When installing this chart on Kube 1.16, it throws this error message:

 helm install --name prometheus stable/prometheus-operator
Error: validation failed: [unable to recognize "": no matches for kind "PodSecurityPolicy" in version "extensions/v1beta1", unable to recognize "": no matches for kind "DaemonSet" in version "extensions/v1beta1", unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2"]

Thanks

@leandromoreirati commented Sep 29, 2019

I have the same problem with the prometheus installation and some other projects that use Helm.
From what I understand, the error occurs because of the Kubernetes API removals in 1.16, as can be seen at the link below:

https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

Workloads that were served under apiVersion: extensions/v1beta1 (and apps/v1beta1, apps/v1beta2) are now only served as apps/v1.

To fix this you have to change the charts, or find some backward compatibility, which I still can't find.
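As a minimal sketch of the chart change being described (the manifest below is hypothetical, and note that PodSecurityPolicy is an exception: it moved to policy/v1beta1, not apps/v1), the apiVersion rewrite can be scripted:

```shell
# Hypothetical manifest using the groups that Kubernetes 1.16 removed:
cat > /tmp/example.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
---
apiVersion: apps/v1beta2
kind: Deployment
EOF
# DaemonSet and Deployment both moved to apps/v1 in 1.16:
sed -i \
  -e 's#^apiVersion: extensions/v1beta1$#apiVersion: apps/v1#' \
  -e 's#^apiVersion: apps/v1beta2$#apiVersion: apps/v1#' /tmp/example.yaml
grep -c 'apiVersion: apps/v1' /tmp/example.yaml   # → 2
```

The same kind of substitution would have to be applied across every template in the chart (or upstream, via the open PRs).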

@zain-874 commented Oct 8, 2019

I solved this issue on Minikube: downgrade your cluster from 1.16.0 to 1.15.4. Using k8s 1.15.4 and Helm v3.0.0-beta.3 I was able to install stable/prometheus. It worked on Minikube, but I still can't say anything about clusters based on kubeadm or anything else.

@alter commented Oct 11, 2019

I got the same shit here...

@lookbeat commented Oct 11, 2019

> I got the same shit here...

try this: #17268

@Bfoster-melrok commented Oct 11, 2019

There are pull requests in the works, like the one @lookbeat linked, but getting things merged is proving to be pretty slow. In the meantime you could:

- hold off on installing kube 1.16, or downgrade back to the latest 1.15.x
- host the chart yourself with the fixes
- as another quick fix, do a helm install --dry-run to have it generate the YAMLs for you, then simply update them with the appropriate apiVersions
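The dry-run route can be sketched like this (Helm 2 syntax as used in this issue; the heredoc stands in for the rendered output so the rewrite step is runnable here, and the policy/v1beta1 mapping for PodSecurityPolicy is taken from the 1.16 deprecation notes):

```shell
# 1) Render the chart without installing it (needs a live cluster; shown for context):
#    helm install --name prometheus stable/prometheus-operator --dry-run --debug > manifests.yaml
# Stand-in for the rendered output:
cat > manifests.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
EOF
# 2) PodSecurityPolicy moved from extensions/v1beta1 to policy/v1beta1 in 1.16:
sed -i 's#^apiVersion: extensions/v1beta1$#apiVersion: policy/v1beta1#' manifests.yaml
head -n1 manifests.yaml   # → apiVersion: policy/v1beta1
# 3) Apply the edited manifests by hand:
#    kubectl apply -f manifests.yaml
```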

@praparn commented Oct 15, 2019

We are also facing this problem on Kubernetes 1.16.0 and Helm 3 (beta 4). We tried to create the CRDs ourselves, but there are many obsolete APIs in there anyway.

(screenshot of the validation errors attached)

@rshutt commented Oct 24, 2019

Anyone else seeing that none of the rules will apply either? Or grafana dashboards?

  {{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
  {{- if and (semverCompare ">=1.14.0-0" $kubeTargetVersion) (semverCompare "<1.16.0-0" $kubeTargetVersion) .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}

Guess we have to do a target version override?

@fcuello-fudo commented Oct 24, 2019

Yeah, I have --set kubeTargetVersionOverride="1.15.999" in the meantime.
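For what it's worth, the override works because "1.15.999" still sorts below 1.16.0, so the chart's semverCompare "<1.16.0-0" check passes. A quick sanity check of the ordering, plus the full command (Helm 2 syntax as used in this issue; installing needs a live cluster, so it is only sketched in the comments):

```shell
# "1.15.999" sorts below "1.16.0" under version ordering:
printf '1.15.999\n1.16.0\n' | sort -V | head -n1   # → 1.15.999
# Install with the override:
#   helm install --name prometheus stable/prometheus-operator \
#     --set kubeTargetVersionOverride="1.15.999"
```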

@rshutt commented Oct 25, 2019

@fcuello-fudo et al.

I suppose from the perspective of convention, should the upper bounding check even be in place until it is otherwise determined that there are issues? Or was this a purposeful decision due to some change in 1.16.0? I should read more perhaps?

@fcuello-fudo commented Oct 25, 2019

> should the upper bounding check even be in place until it is otherwise determined that there are issues?

I have no idea where this check came from, but I agree with this ^
