Maintain a helm chart #3169

Closed
izelnakri opened this issue Apr 26, 2020 · 32 comments


@izelnakri

izelnakri commented Apr 26, 2020

Since the introduction of kubectl -k and Helm 3, it is now possible to version-control helm charts and run kustomize over the output of a helm chart without installing additional software. This gives developers full control over their manifests and provides an easy way to automate their workflow using native, standard tooling.
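
For illustration, a minimal sketch of that workflow (the chart, release name, and overlay layout are placeholders, not anything shipped by this project):

```sh
# Render the community chart to plain manifests with Helm 3.
# Assumes the chart repository has already been added with `helm repo add`.
helm template my-release stable/prometheus-operator --namespace monitoring \
  > base/rendered.yaml

# base/kustomization.yaml lists rendered.yaml as a resource; overlays/dev
# layers patches on top. Apply the result with the kustomize support built
# into kubectl (`kubectl apply -k`).
kubectl apply -k overlays/dev
```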

Having the prometheus-operator chart be 'community-supported' creates many roadblocks for the adoption of this project. Certain CRDs are missing from the community-supported helm chart, such as GrafanaDashboard, AlertManagerRule, and GrafanaChart, which could sync with Grafana and Prometheus Alertmanager and allow further automation of essential cluster state.

In my opinion this is an essential part of administering a Kubernetes cluster, and resorting to jsonnet instead of the now-default kubectl -k would be a disservice to many developers. Helm 3 allows for custom repositories, so I believe there is no reason not to maintain a helm chart that people can use easily. People can also run jsonnet over the helm output, so in my opinion a helm chart should be maintained in this repository:

https://helm.sh/docs/topics/chart_repository/

@brancz
Contributor

brancz commented Apr 27, 2020

None of the maintainers use helm. We used to have a helm chart as part of this repo and it did not work out well, which is exactly why it was removed. I'm not opposed to having a separate repo for it, as mentioned in #3161, but we will not have one as part of this repo again. It would give the false perception that the maintainers are properly taking care of it, which would not be the case.

@izelnakri
Author

izelnakri commented Apr 27, 2020

Could you expand on why it did not work well? A lot has happened, and is still happening, on the helm development side of things. Today I find the best way to deploy any third-party k8s project is to render its helm chart and run kustomize over the output when values.yaml falls short. This approach works with jsonnet as well, whereas the currently suggested jsonnet customization is bespoke and doesn't work well with kustomize (which is the default/standard now).

@lilic
Contributor

lilic commented Apr 27, 2020

> Could you expand on why it did not work well?

As Frederic said, we do not use helm or that helm chart, and we cannot maintain it or provide support for it when users open issues about it.

> Certain CRDs are missing from the community-supported helm chart, such as GrafanaDashboard, AlertManagerRule, and GrafanaChart, which could sync with Grafana and Prometheus Alertmanager and allow further automation of essential cluster state.

We are not preventing those who maintain or use that chart from adding those things to the upstream community-supported chart. Why aren't you or someone else adding them upstream? I'm not sure what the difference is between the chart being here or upstream.

@izelnakri
Author

izelnakri commented Apr 27, 2020

Hi @lilic, I would still appreciate an explanation of why having a chart in this repo did not work well and is a bad idea.

" Why are you or someone else not adding those things in the chart upstream? Not sure what the difference between it being here or in the upstream?"

-> If the Chart configurations were in this repo then it is much easier to create a automated release process where chart would always be kept up to date and not dependent on the outdated helm stable PR request process. Also it serves as another way of documentation of the system for developers who would like to contribute or simply want to understand the code as an introduction or progression, other than reading the source code.

@brancz
Contributor

brancz commented Apr 27, 2020

Because none of the maintainers used (or use) helm, helm contributions were always sporadic, and no one ever truly took maintainership of them, they just naturally decayed, and people opened issues against the repo for something the maintainers had no reasonable way to respond to satisfactorily.

Again, we're super happy to give someone that responsibility, even in a blessed space, but not within this repository.

@vsliouniaev
Contributor

@izelnakri Are those additional CRDs you mentioned in use somewhere?

@haf

haf commented May 5, 2020

Instead of having a Helm chart, how about having plain Kubernetes manifests in this repo? That way, people can [c|k]ustomize them the way they want, downstream.

@izelnakri
Author

izelnakri commented May 6, 2020

@vsliouniaev I haven't seen them in use yet, but based on my own use they seem needed.

@haf I think Helm should make versioning declarative with a package.json (maybe helm.json) and a helm_charts folder. Of course we can put all the manifests in one file, but versioning and updating it is still a challenge. In my opinion, helm will eventually need explicit versioning, and most CNCF projects are deployed and used as helm charts across the board, so it is better to support helm directly.

@lilic
Contributor

lilic commented May 6, 2020

> how about having plain Kubernetes manifests in this repo?

We have plain manifests in this repo -> https://github.com/coreos/prometheus-operator/tree/master/example :)

@haf

haf commented May 6, 2020

> We have plain manifests in this repo

Yes, but I mean a complete setup that I can point my kustomization to. Would you let me PR what I mean?

Example apply:

kustomize build k8s/dev | kubectl apply --context fou-dev -f -
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com unchanged
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/prometheus-operator-admission configured
serviceaccount/kube-state-metrics unchanged
serviceaccount/prometheus unchanged
serviceaccount/prometheus-alertmanager unchanged
serviceaccount/prometheus-node-exporter unchanged
serviceaccount/prometheus-operator unchanged
serviceaccount/prometheus-operator-admission unchanged
podsecuritypolicy.policy/prometheus-operator configured
podsecuritypolicy.policy/po-kube-state-metrics configured
podsecuritypolicy.policy/prometheus configured
podsecuritypolicy.policy/prometheus-alertmanager configured
podsecuritypolicy.policy/prometheus-node-exporter configured
podsecuritypolicy.policy/prometheus-operator-admission configured
role.rbac.authorization.k8s.io/prometheus-alertmanager unchanged
role.rbac.authorization.k8s.io/prometheus-operator-admission unchanged
clusterrole.rbac.authorization.k8s.io/prometheus unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-operator unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-operator-admission unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-operator-psp unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-psp unchanged
clusterrole.rbac.authorization.k8s.io/psp-kube-state-metrics unchanged
clusterrole.rbac.authorization.k8s.io/psp-prometheus-node-exporter unchanged
clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-alertmanager unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-operator-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator-psp unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-psp unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/psp-kube-state-metrics unchanged
clusterrolebinding.rbac.authorization.k8s.io/psp-prometheus-node-exporter unchanged
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
configmap/dash-apiserver created
configmap/dash-cluster-total created
configmap/dash-controller-manager created
configmap/dash-etcd created
configmap/dash-k8s-coredns created
configmap/dash-k8s-resources-cluster created
configmap/dash-k8s-resources-namespace created
configmap/dash-k8s-resources-node created
configmap/dash-k8s-resources-pod created
configmap/dash-k8s-resources-workload created
configmap/dash-k8s-resources-workloads-namespace created
configmap/dash-kubelet created
configmap/dash-namespace-by-pod created
configmap/dash-namespace-by-workload created
configmap/dash-node-cluster-rsrc-use created
configmap/dash-node-rsrc-use created
configmap/dash-nodes created
configmap/dash-persistentvolumesusage created
configmap/dash-pod-total created
configmap/dash-prometheus created
configmap/dash-proxy created
configmap/dash-scheduler created
configmap/dash-statefulset created
configmap/dash-workload-total created
secret/istio-scrape-configs unchanged
service/coredns unchanged
service/kube-controller-manager unchanged
service/kube-etcd unchanged
service/kube-proxy unchanged
service/kube-scheduler unchanged
service/kube-state-metrics unchanged
service/prometheus unchanged
service/prometheus-alertmanager unchanged
service/prometheus-node-exporter unchanged
service/prometheus-operator unchanged
deployment.apps/kube-state-metrics configured
deployment.apps/prometheus-operator configured
daemonset.apps/prometheus-node-exporter unchanged
job.batch/prometheus-operator-admission-create unchanged
job.batch/prometheus-operator-admission-patch configured
sealedsecret.bitnami.com/alertmanager-clusterwide unchanged
alertmanager.monitoring.coreos.com/clusterwide unchanged
prometheus.monitoring.coreos.com/clusterwide unchanged
prometheusrule.monitoring.coreos.com/apps unchanged
prometheusrule.monitoring.coreos.com/etcd unchanged
prometheusrule.monitoring.coreos.com/kube-apiserver unchanged
prometheusrule.monitoring.coreos.com/kube-calculations unchanged
prometheusrule.monitoring.coreos.com/kube-prometheus-node-recording unchanged
prometheusrule.monitoring.coreos.com/kube-resources unchanged
prometheusrule.monitoring.coreos.com/kube-scheduler unchanged
prometheusrule.monitoring.coreos.com/kube-storage unchanged
prometheusrule.monitoring.coreos.com/kube-system unchanged
prometheusrule.monitoring.coreos.com/node-exporter unchanged
prometheusrule.monitoring.coreos.com/node-exporter-calculations unchanged
prometheusrule.monitoring.coreos.com/node-network unchanged
prometheusrule.monitoring.coreos.com/node-rules unchanged
prometheusrule.monitoring.coreos.com/node-time unchanged
prometheusrule.monitoring.coreos.com/prometheus unchanged
prometheusrule.monitoring.coreos.com/prometheus-absent unchanged
prometheusrule.monitoring.coreos.com/prometheus-alertmanager unchanged
prometheusrule.monitoring.coreos.com/prometheus-onprem unchanged
prometheusrule.monitoring.coreos.com/prometheus-operator unchanged
prometheusrule.monitoring.coreos.com/watchdog unchanged
servicemonitor.monitoring.coreos.com/apiserver unchanged
servicemonitor.monitoring.coreos.com/coredns unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-etcd unchanged
servicemonitor.monitoring.coreos.com/kube-proxy unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kubelet unchanged
servicemonitor.monitoring.coreos.com/node-exporter unchanged
servicemonitor.monitoring.coreos.com/prometheus unchanged
servicemonitor.monitoring.coreos.com/prometheus-alertmanager unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator unchanged
virtualservice.networking.istio.io/alertmanager unchanged
virtualservice.networking.istio.io/prometheus unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/prometheus-operator-admission configured

@lilic
Contributor

lilic commented May 6, 2020

This seems like a problem with kustomize if it needs a very specific directory layout or something like that. We have this https://github.com/coreos/prometheus-operator/blob/master/kustomization.yaml; I have no idea of the backstory for it, nor do I use kustomize, but yes, if it doesn't work let's fix it, as it seems like it worked before?

cc @brancz what's the history around that file?

@paulfantom
Member

paulfantom commented May 6, 2020

A full working kustomize setup is provided by the kube-prometheus repository. The kustomization file is available at https://github.com/coreos/kube-prometheus/blob/master/kustomization.yaml

@haf

haf commented May 6, 2020

Ok, let's review improvement opportunities for those:

  • grafana dashboard definitions: too large, you'll run into problems applying them; instead use configMapGenerator in kustomization.yaml and point it at plain JSON files (see the sketch after this list). The upside is a clean workflow from Grafana, where you create the dashboard, to JSON files, which kustomize will pick up
  • grafana deployment: this should be split into a separate folder, so we don't have to deploy an old grafana just to use prometheus operator; create a folder per 'concept' and let me as the user compose the setups I care about
  • extra configmaps should be created using kustomization.yaml and the above configMapGenerator, so that we don't have to keep nesting YAML with JSON, but can edit the JSON in a JSON editor
  • setup as a folder is not really expressive
  • Where's the admission hook?
    • and its role binding etc
  • Where's the validating hook?
    • and its role bindings etc
  • Alertmanager is missing a pod security policy
  • kube-state-metrics is missing a pod security policy
  • node-exporter is missing a pod security policy
  • are all rules missing?
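
A minimal sketch of the configMapGenerator idea from the first bullet (the directory layout, manifest paths, and dashboard file names are made up for illustration):

```sh
# Export dashboards from Grafana as plain JSON files under dashboards/,
# then let kustomize wrap each one in a ConfigMap at build time.
cat > kustomization.yaml <<'EOF'
resources:
  - manifests/prometheus-operator-deployment.yaml   # placeholder paths
  - manifests/prometheus-prometheus.yaml
configMapGenerator:
  - name: dash-apiserver
    files:
      - dashboards/apiserver.json
  - name: dash-nodes
    files:
      - dashboards/nodes.json
generatorOptions:
  disableNameSuffixHash: true   # keep stable ConfigMap names so they can be referenced
EOF

kustomize build . | kubectl apply -f -
```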

We need to update https://github.com/coreos/prometheus-operator#prometheus-operator-vs-kube-prometheus-vs-community-helm-chart with the info that the above is the recommended way.

@paulfantom
Member

paulfantom commented May 6, 2020

> Ok, let's review improvement opportunities for those:

This is something that should be discussed in another issue in kube-prometheus.

@haf

haf commented May 6, 2020

We need to update https://github.com/coreos/prometheus-operator#prometheus-operator-vs-kube-prometheus-vs-community-helm-chart with the info that the above is the recommended way.

@paulfantom
Member

paulfantom commented May 6, 2020

> the info that the above is the recommended way

There is no single recommended way, and we don't plan on recommending one over the other in the prometheus-operator repository. The README points out the differences between the two mechanisms of deploying the stack, and it is up to the end user to decide which one they want to use.

We as maintainers of prometheus-operator use jsonnet from kube-prometheus to generate deployment manifests. We don't directly use kustomize, nor do we use helm. However, since kustomize only needs one file listing all the plain YAML manifests, we added that to the kube-prometheus project, and we are open to ideas and contributions there :)

That said, let's move the discussion about kustomize to kube-prometheus, as this issue is about helm.

@brancz
Contributor

brancz commented May 6, 2020

FWIW I think the list of suggestions is awesome! Let’s work on those things! :)

@haf-afa

haf-afa commented May 8, 2020

OK, here's the WIP: prometheus-operator/kube-prometheus#523

@sc250024

I think the thing to do here is to use Bitnami's Prometheus Operator Helm chart: https://hub.helm.sh/charts/bitnami/prometheus-operator. They seem to have integrated most of the core functionality into the chart, and they maintain their charts very well.

On a broader note, I'm really not sure why the maintainers, and people in general, have such an aversion to Helm. Having used it extensively myself, I know its strengths and weaknesses quite well. It's a legitimate packaging format for Kubernetes and is now a CNCF graduated project. Does it do everything? No, but no tool does.

@brancz
Contributor

brancz commented Jun 3, 2020

We've had our share of experience with it as well, and we chose to abandon it because of that. We replaced it with something we're happier with :) We've run this project for over 4 years now; helm chart maintainers come and go every ~6 months. We're promised maintainership and then people abandon it because it's too hard to maintain. We've experienced this too often to let it happen in the main repo again. If it happens in a separate repo, it's easier for us to distance ourselves from it and not have the responsibility fall back on us (as has happened repeatedly before).

@sc250024

sc250024 commented Jun 3, 2020

@brancz That's a good point. Is the issue you all have with Helm itself (i.e. its method of constructing the manifest), or with the fact that there's no steady maintainer? What tool do you all use instead?

@lilic
Contributor

lilic commented Jun 3, 2020

> What tool do you all use instead?

We (in OpenShift) use operators (controllers) to reconcile and further customize the manifests generated by https://github.com/coreos/kube-prometheus.

@stale

stale bot commented Aug 2, 2020

This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.

@stale stale bot added the stale label Aug 2, 2020
@scottrigby

Hello everyone! I came here from the Freenode #prometheus-dev IRC channel, where I'm discussing moving the stable/prometheus-operator helm chart to a new home ahead of the stable and incubator helm repo deprecation this November (see https://github.com/helm/charts/issues/21103 for tracking that progress overall).

Related to this discussion, I want to mention that as of Helm 3.1.0 there is a post-renderer flag that allows end users to customize chart output for their specific use cases. This can allow charts to be dramatically simplified. You can see a demo of how to use kustomize as the post-renderer here: https://github.com/thomastaylor312/advanced-helm-demos/tree/master/post-render
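
A minimal sketch of that flow, assuming a local wrapper script and a kustomization.yaml that patches the rendered output; every name here (release, chart, script, file paths) is a placeholder:

```sh
# A post-renderer is any executable that reads the fully rendered chart
# manifests on stdin and writes the final manifests to stdout.
cat > kustomize-post-renderer <<'EOF'
#!/bin/sh
cat > base/all.yaml        # capture helm's rendered output
exec kustomize build .     # kustomization.yaml lists base/all.yaml plus local patches
EOF
chmod +x kustomize-post-renderer

# Helm >= 3.1 invokes it between rendering and install/upgrade:
helm install my-release stable/prometheus-operator \
  --post-renderer ./kustomize-post-renderer
```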

We will be presenting on this during the upcoming Helm Deep Dive session at KubeCon EU 2020, but you don't need to wait for that to check it out. I'm mentioning it here to address the concerns posted above.

As part of the goal of moving stable charts to new homes, my main question is: should stable/prometheus-operator be moved to the https://github.com/prometheus-operator or the https://github.com/prometheus-community GitHub org?

@scottrigby

I should also note that we have created Helm GitHub Actions to make this an easy process, and I and several others are volunteering to set up the new homes for these charts with essentially the same CI/CD as github.com/helm/charts uses today. I also suggest using a separate git repo for the charts, and bringing in the previous history of the stable chart(s) (git filter-branch etc.) to smooth not only the automation process but also future maintenance. Please hit me up for help with that once the best org for this chart is decided. I can give recommendations for processes and help with setup, but where the prometheus-operator chart should live is better discussed between members of the two orgs.

@lilic
Contributor

lilic commented Aug 6, 2020

I am curious, @scottrigby: you mentioned you would help with setup, but who would maintain these charts? My fear with adding them to our org is that we, the maintainers of the org, would eventually be responsible for them, which has happened in the past. I also wonder why the charts are not simply moving within the helm organization; why is there a need to move each chart to a project org instead of keeping them under helm, for example?

I am not blocking the move, just trying to understand what this means for us maintainers, as most of us don't use the helm chart ourselves, so it would be harder to maintain in the long run. If people decide not to maintain it, do we remove the helm chart? What is the process there? Thanks!

@brancz
Contributor

brancz commented Aug 6, 2020

I kind of agree with @lilic's direction. Projects don't tend to maintain all flavors of deb, rpm, flatpak, snap, ebuilds, pacman, etc. either. I guess the biggest problem I have with putting it in the prometheus-operator org is the kind of "endorsement" that would give to that solution, which does not exist among the current set of maintainers. I'm all for enabling the community to have helm charts, so because of that and the above reasoning I think the prometheus-community org is a good place for the helm chart. I'm happy to sponsor the helm charts going into the prometheus-community org, under the condition that the chart is renamed to kube-prometheus-chart to more accurately reflect its content.

Aside from that, I'm excited to see how what you mentioned, @scottrigby, works out in practice around minimal helm charts, and I hope to see the chart evolve in that direction.

@scottrigby

@lilic to start, it sounds like it would be two of the current chart maintainers (I just reached out to the third again to check in; they had stepped back for a time but wanted to remain present to help merge PRs as needed), and at least one more person has volunteered in the related thread. However, I don't have an opinion on where it should move, only an offer to help once a new home is decided 😄

Quick answer to your question about why we're not moving the chart to the Helm org: while a monorepo for related charts is an idiomatic approach, having all the charts in one repo proved to be unsustainable. Several years ago, when the Distributed Repositories proposal was made, I initially suggested breaking the charts up into individual git repos under the helm org and then transferring them if/when more appropriate homes were found, but this would pose a problem for communities that want to move related charts back into one monorepo in their own org (it is more difficult at that point to splice the git history back together). So we instead used this year to work with app communities to identify their forever homes before moving the appropriate chart(s), with history, from helm/charts to the new location. Hope this makes sense?

@scottrigby

@brancz OK that makes sense, especially with what @lilic is saying 👌

Since you are sponsoring the prometheus-operator chart for prometheus-community, and prometheus-operator devs feel the prometheus-operator org is not the ideal home for the chart, it sounds like we can follow up in prometheus-community/community#28 and close this issue 👍 Thanks everyone!

@lilic
Contributor

lilic commented Aug 10, 2020

Thanks for your explanation! Happy to hear of that outcome; I just want it to find a permanent home.

@scottrigby

This is now complete ✅ I thought I had updated it everywhere, but just remembered this issue.

https://twitter.com/r6by/status/1303742371427909632?s=20

📢 Big news for prometheus users and contributors: the
@HelmPack stable Prometheus charts (17 of them) have moved to https://github.com/prometheus-community/helm-charts 🎉 🥳

Find them on CNCF Artifact Hub: https://artifacthub.io/packages/search?page=1&org=prometheus&kind=0 🕵️‍♀️ 🔎

Contribute here: https://github.com/prometheus-community/helm-charts 🙋‍♂️

https://twitter.com/r6by/status/1303744992717008900?s=20

🚧 Important change notice: the stable "prometheus-operator" chart name has changed: it's now "kube-prometheus-stack", to more accurately reflect the upstream project it installs: https://github.com/prometheus-operator/kube-prometheus. Enjoy!

(also see the thanks tweet in that thread. mainly wanted to post news here)

@paulfantom
Member

Closing this issue, as the helm chart is now maintained at https://github.com/prometheus-community/helm-charts
