Maintain a helm chart #3169
None of the maintainers use helm. We used to have a helm chart as part of this repo and it certainly did not work out well, which is exactly why it was removed. I'm not opposed to having a separate repo for it, as mentioned here: #3161, but we will not be having one as part of this repo again. It would give the false perception that the maintainers are properly taking care of it, which would not be the case.
Could you expand on why it did not work well? A lot has happened and is happening on the helm development side of things. Today I find the best way to deploy any 3rd-party k8s project is to render its chart with helm and run kustomize over the output when values.yaml falls short. This approach works with jsonnet as well, whereas the currently suggested jsonnet customization doesn't work well with kustomize (which is the default/standard now) and is custom.
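A minimal sketch of that workflow, assuming a hypothetical chart name, release name, and patch file (none of these are from this repo):

```yaml
# kustomization.yaml — overlay on top of rendered helm output.
# First render the chart to plain manifests, e.g.:
#   helm template my-release stable/prometheus-operator > rendered.yaml
resources:
  - rendered.yaml
patchesStrategicMerge:
  - my-patch.yaml   # hypothetical patch for anything values.yaml can't express
```

Then `kubectl apply -k .` applies the patched result, with no helm-specific tooling needed at deploy time.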
As Frederic said, we do not use helm or that helm chart, and we cannot maintain it or provide support for it when users open issues about it.
We are not preventing those who maintain or use that chart from adding those things to the upstream community-supported chart. Why are you or someone else not adding those things to the chart upstream? I'm not sure what the difference is between it being here or upstream.
Hi @lilic, I would still appreciate an explanation of why having a chart in this repo did not work well and is a bad idea. "Why are you or someone else not adding those things in the chart upstream? Not sure what the difference between it being here or in the upstream?" -> If the chart configurations were in this repo, it would be much easier to create an automated release process where the chart would always be kept up to date and not be dependent on the outdated helm stable PR process. It would also serve as another form of documentation of the system for developers who want to contribute, or who simply want to understand the system as an introduction or progression, other than reading the source code.
Because none of the maintainers have used or use helm, helm contributions were always sporadic, and no one ever truly took maintainership of the charts, they just naturally decayed, and people opened issues against the repo that the maintainers had no reasonable way to respond to satisfactorily. Again, we're super happy to give someone that responsibility, even in a blessed space, but not within this repository.
@izelnakri Are those additional CRDs you mentioned in use somewhere?
Instead of having a Helm chart, how about having plain kubernetes manifests in this repo? That way, people can [c|k]ustomize them the way they want, downstream.
@vsliouniaev Haven't seen them yet, but based on my use of them, they seem needed. @haf I think Helm should make versioning declarative with
We have plain manifests in this repo -> https://github.com/coreos/prometheus-operator/tree/master/example :) |
Yes, but I mean a complete setup that I can point my kustomization to. Would you let me PR what I mean? Example apply:
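What such an apply might look like — a sketch, assuming a hypothetical directory of complete manifests in this repo (the remote path is illustrative, not an actual layout):

```yaml
# kustomization.yaml — pointing at a hypothetical remote base in this repo.
# kustomize can pull remote targets in the form github.com/owner/repo//path?ref=...
resources:
  - github.com/coreos/prometheus-operator//example?ref=master
```

With that in place, `kubectl apply -k .` would fetch the base and apply it, and any local patches layered in the same kustomization would customize it downstream.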
This seems like a problem with kustomize if it needs a very specific directory layout or something for it. We have this: https://github.com/coreos/prometheus-operator/blob/master/kustomization.yaml. I have no idea about the backstory for it, nor do I use kustomize, but yes, if it doesn't work let's fix it, as it seems like it worked before? cc @brancz what's the history around that file?
A full working kustomize setup is provided by the kube-prometheus repository. The kustomize file is available at https://github.com/coreos/kube-prometheus/blob/master/kustomization.yaml
Ok, let's review improvement opportunities for those:
Need to update https://github.com/coreos/prometheus-operator#prometheus-operator-vs-kube-prometheus-vs-community-helm-chart with the info that the above is the recommended way |
This is something that should be discussed in another issue in kube-prometheus.
There is no one recommended way, and we don't plan on recommending one over the other in the prometheus-operator repository. The readme points out the differences between two different mechanisms of deploying the stack, and it is up to end users to decide which one they want to use. We as maintainers of prometheus-operator use jsonnet from kube-prometheus to generate deployment manifests. We don't directly use kustomize, nor do we use helm. However, since kustomize only needs one file listing all the plain yaml manifests, we added that to the kube-prometheus project, and we are open to ideas and contributions to the kube-prometheus project :) That said, let's move the discussion about kustomize to kube-prometheus, as this issue is about helm.
FWIW I think the list of suggestions is awesome! Let's work on those things! :)
Ok, here's the WIP: prometheus-operator/kube-prometheus#523
I think the thing to do here is to use Bitnami's Prometheus Operator Helm chart: https://hub.helm.sh/charts/bitnami/prometheus-operator. They seem to have integrated most of the core functionality into the chart, and they maintain their charts very well. On a longer note, I'm really not sure why the maintainers / people in general have such an aversion to Helm. Having used it extensively myself, I know its strengths and weaknesses quite well. It's a legitimate packaging format for Kubernetes and is now CNCF Graduated. Does it do everything? No, but no tool does.
We've had our share of experience with it as well, and we chose to abandon it because of that. We replaced it with something we're happier with :) We've run this project for >4 years now; helm chart maintainers come and go every ~6 months. We're promised maintainership, and then people abandon it because it's too hard to maintain. We've experienced this too often to have it happen in the main repo again. If this happens in a separate repo, then it's easier for us to distance ourselves from it and not have the responsibility fall back on us (as has happened repeatedly before).
@brancz That's a good point. Is the issue you all have with Helm itself (i.e. its method of constructing the manifest), or with the fact that there's no steady maintainer? What tool do you all use instead?
We (in OpenShift) use operators (controllers) to reconcile and further customize manifests that are generated by kube-prometheus: https://github.com/coreos/kube-prometheus.
This issue has been automatically marked as stale because it has not had any activity in the last 60d. Thank you for your contributions.
Hello everyone! I came here from the Freenode #prometheus-dev IRC channel, where I'm discussing moving the stable charts. Related to this issue's discussion, I want to mention the recent Helm changes. We will be presenting on this during the Helm Deep Dive KubeCon EU 2020 session coming up, but you don't need to wait for that to check it out. I'm mentioning it here to address concerns posted above. As part of the goal of moving stable charts to new homes, my main question is: where should the prometheus-operator chart live?
I should also note that we have created Helm GitHub actions to make this an easy process, and I and several others are volunteering to set up the new homes for these charts with essentially the same CI/CD as github.com/helm/charts uses today. I also suggest using a separate git repo for the charts, and bringing in previous history from the stable chart(s) (git filter-branch etc.) to make not only the automation process smooth, but also future maintenance. Please hit me up for help with that once the best org for this chart is decided. I can give recommendations for processes and help with setup, but where the prometheus-operator chart should live is better discussed between members of the two orgs.
I am curious, @scottrigby: you mentioned you would help with setup, but who would maintain these charts? My fear of adding them to our org is that we, the maintainers of the org, would eventually be responsible for them, which has happened in the past. I wonder why they did not just move within the helm organization, and why there is a need to move each chart to its own org instead. I am not blocking the move, just trying to understand what this means for us maintainers, as we ourselves (the majority of us) don't use the helm chart, so it would be harder to maintain it in the long run. If people decide to not maintain it, do we remove the helm chart? What is the process there? Thanks!
I kind of agree with @lilic's direction. Projects don't tend to maintain all flavors of deb, rpm, flatpak, snap, ebuilds, pacman, etc. either. I guess the biggest problem I have with putting it in the prometheus-operator org is the kind of "endorsement" that would give that solution, which does not exist among the current set of maintainers. I'm all for enabling the community to have helm charts, so because of that and the above reasoning I think the prometheus-community org is a good place for the helm chart. I'm happy to sponsor the helm charts going into the prometheus-community org, under the condition that it's renamed. Aside from that, I'm excited to see how what you mentioned, @scottrigby, works out in practice around minimal helm charts, and I hope to see the chart evolve in that direction.
@lilic to start, it sounds like it would be two of the current maintainers (I just reached out to the third again to check in; they had stepped back for a time but wanted to remain present to help merge PRs as needed), and at least one more person volunteered in the related thread. However, I don't have an opinion on where it should move, only to help once a new home is decided 😄 Quick answer to your question about why not move the chart to the Helm org: while a monorepo for related charts is an idiomatic approach, having all the charts in one repo proved to be unsustainable. Several years ago, when the Distributed Repositories proposal was made, I initially suggested breaking the charts up into individual git repos under the helm org, and then if/when more appropriate homes were found they could be transferred. But this would pose a problem for communities that want to move related charts back to one monorepo in their own org (it is more difficult at that point to splice the git history back together), so we instead used this year to work with app communities to identify their forever homes before moving the appropriate chart(s), with history, from helm/charts to the new location. Hope this makes sense?
@brancz OK that makes sense, especially with what @lilic is saying 👌 Since you are sponsoring the prometheus-operator chart for prometheus-community, and prometheus-operator devs feel the prometheus-operator org is not the ideal home for the chart, it sounds like we can follow up in prometheus-community/community#28 and close this issue 👍 Thanks everyone!
Thanks for your explanation! Happy to hear of that outcome, just want it to find a permanent home. |
This is now complete ✅ I thought I had updated it everywhere, but just remembered this issue. https://twitter.com/r6by/status/1303742371427909632?s=20
https://twitter.com/r6by/status/1303744992717008900?s=20
(also see the thanks tweet in that thread; mainly wanted to post the news here)
Closing issue as helm chart is now maintained at https://github.com/prometheus-community/helm-charts |
Since the introduction of `kubectl -k` and `helm 3`, it is now possible to version control helm charts and run kustomize over the output of a helm chart without installing additional software. This gives developers 100% control over their manifests and provides an easy way to automate their workflow using the native/standard software.

Having the `prometheus-operator` chart as 'community-supported' creates many roadblocks for the adoption of this project. Certain CRDs are missing in the community-supported helm chart, such as `GrafanaDashboard`, `AlertManagerRule`, `GrafanaChart`, that could sync with Grafana and Prometheus AlertManager and allow further automation of essential cluster state.

In my opinion this is an essential part of administering a kubernetes cluster, and resorting to jsonnet instead of the now-default `kubectl -k` would be a disservice to many developers. Helm 3 allows for custom repositories, so I believe there is no reason not to maintain a helm chart that people can use easily. People can run jsonnet over the `helm output`, so a helm chart has to be maintained in this repository, in my opinion: https://helm.sh/docs/topics/chart_repository/
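For reference, consuming a chart from a custom Helm 3 repository is only a couple of commands. This is a sketch: the repository URL, repo alias, and release name are hypothetical, not an actual published repo.

```shell
# Add a hypothetical custom chart repository and install from it (Helm 3).
helm repo add prom-op https://example.org/charts
helm repo update
helm install my-prom-op prom-op/prometheus-operator

# Or render locally instead of installing, to feed kustomize/jsonnet:
helm template my-prom-op prom-op/prometheus-operator > rendered.yaml
```

The point of the sketch is that a chart hosted anywhere (a static file server, GitHub Pages, etc.) is one `helm repo add` away for users, per the chart repository guide linked above.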