Allow for provider CRs in the same helm chart as the operator #188
Comments
Thanks for describing the issue. What about using a post-install hook for applied CRs, like this:

```yaml
apiVersion: operator.cluster.x-k8s.io/v1alpha1
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
  annotations:
    "helm.sh/hook": post-install
```

Then you can run helm install.
If I'm not mistaken, that would also fail validations during the first install (or any update that introduces new fields), since I believe the post-install hooks are also validated before any resource is applied. It's possible that I'm missing something, though 😃
/triage accepted
I believe we can split this into two Helm charts: one for installing the Cluster API Operator itself, and another for the resources like CoreProvider that utilize it. This is a common structure, and in my tests,
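For illustration, the two-chart split described above might look something like the following (chart and release names here are hypothetical, not the project's actual layout):

```
charts/
  cluster-api-operator/   # CRDs + operator Deployment
  capi-providers/         # CoreProvider and other provider CRs

# Install in sequence, waiting for the operator (and its CRDs) first:
helm install capi-operator ./charts/cluster-api-operator --wait
helm install capi-providers ./charts/capi-providers
```

The `--wait` on the first install matters: the second chart's CRs can only pass validation once the operator chart's CRDs are registered in the API server.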
User Story
As an operator, I would like to define my own chart to manage the CAPI providers and include the operator as a subchart, so I don't need to manage multiple charts independently.
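As a sketch of what this user story could look like in practice, a parent chart could declare the operator as a subchart via `Chart.yaml` dependencies. Note that the chart name, repository URL, and version below are placeholders for illustration, not the real chart coordinates:

```yaml
# Hypothetical parent chart bundling the operator as a subchart.
apiVersion: v2
name: my-capi-providers
version: 0.1.0
dependencies:
  - name: cluster-api-operator
    repository: https://example.org/charts   # placeholder repository
    version: "0.x.x"                         # placeholder version
```

With the current chart layout this still hits the validation problem described below, which is what this issue proposes to fix.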
Detailed Description
The current helm chart for the operator includes the CRDs as `templates`, which makes it impossible to bundle CRs for those CRDs in the same or a parent chart. In this scenario, the first install will fail in the validation phase, since the CRDs won't yet be defined in the API server. Moreover, even if the first install is handled separately, any new field added to the CRDs will also fail upgrade validations, since helm runs validations against the API server before applying any resource, and at that time the installed CRDs will still be missing the new field included in the CRs. With the current setup, it's necessary to manage two charts (one including the operator, another with the provider definitions) independently and orchestrate them in sequence.

Helm supports CRDs as first-class citizens (the `crds` folder), making sure they are installed before any CR and allowing discovery validations to succeed when the templates contain CRs that are instances of those CRDs. However, helm only supports creating CRDs the first time, not updating them. More info about the history of CRDs in helm and their challenges can be found here. This limitation makes using the `crds` folder for CRD updates a no-go.

I propose to modify the operator helm chart to apply CRDs from a `Job` invoked as a `pre-upgrade` and `pre-install` hook. During chart installation/upgrade, helm would create a `Job` configured to apply all CRD manifests (using an image that contains `kubectl`). These CRDs could be injected into the `Job` pod through `ConfigMaps`. These jobs will be executed before any other resource is installed, ensuring the CRDs exist before the operator deployment is installed. Any consumer of this chart can write their own hooks (with a lower precedence than ours) to create/update the provider CRs after the CRDs have been installed/updated. CRDs and CRs won't be run through the initial helm validations before the hooks are created, but helm will stop the installation if any of the `Job`s fail. Helm will automatically clean up all these Jobs and ConfigMaps after the install/upgrade operation is done.

The drawback of this solution is that the CRDs won't be part of the helm release (they won't be listed as part of the release resources and won't be deleted with `helm uninstall`). We could circumvent this issue (if we wanted to) by using a post-delete `Job` that deletes them with kubectl in the same way it created them.

Anything else you would like to add:
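A minimal sketch of the proposed hook resources, to make the idea concrete. All names, the image, the hook weights, and the RBAC setup are illustrative assumptions, not a final design:

```yaml
# ConfigMap carrying the CRD manifests, created before the Job (lower hook weight).
apiVersion: v1
kind: ConfigMap
metadata:
  name: capi-operator-crds
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-2"
    "helm.sh/hook-delete-policy": hook-succeeded
data:
  crds.yaml: |
    # CRD manifests would be embedded here at chart packaging time.
---
# Job that applies the CRDs before any regular chart resource is installed.
apiVersion: batch/v1
kind: Job
metadata:
  name: capi-operator-apply-crds
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      # A serviceAccountName with RBAC permission to manage CRDs would be needed.
      containers:
        - name: apply-crds
          image: bitnami/kubectl  # any image containing kubectl would work
          command: ["kubectl", "apply", "-f", "/crds/crds.yaml"]
          volumeMounts:
            - name: crds
              mountPath: /crds
      volumes:
        - name: crds
          configMap:
            name: capi-operator-crds
```

Consumers could then register their provider CRs through their own hooks with a higher weight (e.g. `"0"`), so they run after the CRDs are in place. A post-delete `Job` of the same shape, running `kubectl delete`, could address the uninstall drawback mentioned above.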
I would love to contribute these changes to the project, but I'm opening this issue first to align on the solution and make sure this is a feature the community is interested in. There might be other solutions; happy to discuss if anyone has other ideas.
/kind feature