
Allow for provider CRs in the same helm chart as the operator #188

Open
g-gaston opened this issue Jul 13, 2023 · 4 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@g-gaston

User Story

As an operator I would like to define my own chart to manage the CAPI providers and include the operator as a subchart so I don't need to manage multiple charts independently.

Detailed Description

The current helm chart for the operator includes the CRDs as templates, which makes it impossible to bundle CRs for those CRDs in the same chart or in a parent chart. In this scenario, the first install fails in the validation phase, since the CRDs are not yet defined in the API server. Moreover, even if the first install is handled separately, any new field added to the CRDs will also fail upgrade validations: helm validates against the API server before applying any resource, and at that point the installed CRDs still lack the new field used by the CRs. With the current setup, it's necessary to manage the two charts (one including the operator, another with the provider definitions) independently and orchestrate them in sequence.

Helm supports CRDs as first-class citizens (the crds folder), making sure they are installed before any CR and allowing discovery validations to succeed when the templates contain CRs that are instances of those CRDs. However, helm only supports creating CRDs on first install, not updating them. More info about the history of CRDs in helm and their challenges can be found here. This limitation makes the crds folder a no-go for CRD updates.

I propose modifying the operator helm chart to apply CRDs from a Job invoked as a pre-install and pre-upgrade hook. During chart installation/upgrade, helm would create a Job configured to apply all CRD manifests (using an image that contains kubectl). These CRDs could be injected into the Job pod through ConfigMaps. These Jobs will be executed before any other resource is installed, ensuring the CRDs exist before the operator deployment is installed. Any consumer of this chart can write their own hooks (with a lower precedence than ours) to create/update the provider CRs after the CRDs have been installed/updated. CRDs and CRs won't be run through the initial helm validations before the hooks are created, but helm will stop the installation if any of the Jobs fails. Helm will automatically clean up all these Jobs and ConfigMaps after the install/upgrade operation is done.
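A minimal sketch of what such a hook Job could look like. The resource names, namespace-less metadata, hook weight, and kubectl image are illustrative assumptions, not part of the current chart:

```yaml
# Hypothetical pre-install/pre-upgrade hook Job; names and image are
# illustrative. It applies CRD manifests mounted from a ConfigMap.
apiVersion: batch/v1
kind: Job
metadata:
  name: capi-operator-apply-crds
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"          # run before any consumer-defined hooks
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: apply-crds
          image: bitnami/kubectl:latest  # any image that contains kubectl
          command: ["kubectl", "apply", "-f", "/crds/"]
          volumeMounts:
            - name: crds
              mountPath: /crds
      volumes:
        - name: crds
          configMap:
            name: capi-operator-crds     # ConfigMap carrying the CRD manifests
```

The hook-delete-policy shown here lets helm garbage-collect the Job after a successful run, matching the cleanup behavior described above.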

(Excalidraw diagram attached, 2023-07-13; image not reproduced here.)

The drawback of this solution is that the CRDs won't be part of the helm release (they won't be listed among the release resources and won't be deleted by helm uninstall). We could circumvent this issue (if we wanted to) by using a post-delete Job that deletes them with kubectl, in the same way it created them.
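For the cleanup case, the same Job pattern could be reused with a different hook annotation. This is a hedged sketch under the same illustrative names as above:

```yaml
# Hypothetical post-delete hook: removes the CRDs that the pre-install
# hook applied, restoring "helm uninstall removes everything" semantics.
metadata:
  name: capi-operator-delete-crds
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
# ...container identical to the apply Job except for the command:
#   command: ["kubectl", "delete", "-f", "/crds/", "--ignore-not-found"]
```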

Anything else you would like to add:

I would love to contribute these changes to the project, but I'm opening this issue first to align on the solution and make sure this is a feature the community is interested in. There might be other solutions; happy to discuss if anyone has other ideas.

/kind feature

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 13, 2023
@alexander-demicev
Contributor

Thanks for describing the issue. What about using a post-install hook for applying CRs, like this:

apiVersion: operator.cluster.x-k8s.io/v1alpha1
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
  annotations:
    "helm.sh/hook": post-install

then you can run helm install with the --wait flag, and it will apply the CRs after the CRDs and all other components are created. I'm already working on a PR for "quickstart" bootstrapping of a management cluster using helm.

@g-gaston
Author

> Thanks for describing the issue. What about using a post-install hook for applying CRs, like this:
>
> apiVersion: operator.cluster.x-k8s.io/v1alpha1
> kind: CoreProvider
> metadata:
>   name: cluster-api
>   namespace: capi-system
>   annotations:
>     "helm.sh/hook": post-install
>
> then you can run helm install with the --wait flag, and it will apply the CRs after the CRDs and all other components are created. I'm already working on a PR for "quickstart" bootstrapping of a management cluster using helm.

If I'm not mistaken, that would also fail validations during the first install (or during any update that introduces new fields), since I believe the post-install hooks are also validated before any resource is applied.

It's possible that I'm missing something though 😃

@alexander-demicev
Contributor

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 21, 2023
@kahirokunn

kahirokunn commented May 27, 2024

I believe we can split this into two Helm charts: one for installing the Cluster API Operator itself, and another for the resources like CoreProvider that utilize it. This is a common structure, and in my tests, helm upgrade --install works very well. I will share an actual sample in a zip file.
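To illustrate the two-chart split, the provider chart would carry only templates for CRs such as CoreProvider, installed once the operator chart (and its CRDs) is already in place. Chart and file names here are illustrative assumptions, not taken from the attached samples:

```yaml
# templates/core-provider.yaml in a hypothetical "capi-operator-provider"
# chart, kept separate from the chart that installs the operator itself.
apiVersion: operator.cluster.x-k8s.io/v1alpha1
kind: CoreProvider
metadata:
  name: cluster-api
  namespace: capi-system
```

Because the CR lives in its own chart, helm validates it against CRDs that already exist in the API server, avoiding the first-install and upgrade validation failures described in the issue.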

capi-operator-provider.zip
capi-operator.zip
