✨ [WIP] feat: Add the proposal about the helm chart autogenerate plugin #3632
base: master
Conversation
Signed-off-by: dashanji <dashanjic@gmail.com>
# Auto-generate Helm Chart Plugin

The proposal is to add a new alpha command to auto-generate a helm chart for a given kustomize directory created by kubebuilder.
Suggested change:
This proposal aims to introduce an optional mechanism that allows users to generate a Helm Chart from their Kubebuilder-scaffolded project. This will enable them to effectively package and distribute their solutions.
To achieve this goal, we can introduce a Helm Chart plugin (i.e. helmchart/v1alpha), which will provide the scaffolds. Alternatively, we could implement a command to generate the Helm Chart from a provided Kubebuilder project.
After analyzing the results, I believe that the plugin will be a better fit. However, I think we need to draft both options, discuss the pros and cons, and gather the community's opinions. After receiving other inputs, we can finalize the decision here. Also, add why one solution was selected instead of the other.
For instance, a simple comparison might be:
Command:
Pros: Always generates the Helm Chart from the project, ensuring that the output is synchronized.
Cons: Does not preserve customizations made by the users.
Plugin:
Pros: Aligned with the Kubebuilder ecosystem and seems easier to maintain. Provides better flexibility in trying to keep the Operator and Helm synchronized while preserving the changes.
Cons: We cannot ensure that the Helm Chart will always be equivalent to the Kubebuilder project.
Let's engage with the community, evaluate other inputs, and then document the final decision here.
Let me try to understand: the Command type adds a new subcommand such as `kubebuilder generate-helm-chart`, and the Plugin type adds a new internal plugin such as `kubebuilder init --plugins=helm-chart/v1alpha`. This seems to be only a formal difference; the actual functions are similar.
If so, I agree with the Plugin type. Users can get a default helm chart layout from the existing kustomize configurations and project configurations.
As for synchronization with the kustomize directory, I think the main cost is adding/deleting the CRDs. However, we could leverage the kustomize and project configurations to generate a new helm chart once we run the command `kubebuilder init --plugins=helm-chart/v1alpha`. Also, if there is an already-generated helm chart, the plugin will recognize it and not affect the customized parts.
To make the above feature clearer, we can split it into two commands: `kubebuilder init` and `kubebuilder edit`.
I have a concern, which I believe touches this subject.
When we add a new marker for a CRD and run `make manifests`, it generates new changes to the CRD in `config/crds`. When we add RBAC markers, it changes the Roles in `config/rbac`. I'm talking about CRDs and RBAC, but it could really be anything that changes the kustomize bases and consequently should also change the chart.
So I think that ideally, when we run `make manifests`, these new changes should also be reflected in `deploy/helm-chart/crds` (for CRD changes) and `deploy/helm-chart/templates` (for RBAC). @dashanji @camilamacedo86 do you agree this is a goal of ours?
And if so, does either the plugin or command affect this flow? Would we still be able to change the Makefile to include this new behavior regardless of it being a plugin or a command?
> So I think that ideally, when we run make manifests, these new changes should also be reflected in deploy/helm-chart/crds (for CRD changes) and deploy/helm-chart/templates (for RBAC). @dashanji @camilamacedo86 do you agree this is a goal of ours?

Agree.

> And if so, does either the plugin or command affect this flow? Would we still be able to change the Makefile to include this new behavior regardless of it being a plugin or a command?

I think users are supposed to control the flow. First, generate a new kustomize directory by running `make manifests`, then use the plugin or command to generate a helm chart from the new kustomize directory.
In my opinion, your concern is about how to keep the helm chart consistent with the kustomize manifests. Can we use the idea of PATCH or MERGE (like `kubectl patch`) to solve that concern?
> I think users are supposed to control the flow. First, generate a new kustomize directory by running make manifests, then use the plugin or command to generate a helm chart from the new kustomize directory.

Ah, ok. I was initially thinking of just running `make manifests`, not running the plugin/command again, and that once we run the command/plugin once, it changes the Makefile to make sure `make manifests` also controls the helm chart.
I think this would be easier for users so that they don't forget to run an additional command to sync the chart, you know? I know I surely would forget many times 😁
How does that sound?

> In my opinion, your concern is about how to keep the helm chart consistent with the kustomize manifests. Can we use the idea of PATCH or MERGE (like kubectl patch) to solve that concern?

That wasn't my main concern here, but I agree that PATCH and MERGE will be a good way to try to prevent undoing user customizations.
> Ah ok, I was initially thinking of just running make manifests

That is exactly what I am proposing. :-) Here's the workflow:
- Execute `kubebuilder init | edit`.
- During this step, inform the plugin that you're looking to use the helm chart.
- This action will scaffold a specific target within the Makefile tailored for this scenario.
- Hence, when you run `make manifests` or generate commands, the CRDs will automatically sync with the helm chart.
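A rough sketch of what such a scaffolded sync step could do. The chart paths (`deploy/helm-chart/...`) are taken from this thread and are assumptions, not a finalized layout:

```shell
# Hypothetical body of a Makefile "manifests" hook the plugin could scaffold.
# After controller-gen regenerates the kustomize bases, mirror the generated
# manifests into the chart so the chart stays in sync automatically.
mkdir -p deploy/helm-chart/crds deploy/helm-chart/templates
# Copy generated CRDs into the chart's crds/ directory
cp config/crd/bases/*.yaml deploy/helm-chart/crds/ 2>/dev/null || true
# Copy generated RBAC manifests into the chart's templates/ directory
cp config/rbac/role.yaml deploy/helm-chart/templates/ 2>/dev/null || true
```

With something like this wired into `make manifests`, users would not need to remember a separate sync command.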
It might be worth adding a usage example, as described in the comment above, as well.
## Summary

TBF
Suggested change:
The Helm Chart created by the project should accurately reflect the default values utilized in Kubebuilder. Furthermore, any newly generated CRDs or Webhooks should be seamlessly integrated into the Helm Chart. To accomplish this, the new plugin needs to incorporate all the subCommands available to the [kustomize plugin](https://book.kubebuilder.io/plugins/kustomize-v2). Additionally, the edit feature, similar to what is available in the optional [Grafana plugin](https://book.kubebuilder.io/plugins/grafana-v1-alpha), should be integrated to enable modifications in projects that were previously created.
Thus, the init subCommand should take charge of creating the Helm Chart directory and scaffolding the base of the Helm Chart, based on the default scaffold provided by Kustomize. This means that the same default values, scaffolded to the manager and proxy, along with the options to enable metrics via Prometheus, cert-manager, and webhooks, should be disabled (commented out) by default, as seen in the [config/default/kustomization.yaml](https://github.com/kubernetes-sigs/kubebuilder/blob/f4744670e6fc8ed29f87161d39a8f2f3838c27f4/testdata/project-v4/config/default/kustomization.yaml#L21-L27) file.
Subsequently, if a user decides to scaffold webhooks, we should uncomment the relevant sections. Please refer to PRs https://github.com/kubernetes-sigs/kubebuilder/pull/3627 and https://github.com/kubernetes-sigs/kubebuilder/pull/3629 for examples of this implementation.
Lastly, when new API(s) (CRDs) are introduced, they must be added to the Helm Chart as well. As a result, the Makefile targets make generate and make manifest should be modified when the proposed plugin is in use to ensure that CRDs are accurately copied and pasted into the Helm Chart's respective directory.
To allow users to customize their Helm Charts, we should avoid overwriting files within the Helm Chart, with the exception of CRDs/webhooks, as changes made to these must be mirrored in the Helm Chart. However, what is scaffolded by the init/edit should remain primarily unchanged. We would only uncomment the options as described above.
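To illustrate the suggestion above, a hypothetical `values.yaml` scaffolded by init could keep disabled by default the same options the kustomize scaffold keeps commented out. The key names below are illustrative assumptions, not a final schema:

```yaml
# Hypothetical deploy/helm-chart/values.yaml defaults mirroring
# config/default/kustomization.yaml, where webhooks, cert-manager, and
# Prometheus metrics start commented out in the kustomize scaffold.
webhook:
  enable: false      # flipped to true when webhooks are scaffolded
certmanager:
  enable: false      # flipped to true alongside webhooks
metrics:
  prometheus:
    enable: false    # opt-in, matching the kustomize default
```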
Makes sense.
There are several use cases:
Case 1: Users developed an Operator and would like to distribute the helm chart of the operator for the first time.
Case 2: Users already have a helm chart generated by the plugin, and they want to update the helm chart as the kustomize directory and the files under it are updated. The update could be:
- CRDs (a new/old CRD is added/deleted/updated).
- Webhooks (a new/old Webhook is added/deleted/updated).
- RBAC roles.
- ...
Case 3: Users already have a helm chart generated by the plugin, and they update it for customizations (e.g., adding new configurations in `values.yaml`). Once a new CRD is added, generate a new helm chart with the previous customizations.
Case 1 and Case 2 can be implemented in the common command `kubebuilder init --plugin=helm-chart/v1alpha`. The `init` command will overwrite all files under the directory `deploy/chart`.
I think the key point is how we address Case 3. The possible approach in my mind is using `kubebuilder edit --plugin` to preserve the previous files and then MERGE the previous files with the generated files.
IMHO:
- Case 1: `kubebuilder init|edit --plugins=helm.package/v1alpha1` (it will add the plugin to the project and scaffold the Helm chart to distribute the project)
- Case 2: (this one I think is the tricky part) but what I thought was to see if we could:
  a) When we run `make manifests` or generate commands, the CRDs will automatically sync with the helm chart.
  b) If we re-run edit, then try to re-scaffold everything and just skip the `values.yaml`.
- Case 3: Then, this one is like everything in kubebuilder: we will try to update without re-writing the `values.yaml` after it gets created (that is our best effort, and we might need to write this down). However, we have some kind of limitation here.

It would be nice if we could add those points in the doc as well.
The proposed command will auto-generate a helm chart from a kustomize directory created by kubebuilder while minimizing manual effort. The generated helm chart will cover the frequently used configurations.

Context:

- See the discussion [Is there a good solution/tool to generate a new helm chart from the manifests?](https://github.com/kubernetes-sigs/kubebuilder/discussions/3074)
Suggested change:
**NOTE** For further context see the [discussion topic](https://github.com/kubernetes-sigs/kubebuilder/discussions/3074)

I think we can link the topic, but we should write down here what is relevant or not.
- Help developers generate a helm chart from a kustomize directory.
- Generate a helm chart without human interaction once the kustomize directory is updated.
- Preserve the developer's ability to customize the helm chart.
In my opinion, that is one of the reasons why the solution cannot be implemented solely through commands.
tolerations: {}

## Configure the RBAC roles.
serviceAccountName: ""
We need to put the same value from: https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v4/config/manager/manager.yaml#L101
ports:
- port: 443
  protocol: TCP
  targetPort: 9443
This one would be `config/default/manager_webhook_patch.yaml`: https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v4/config/default/manager_webhook_patch.yaml#L6-L23
fullnameOverride: ""

crd:
  create: true
This one, for example, should be scaffolded as false when we run the init command with the plugin (or edit) and no CRDs have been created yet. Note that we track everything in the PROJECT file, so we must check it in order to properly set false or true.
Then, when we run `kubebuilder create api`, we should ensure that it is set to true if a resource was generated.
I've seen a couple of controllers offering the option not to install CRDs via the chart, but honestly I've never seen a use case for it being disabled. @dashanji have you ever come across the need to disable it?
If it really is uncommon, I would push for not exposing this field in the first place, and letting the small percentage of users that need this add it manually.
For example, we don't conditionally install CRDs via the generated kustomize: they're always installed. I think it's a good idea to follow the same decisions that were taken for Kustomize here in Helm, so that users are not surprised with inconsistent behavior.
> I've seen a couple of controllers offering the option not to install CRDs via the chart, but honestly I've never seen a use case for it being disabled. @dashanji have you ever come across the need to disable it?

Then that is a too-specific edge case, I think.
But in the future, if needed, we can revisit.
> For example, we don't conditionally install CRDs via the generated kustomize: they're always installed.

How are they installed already?
If you create the API/CRD in the Operator, how can it exist on the cluster in a common scenario before it is installed?
> For example, we don't conditionally install CRDs via the generated kustomize: they're always installed.
> How are they installed already? If you create the API/CRD in the Operator, how can it exist on the cluster in a common scenario before it is installed?

I mean to say that there is no way to conditionally install the CRD via Kustomize, as an argument that there shouldn't be one for Helm either. I have never come across a scenario in which the CRD was installed prior to the controller being deployed.
So, same controller installed multiple times in the same cluster? Don't they conflict with each other, in the sense that they all reconcile the same resources? Just trying to understand. In our case, we only install each controller once per cluster, so I guess that's why we never came across this.
> So, same controller installed multiple times in the same cluster? Don't they conflict with each other, in the sense that they all reconcile the same resources?

Yep, especially when we use cert-manager as a subchart: the cert-manager component will exist under each namespace along with the operator. Also, there is a situation where you do not use `make undeploy/uninstall` to delete resources such as the CRD and webhook; instead, you directly use a command like `kubectl delete deployment xxx` to delete the controller.
Also, I believe there are several operators that use the option to adapt to complex and diverse kubernetes clusters.
Got it, I can see how this is useful for controllers that will be used as a dependency.
We took a different approach, in which we just assume the CRD from another controller we require exists (and if it doesn't, it should be fixed elsewhere). For example, in controllers that depend on cert-manager, we just add our Issuers, Certificates, etc, but no CRD. The CRD should be provided by the cert-manager installation. This way we don't need to always define all CRDs we depend on, only the ones we provide in each specific controller.
But the bottom line is: should kubebuilder adopt this, given that there are many different ways to solve this particular problem? Since many charts provide this, I'd be ok with it being here as well and no longer have a strong opinion on this.
I think if we have cert-manager enabled, then we should scaffold it as a dependency in the helm chart.
But these may be good points for the open questions.
How to deal with CertManager and Prometheus?
^ I think we need to add those to the Open Questions.
registry: gcr.io
repository: "kubebuilder/kube-rbac-proxy"
tag: v0.13.0
pullPolicy: IfNotPresent
For this one, we need to put the data that we have in: https://github.com/kubernetes-sigs/kubebuilder/blob/master/testdata/project-v4/config/default/manager_auth_proxy_patch.yaml
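As an illustration of carrying those patch settings into the chart, a hypothetical `values.yaml` fragment could expose the proxy's image and arguments together. The key layout and the args are assumptions sketched from the linked patch, not a final schema:

```yaml
# Hypothetical values.yaml fragment mirroring
# config/default/manager_auth_proxy_patch.yaml (layout is an assumption).
kubeRbacProxy:
  image:
    registry: gcr.io
    repository: kubebuilder/kube-rbac-proxy
    tag: v0.13.0
    pullPolicy: IfNotPresent
  args:
    - --secure-listen-address=0.0.0.0:8443
    - --upstream=http://127.0.0.1:8080/
    - --logtostderr=true
    - --v=0
```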
### Risks and Mitigations

TBF
Suggested change:
**Difficulty in Maintaining the Solution**
Maintaining the solution may prove challenging in the long term, particularly if it does not gain community adoption and, consequently, collaboration. To mitigate this risk, the proposal aims to introduce an optional alpha plugin or to implement it through an alpha command. This approach provides us with greater flexibility to make adjustments or, if necessary, to deprecate the feature without definitively compromising support.
Additionally, it is crucial to cover the solution with end-to-end tests to ensure that, despite any changes, the Helm Chart remains deployable and continues to function well, as is the practice with our other plugins and solutions.
## Drawbacks

TBF
Suggested change:
**Generation of Deployable Helm Chart:**
There might be challenges in generating a deployable Helm Chart without human intervention, similar to the current process for Kubebuilder projects.
## Alternatives

TBF
Here I think we need to discuss the command option.
Also, add the pros and cons of each one of them.
Hi @dashanji,
That's fantastic! I've added some comments in the initial review. Let's work together to refine this proposal and ensure that it's polished. Once we've made progress, we can share it with the community and gather feedback.
This one is quite complex, so we will need some back-and-forth and feedback. But bear with us and we will be there.
Well done! 👍
I really like the way this is headed, great work! I left a few comments
generation to be as simple as possible without the need to write any additional duplicate files.

- As a user, I want the helm chart to cover all potential configurations when I deploy it on the Kubernetes cluster.
Suggested addition:
- As a platform engineer, I want to be able to manage different versions and configurations of a controller/operator across multiple clusters and environments based on the same distribution artifact (Helm Chart), with versioning and dependency locking for supply chain security.
Hi @dashanji, do you think you could work on this one to try to incorporate as much of the review comments as possible, so that we can go for another round and see what is missing?
Okay, I will try to gather these comments and push a new commit in the next few days.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: dashanji
Signed-off-by: dashanji <dashanjic@gmail.com>
377d8a7 to bfbf651
Hi @camilamacedo86 @LCaparelli, I updated the design doc per your constructive comments. Thanks. In round 1, we talked a lot about Motivation and Plugin vs. Command, etc. In round 2, I think we'd better talk about some details. Assume we already have a kubebuilder project:
├── crds
│   ├── <CRDs YAML files generated under config/crds/>
I think some consideration needs to be given to how this is done. In our experience converting our kubebuilder projects to helm charts, the CRD folder in helm was not what we wanted, as it precluded us from ever upgrading the CRD again with future changes. We ended up treating the CRD as another file in templates to facilitate upgrades. This is an approach used by several operators today, however with the obvious drawback of "if you uninstall the operator, you uninstall the CRDs".
> in helm was not what we wanted as it precluded us from ever upgrading the CRD

What do you mean by this one?
Could you please give an example?
Once helm installs a CRD from the `crds/` folder, it will skip it if it detects that the CRD is already installed.
This meant we were unable to change the CRD schema after the initial installation. We just put the CRD definition in the `templates/` folder to facilitate actual updates.
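For illustration, the approach described above amounts to shipping the CRD as a regular template. The file name, guard value, and resource names below are hypothetical:

```yaml
# Hypothetical templates/crd-captains.yaml: because it lives under templates/
# rather than crds/, `helm upgrade` applies schema changes like any other
# resource. Drawback: uninstalling the release also deletes the CRD.
{{- if .Values.crd.create }}
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: captains.crew.example.com   # illustrative name
spec:
  group: crew.example.com
  names:
    kind: Captain
    plural: captains
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
{{- end }}
```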
Hi @Jay-Madden,
That seems a good point and a good approach, but it means that the lifecycle of the CRD is tied to the Helm release. This could potentially lead to issues if the Helm chart is deleted or if different releases try to manage the same CRD. Other approaches seem to be:
- Versioning CRDs: When making significant changes to a CRD schema, consider versioning the CRD. This allows you to deploy a new version of the CRD without affecting existing resources that use the old schema.
- Helm Hooks: You could use Helm hooks to manage the lifecycle of CRDs. For example, a pre-upgrade hook could apply changes to a CRD before the rest of the Helm chart is processed.

So, I think we need an Open Question in the proposal, and we need a POC to investigate the pros and cons of each scenario, as well as to check how people have been doing this.
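A minimal sketch of the Helm-hook idea mentioned above. The Job name, image, service account, and how the CRD manifests reach the container are all assumptions; a real implementation would also need RBAC allowing the Job to apply CRDs:

```yaml
# Hypothetical pre-upgrade hook: a Job that applies the chart's CRDs before
# the rest of the release is processed, working around Helm's crds/ behavior.
apiVersion: batch/v1
kind: Job
metadata:
  name: crd-upgrade
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: crd-upgrader      # assumption: has CRD apply permissions
      containers:
        - name: apply-crds
          image: bitnami/kubectl:latest     # assumption
          command: ["kubectl", "apply", "-f", "/crds"]
          # assumption: CRD manifests baked into an image or mounted at /crds
```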
As a newer user of kubebuilder, I am very excited about this proposal!
An additional approach for CRDs could be a separate helm chart for the CRDs, alongside the main chart for deploying an operator. The AWS Karpenter Operator uses this approach, and it has allowed full control of both CRD versions and operator version in automated production environments using helm technologies, without some of the drawbacks like deleting CRDs when the operator chart is deleted.
This approach also plays nicely with tools like Terraform and ArgoCD.
Helm typically uses the `crds/` directory as the default location for Custom Resource Definitions (CRDs). In my opinion, if we're developing a plugin to distribute Operators via Helm, it should adapt to the Helm layout.
However, it's important to recognize that preferences vary across teams and companies; we can't satisfy everyone. Our best approach is to adhere to Helm's standard layout and guidelines.
From my experience, and as observed in many Helm Charts distributing Operators (such as CertManager), it's crucial to ensure that CRDs are applied before installing or upgrading a Helm Chart. This practice is generally achievable with any solution (Terraform, ArgoCD, etc.). However, accommodating those variations to try to work around the Helm limitations may be beyond our scope.
PS: IMO, we need to add this to the Open Questions, and something like the above would be the answer. The plugin will package the Operator following Helm's layout definition; we cannot attend to everybody's deviations and personal preferences.
> However, it's important to recognize that preferences vary across teams and companies; we can't satisfy everyone. Our best approach is to adhere to Helm's standard layout and guidelines.

Totally agree you can't please everyone; however, an option for either the `crds/` folder or just putting the CRDs in `templates/` seems a low-friction implementation that I believe will account for 90%+ of user needs.
Hi @Jay-Madden,
Helm best practices: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-1-let-helm-do-it-for-you
That is what we should follow.
If a user wants to change things, then they can tool their releases to change where the CRD will be distributed as they please.
I get the best-practices push, and I also get the point @Jay-Madden makes that this seems simple to implement and would cover a very substantial amount of use cases where the best practices wouldn't. A nice-to-have feature.
I think this can be done as a further improvement if/when folks request it as a feature; there is no need to support it in a first version. Let's get this off the paper first.

> PS: IMO, we need to add this to the Open Questions, and something like the above would be the answer. The plugin will package the Operator following Helm's layout definition; we cannot attend to everybody's deviations and personal preferences.

Agreed.
Chiming in with my two cents on this crucial question.
The helm project essentially punted on CRD handling. Method 1 (linked above) is essentially that helm will create the CRDs on the initial install... and never touch them again. Last I checked, it will silently ignore CRDs in upgraded chart versions, even if they differ from what is installed. This is potentially surprising and unexpected behavior, and I think it would be wise for the kubebuilder project to avoid supporting or endorsing it.
I believe the basis for this decision by the helm project is that upgrading CRDs is a potentially destructive operation, so the maintainers opted for a safe implementation to avoid helm itself being blamed for a CRD update that broke a cluster. The design rather puts the onus on cluster administrators to upgrade the CRDs fully out-of-band of helm.
I would suggest supporting one or both of the following options:
- As was suggested by @Jay-Madden, the better choice is to put the CRDs into the `templates/` directory. This will cause helm to treat the CRDs like any other object and it will upgrade them as expected. The one downside to this approach is that it means that you cannot put both a CRD and a CR of that type in the same chart. Helm is unable to create the CRD, wait for it to be registered, and then proceed with the CR creation in a single release install/upgrade. This leads me to option 2.
- Scaffold separate charts for the APIs.
Option 2 is actually its own can of worms, but one that likely needs to be opened sooner rather than later. If a CRD specified a conversion webhook, then the "API chart" would actually need to contain the CRDs and the webhook service/workload (and in many cases, it would also make sense to include validating/mutating webhooks in an "API chart"). This structure would then have implications on the scaffolding of the actual go project itself because it would need to scaffold separate main modules and image builds for the webhooks and controllers
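To make option 2 concrete, a rough sketch of how the two-chart split could look on disk (chart names and file layout are assumptions for illustration, not a proposed scaffold):

```shell
# Sketch only: the two-chart split from option 2. Names are illustrative.
# The "API chart" carries CRDs and conversion/validating webhooks; the
# "operator chart" carries the controller workload and can ship CRs freely,
# because the CRDs are guaranteed to exist by the time it is installed.
mkdir -p my-operator-api/templates my-operator/templates

touch my-operator-api/templates/crds.yaml               # CRDs as regular templated objects
touch my-operator-api/templates/conversion-webhook.yaml # webhook service + workload
touch my-operator/templates/manager-deployment.yaml     # controller manager
touch my-operator/templates/sample-cr.yaml              # CRs can assume the CRDs exist
```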
- Converting any Kustomize configuration to Helm Charts like [helmify](https://github.com/arttor/helmify) does.
- Support the deprecated plugins. This option should be supported from go/v4 and kustomize/v2.
- Introduce support for Helm in addition to Kustomize, or replace Kustomize with Helm entirely, similar to the approach taken by Operator-SDK, thereby allowing users to utilize Helm Charts to build their Operators.
- Attend standard practices that deviate from the Helm Chart layout, definitions, or conventions to work around its limitations
- Provide values and options to allow users to perform customizations on the Helm Chart for features that are not used by default in the Operator layout

IMO: We need to add one more here.
Otherwise, we will end up in the business of discussing how company/team A does it versus how company/team B does it, when we cannot accommodate everybody. So, we need to stick with Helm definitions and the official docs to generate the Helm Chart.
Agree.
```yaml
## The environment variables of the controller container.
## More info: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
env: []
```
Do we have this in our default Kubebuilder Operator layout or any plugin?
If not, then I think we need to remove it.
If yes, we need to add a comment here to justify why/when we need/use this one.
Our goal is just to package the Operator in a Helm Chart.
Any extra things or customizations that might be done on top are out of our scope.
Do we have our default Kubebuilder Operator layout or any plugin?
You mean that here we should only take inputs we already have in kustomize, for example? Inputs that kubebuilder currently already knows how to work with.
I think that makes sense, just clarifying.
```yaml
## Configure the scheduling rules.
## More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

## More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}

## More info: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration
tolerations: []
```
Same here. The values must contain exactly what we have in the default scaffold.
We need to stick to that; otherwise it will be very hard for us to maintain and make it work.
**Via a new command (Alternative Option)**

By running the following command, the plugin will generate a Helm chart from the specified kustomize directory and output it to the directory given by the `--output` flag.

```shell
kubebuilder alpha generate-helm-chart --from=<path> --output=<path>
```
We need to move this one to the Alternatives section.
Also, add the pros and cons.
## Open Questions [optional]

TBF
IMO: We need to add this here: https://github.com/kubernetes-sigs/kubebuilder/pull/3632/files#r1420716711
### Goals

- Allow Kubebuilder users to distribute their projects using Helm easily.
- Make the best effort to preserve any customizations made to the Helm Charts by the users.
- Make the best effort to preserve any customizations made to the Helm Charts by the users - which means we will skip syncs in the values.yaml.
- Stick with Helm layout definitions and externalize only the relevant options into values to distribute the default scaffold done by Kubebuilder. We should follow https://helm.sh/docs/chart_best_practices
I think we need to add something like that here.
I think skipping syncs in `values.yaml` also kind of implies skipping at least part of the syncs in the `templates/*.yaml` files, or using markers or some other mechanism so we clearly mark which parts Kubebuilder can and can't change.

My point is that if the user has custom variables in `values.yaml`, it necessarily means they also changed something in `templates/` to get that value, optionally transform it, and render it in the resulting manifest.

Not sure if it's worth mentioning in the proposal, but just wanted to make sure we're on the same page here.
**Example**

The name of the image would probably be sourced from the Makefile/env, and inserted directly into a `templates/deployment.yaml`:

```yaml
kind: Deployment
# (...)
spec:
  template:
    spec:
      containers:
        - name: not-important-for-the-example
          image: foo/bar:latest
```
Let's assume our values file doesn't provide a way for users to pass in env vars. The user would do something like this in the template:
```yaml
kind: Deployment
# (...)
spec:
  template:
    spec:
      containers:
        - name: not-important-for-the-example
          image: foo/bar:latest
          {{- if .Values.env }}
          env:
            {{- range $k, $v := .Values.env }}
            - name: {{ $k }}
              value: {{ $v }}
            {{- end }}
          {{- end }}
```
When syncing the template again, we can't remove the user customization (or this gets pretty useless pretty quickly -- if I have to write my own tooling to make any kind of customization to run on top of kubebuilder, why don't I just write all of this tooling without a Helm Chart plugin in the first place?).
On the other hand, we must make sure the image is up-to-date.
What I mentioned about markers was something like (just an example, not an actual suggestion for how the marker should be named, placed, etc):
```yaml
kind: Deployment
# (...)
spec:
  template:
    spec:
      containers:
        - name: not-important-for-the-example
          {{/* +kubebuilder:scaffolding:image */}}
          image: foo/bar:latest
          {{- if .Values.env }}
          env:
            {{- range $k, $v := .Values.env }}
            - name: {{ $k }}
              value: {{ $v }}
            {{- end }}
          {{- end }}
```
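Under a marker scheme like that, the sync step could rewrite only the scaffold-owned line and leave everything else alone. A rough sketch of the mechanism (the marker name and the `sed`-based rewrite are assumptions for illustration, not the proposed implementation):

```shell
# Sketch only: marker-driven sync idea. The marker name and the sed-based
# rewrite are assumptions, not the proposed implementation.
cat > deployment.yaml <<'EOF'
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: manager
          {{/* +kubebuilder:scaffolding:image */}}
          image: foo/bar:0.1.0
          # user-added customization below must survive the sync
          imagePullPolicy: IfNotPresent
EOF

# Update only the line that immediately follows the marker:
sed -i.bak '/+kubebuilder:scaffolding:image/{n;s|image: .*|image: foo/bar:0.2.0|;}' deployment.yaml
```

After the rewrite, the image line is up-to-date while the user's `imagePullPolicy` customization is untouched.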
What I think would be the next steps here:
- a) Ensure that we can address all comments in the proposal
- b) Ensure that we have 1 commit (squash all)
- c) It would be great if we could do a POC, for example a PR against Kubebuilder adding the deploy/helm-chart generated for testdata/project-v4, just to show how that would end up
- d) Then, we can share it in the Slack channel and mailing list asking for further POVs
This is great! While I worked as an LFX mentee under Cilium, I developed an operator using Kubebuilder, and I had to manually write Helm charts from the Kustomize files. This plugin will really be helpful! Since it is under GSoC this tenure, I would love to work on it with Kubebuilder 🚀 Also, thanks to the team for such an open discussion; the file contains almost all the context I need to start formulating a solution.
Cool project to work with. I would really like to be part of this project.
Drafting the proposal would be fun; the project sounds interesting!!
Looking forward to this 😃
As titled.
This document is currently in RFC status, and corrections and additions are welcomed.