Adding --release-label flag to limit the operator to HelmReleases with a specific label #273
Comments
Thanks for your enhancement request and for taking the time to explain your use-case.
This would be much appreciated! I would like to hear a bit more to understand the use-case better and to make sure it cannot be solved in another way. See the first question below; the others are suggestions on what the enhancement itself could look like.
What is your relationship to the customer in terms of RBAC? Do you both have full control over the cluster, and are you thus equal to each other as far as Kubernetes is concerned?
My proposal would be to:
The annotation domain should be
@hiddeco Thanks for taking the time to have a look at this and for your feedback. I am unavailable for 1-2 weeks but I will get back to this topic afterwards. Just wanted to let you know, so it does not seem like I opened the issue and then just ghosted it.
Sorry if your issue remains unresolved. The Helm Operator is in maintenance mode; we recommend everybody upgrades to Flux v2 and Helm Controller. A new release of Helm Operator is out this week, 1.4.4.

We will continue to support Helm Operator in maintenance mode for an indefinite period of time, and eventually archive this repository. Please be aware that Flux v2 has a vibrant and active developer community who are actively working through minor releases and delivering new features on the way to General Availability for Flux v2.

In the meantime, this repo will still be monitored, but support is basically limited to migration issues only. I will have to close many issues today without reading them all in detail because of time constraints. If your issue is very important, you are welcome to reopen it, but due to the staleness of all issues at this point a new report is more likely to be in order. Please open another issue in the appropriate Flux v2 repo if you have unresolved problems that prevent your migration.

Helm Operator releases will continue as possible for a limited time, as a courtesy for those who cannot migrate yet, but they are strongly not recommended for ongoing production use: our strict adherence to semver backward-compatibility guarantees limits how far many dependencies can be upgraded without breaking compatibility, so there are likely known CVEs that cannot be resolved. We recommend upgrading to Flux v2, which is actively maintained, as soon as possible. I am going to go ahead and close every issue at once today.
Describe the feature
Allow running several instances of the operator in the same cluster by adding labels to the HelmRelease objects that control which operator instance is responsible for which HelmRelease.
What would the new user story look like?
We start our internal operator instance with the `--release-label=internal` flag and add the `fluxcd.io/release-label` label to all of our internal HelmRelease objects. The internal instance then picks up the HelmReleases carrying the `fluxcd.io/release-label=internal` label, while a HelmRelease labelled with a different value is ignored by any instance whose `--release-label` flag does not match.
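To make the proposal a bit more concrete, here is a minimal sketch in Go of the selection rule this user story seems to imply. It is not taken from the helm-operator code base, and the label key and flag name are simply the ones suggested above (which, as noted further down, are open for discussion). Whether an instance started with `--release-label` should also skip unlabelled HelmReleases is one of the details that would need discussing.

```go
package main

import "fmt"

// releaseLabelKey is the label key proposed in this issue; the exact name is
// still open for discussion.
const releaseLabelKey = "fluxcd.io/release-label"

// shouldManage sketches one possible reading of the proposal: a HelmRelease
// without the label is managed by every instance (today's behaviour), while a
// labelled HelmRelease is only managed by the instance whose --release-label
// flag value matches the label value.
func shouldManage(releaseLabelFlag string, labels map[string]string) bool {
	value, labelled := labels[releaseLabelKey]
	if !labelled {
		// No fluxcd.io/release-label on the object: behaviour is unchanged.
		return true
	}
	return value == releaseLabelFlag
}

func main() {
	internal := map[string]string{releaseLabelKey: "internal"}
	unlabelled := map[string]string{}

	fmt.Println(shouldManage("internal", internal))   // true: label matches the flag
	fmt.Println(shouldManage("", internal))           // false: an instance without the flag skips it
	fmt.Println(shouldManage("internal", unlabelled)) // true: unlabelled objects are untouched by the change
}
```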
Expected behavior
We have situations with multiple instances of the helm-operator in a cluster. Usually one instance is under our full control and the other is deployed by a customer, so we have very limited control over the latter.
The user story described above would avoid any interference between the two instances.
(The exact names of the flag and label are obviously open for discussion.)
It shouldn't break existing setups
The behavior only changes when a HelmRelease has a `fluxcd.io/release-label` label. No existing objects should have this label, so the introduction of this new feature should not break any existing setups.
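Reusing the `shouldManage` sketch above, this compatibility claim can be phrased as a small, hypothetical test (it would sit in a `_test.go` file in the same package as the sketch): an unlabelled HelmRelease keeps being managed by every instance, whatever its `--release-label` flag is set to.

```go
package main

import "testing"

// TestUnlabelledReleasesAreUnaffected checks the compatibility property named
// above against the shouldManage sketch: objects without the new label are
// managed by every instance, so existing setups keep working.
func TestUnlabelledReleasesAreUnaffected(t *testing.T) {
	unlabelled := map[string]string{}
	for _, flagValue := range []string{"", "internal", "customer"} {
		if !shouldManage(flagValue, unlabelled) {
			t.Errorf("instance with --release-label=%q skipped an unlabelled HelmRelease", flagValue)
		}
	}
}
```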
Why `--allow-namespace` is not enough for this use case

The helm-operator already has a `--allow-namespace` flag which allows us to limit the operator to a single namespace. With this flag we can limit the internal instance to a namespace containing our internal HelmReleases. This is not optimal, because we would like to put our internal HelmReleases in several namespaces.

But the main issue is that the customer may deploy their own customer instance without the `--allow-namespace` flag, and then their instance will also manage our internal HelmRelease objects.
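For contrast, here is the same kind of simplified sketch for the existing `--allow-namespace` behaviour (again an illustration, not the operator's actual code): the filtering decision lives entirely in the instance's own flags, so an instance started without the flag matches every namespace, and there is nothing the HelmRelease itself can declare to opt out.

```go
package main

// allowedByNamespace is a simplified illustration of namespace-based
// filtering: with no --allow-namespace flag set, an instance watches all
// namespaces, including the ones holding our internal HelmReleases.
func allowedByNamespace(allowNamespaceFlag, namespace string) bool {
	if allowNamespaceFlag == "" {
		return true // no flag: every namespace is fair game
	}
	return namespace == allowNamespaceFlag
}
```

With the proposed label, the ownership information sits on the HelmRelease object instead, so an instance that is not started with a matching `--release-label` value would leave the labelled internal releases alone (assuming both instances run a version that implements the feature).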
Pull Request
Thanks for taking the time to read this feature request. We are looking forward to hearing your opinion about it and hope we can find a way to implement this. 🙂
In case you consider this feature useful, we can probably implement it ourselves and open a PR for it. But we wanted to talk to you about the feature before we start working on it.