Adding --release-label flag to limit the operator to HelmReleases with a specific label #273
Describe the feature
Allow running several instances of the operator in the same cluster by adding a label to each HelmRelease object that controls which operator instance is responsible for it.
What would the new user story look like?
We have situations with multiple instances of the helm-operator in a cluster. Usually there is one instance we have full control of and one deployed by a customer, over which we have very limited control.
The feature described above would avoid any interference between the two instances.
(The exact names of the flag and label are obviously open for discussion.)
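To make the idea a bit more concrete, here is a rough sketch of how it could look. The flag name `--release-label`, the label key `fluxcd.io/release-label`, and the chart details are just the placeholders used in this issue, not a final proposal:

```yaml
# Hypothetical sketch only - flag and label names are placeholders.
#
# Our "internal" operator instance would be started with the proposed flag:
#   helm-operator --release-label=internal
#
# A HelmRelease carrying the matching label, so that only the internal
# instance is responsible for it:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: internal-app
  namespace: team-a
  labels:
    fluxcd.io/release-label: internal
spec:
  chart:
    repository: https://charts.example.com/
    name: internal-app
    version: 1.0.0
```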
It shouldn't break existing setups
The behavior only changes when a HelmRelease has a `fluxcd.io/release-label` label. No existing objects should have this label, so the introduction of this new feature should not break any existing setups.
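As a minimal sketch of how the check inside the operator could look (the flag value and label key are the placeholder names from above, and the exact semantics for instances started without the flag would still need to be agreed on):

```go
package operator

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// releaseLabelKey is the hypothetical label key from this proposal.
const releaseLabelKey = "fluxcd.io/release-label"

// shouldReconcile sketches how an operator instance could decide whether it
// is responsible for a given HelmRelease, based on the value passed via the
// proposed --release-label flag.
func shouldReconcile(hr metav1.Object, releaseLabel string) bool {
	// Operator started without the flag: keep today's behaviour and
	// reconcile everything it is allowed to see.
	if releaseLabel == "" {
		return true
	}
	// Operator started with the flag: only reconcile HelmReleases whose
	// fluxcd.io/release-label label matches the flag value.
	return hr.GetLabels()[releaseLabelKey] == releaseLabel
}
```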
`--allow-namespace` is not enough for this use case
The helm-operator already has a `--allow-namespace` flag, which allows us to limit the operator to a single namespace. With this flag we can limit the internal instance to a namespace containing our internal HelmReleases. This is not optimal because we would like to put our internal HelmReleases in several namespaces.
But the main issue is that the customer may deploy their own customer instance without the `--allow-namespace` flag, and then their instance will also manage our internal HelmRelease objects.
Thanks for taking the time to read this feature request. We look forward to hearing your opinion on it and hope we can find a way to implement this.
If you consider this feature useful, we can probably implement it ourselves and open a PR for it, but we wanted to discuss it with you before we start working on it.