This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

Adding --release-label flag to limit the operator to HelmReleases with a specific label #273

Closed
ekeih opened this issue Feb 6, 2020 · 3 comments
Labels: blocked · needs validation (In need of validation before further action) · enhancement (New feature or request)

Comments


ekeih commented Feb 6, 2020

Describe the feature

Allow running several instances of the operator in the same cluster by adding labels to HelmRelease objects that control which operator is responsible for which HelmRelease.

What would the new user story look like?

  1. A first instance of the operator is started.
  2. A second instance is started using the new --release-label=internal flag.
  3. The first instance manages all HelmRelease objects which do not have a fluxcd.io/release-label label.
  4. The second instance manages all HelmRelease objects with a fluxcd.io/release-label=internal label.
  5. It is possible to start more instances with different values for the --release-label flag.

Expected behavior

We have situations with multiple instances of the helm-operator in a cluster. Usually one is an instance we fully control and the other is deployed by a customer, so we have very limited control over the latter.
The user story described above would avoid any interference between the two instances.
(The exact names of the flag and label are obviously open for discussion.)

It shouldn't break existing setups

The behavior only changes when a HelmRelease has a fluxcd.io/release-label label. No existing objects should have this label, so introducing this feature should not break any existing setups.

Why --allow-namespace is not enough for this use case

The helm-operator already has an --allow-namespace flag, which limits the operator to a single namespace. With it we can restrict the internal instance to a namespace containing our internal HelmReleases. This is not ideal, because we would like to spread our internal HelmReleases across several namespaces.
The main issue, however, is that the customer may deploy their own instance without the --allow-namespace flag, and their instance would then also manage our internal HelmRelease objects.

Pull Request

Thanks for taking the time to read this feature request. We look forward to hearing your opinion and hope we can find a way to implement this.
If you consider this feature useful, we can probably implement it ourselves and open a PR for it. But we wanted to discuss the feature with you before starting to work on it. 🙂

@ekeih ekeih added blocked needs validation In need of validation before further action enhancement New feature or request labels Feb 6, 2020
@ekeih ekeih changed the title Adding --release-label flag to limit the operator to HelmReleases with a specific flag Adding --release-label flag to limit the operator to HelmReleases with a specific label Feb 6, 2020
Member

hiddeco commented Feb 12, 2020

Thanks for your enhancement request and for taking the time to explain your use-case.

In case you consider this feature useful, we can probably implement it ourselves and open a PR for it.

This would be much appreciated! I would like to hear a bit more to understand the use case better and to ensure it cannot be solved in another way. See the first question below; the others are suggestions on what the enhancement itself could look like.

But the main issue is that the customer may deploy its own customer instance without the --allow-namespace flag and then their instance will also manage our internal HelmRelease objects.

What is your relationship to the customer in terms of RBAC? Do you both have full control over the cluster, and are you thus equal to each other from Kubernetes' point of view?

A second instance is started using the new --release-label=internal flag.

My proposal would be to:

  1. Make this accept a slice of strings;
    this would be an alternative approach to:

    It is possible to start more instances with different values for the --release-label flag.

  2. Make the flag a bit more descriptive, maybe --watch-release-labels?

The first instance manages all HelmRelease objects which do not have a fluxcd.io/release-label label.

The label domain should be helm.fluxcd.io/release-label.

Author

ekeih commented Feb 17, 2020

@hiddeco Thanks for taking the time to look at this, and for your feedback. I am unavailable for 1–2 weeks but will get back to this topic afterwards. I just wanted to let you know, so it does not seem like I opened the issue and then ghosted it.

Member

kingdonb commented Sep 2, 2022

Sorry if your issue remains unresolved. The Helm Operator is in maintenance mode; we recommend that everybody upgrade to Flux v2 and the Helm Controller.

A new release of Helm Operator is out this week, 1.4.4.

We will continue to support Helm Operator in maintenance mode for an indefinite period of time, and eventually archive this repository.

Please be aware that Flux v2 has a vibrant and active developer community who are actively working through minor releases and delivering new features on the way to General Availability for Flux v2.

In the meantime, this repo will still be monitored, but support is basically limited to migration issues only. I will have to close many issues today without reading them all in detail because of time constraints. If your issue is very important, you are welcome to reopen it, but given the staleness of all issues at this point, a new report is more likely to be in order. Please open another issue in the appropriate Flux v2 repo if you have unresolved problems that prevent your migration.

Helm Operator releases will continue as possible for a limited time, as a courtesy for those who still cannot migrate yet, but they are strongly discouraged for ongoing production use: our strict adherence to semver backward-compatibility guarantees limits how far we can upgrade many dependencies without breaking compatibility. There are therefore likely known CVEs that cannot be resolved.

We recommend upgrading to the actively maintained Flux v2 as soon as possible.

I am going to go ahead and close every issue at once today.
Thanks for participating in Helm Operator and Flux! 💚 💙

@kingdonb kingdonb closed this as completed Sep 2, 2022

3 participants