This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

Add option to limit helm-op to a single namespace #1664

Merged
stefanprodan merged 7 commits into master from helm-op-ns on Jan 24, 2019

Conversation

stefanprodan
Member

  • add namespace restriction to Kubernetes informer and chart sync component
  • by default all namespaces are used

Fix #1406

@2opremio
Contributor

I think it would make sense to align the argument name to that of fluxd.

In #1471 @squaremo suggested using --kubernetes-allow-namespace

@squaremo
Member

Is there a reason for it being no restriction vs exactly one namespace, rather than no restriction vs some namespaces?

@stefanprodan
Member Author

@squaremo NewSharedInformerFactoryWithOptions takes a single namespace only; we could have many informers, but that would require greater changes to the operator logic. Anyway, most requests I've seen from our users are to be able to run one Flux/helm-op/Tiller per namespace. This PR makes that possible.
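
For illustration, here is a minimal sketch of that restriction using the standard client-go informer package; the operator's generated HelmRelease informer factory exposes an analogous WithNamespace option, and the function name, resync interval, and main wiring below are assumptions for illustration, not the PR's actual code:

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFactory scopes every informer built by the factory to a single
// namespace; an empty namespace keeps the default "all namespaces".
func newFactory(cfg *rest.Config, namespace string) (informers.SharedInformerFactory, error) {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	opts := []informers.SharedInformerOption{}
	if namespace != "" {
		opts = append(opts, informers.WithNamespace(namespace))
	}
	return informers.NewSharedInformerFactoryWithOptions(client, 30*time.Second, opts...), nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
	if err != nil {
		panic(err)
	}
	factory, err := newFactory(cfg, "dev") // "" would mean all namespaces
	if err != nil {
		panic(err)
	}
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
}
```

Because the factory accepts exactly one namespace option, supporting a list of namespaces would mean running one informer set per namespace, which is the "greater changes to the operator logic" mentioned above.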

@errordeveloper
Contributor

I also feel that it'd be a good idea to match what Flux does, i.e. a list of namespaces. However, Tiller can only be used in the context of a single namespace or the entire cluster, and I don't think making the operator support multiple Tillers in different namespaces would make much sense.

@@ -21,6 +21,8 @@ helmOperator:
repository: quay.io/weaveworks/helm-operator
tag: 0.5.3
pullPolicy: IfNotPresent
# Limit the operator scope to a single namespace
namespace:
Contributor

should this be namespace: ""?

Member Author

If the namespace is not set then the default from the pflag is applied. I've added the namespace to values.yaml to document the option, not to provide a default value for it.
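
A hedged sketch of that default handling with pflag; the flag name "namespace" and the help text are illustrative (the discussion below debates the final name), the point being that an empty default means "no restriction":

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

func main() {
	fs := pflag.NewFlagSet("helm-operator", pflag.ExitOnError)
	// An empty default means the operator watches HelmReleases in every
	// namespace unless the flag is explicitly set.
	namespace := fs.String("namespace", "",
		"if set, only HelmReleases in this namespace are processed")
	fs.Parse(os.Args[1:])

	if *namespace == "" {
		fmt.Println("watching all namespaces")
	} else {
		fmt.Printf("watching namespace %q\n", *namespace)
	}
}
```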

Contributor

@errordeveloper left a comment

LGTM overall. Once it's in master I'll be happy to test it in the project where we use it this way.

@stefanprodan
Member Author

@2opremio if I use the same flag as Flux, --kubernetes-allow-namespace, people will think helm-op accepts multiple namespaces.
@squaremo what's your opinion on this? Should I rename --namespace to --kubernetes-allow-namespace?

@stephenmoloney
Contributor

@stefanprodan

For the purpose of understanding the consequences of this PR better, let's take the repo https://github.com/stefanprodan/gitops-helm and the following scenarios:

1. I set the helm-op namespace to adm: what happens, what doesn't happen, or is everything the same as before?

2. I set the helm-op namespace to dev: what happens, what doesn't happen, or is everything the same as before?

@stefanprodan
Member Author

stefanprodan commented Jan 17, 2019

@stephenmoloney if you set the helm-op namespace to dev then only the HelmReleases in that namespace will be installed/upgraded; all the others will be ignored. Flux will still sync them all, but there will be no actual releases in Tiller except for the ones in that namespace. The same applies to the adm namespace.
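
To make that concrete, here is a hypothetical sketch of the filtering behaviour described above; the HelmRelease struct and shouldProcess helper are illustrative stand-ins, not the operator's actual types:

```go
package main

import "fmt"

// HelmRelease is a stand-in for the operator's custom resource.
type HelmRelease struct {
	Namespace string
	Name      string
}

// shouldProcess mirrors the behaviour described above: with no
// restriction every HelmRelease is acted on; otherwise only those in
// the allowed namespace are installed/upgraded.
func shouldProcess(hr HelmRelease, allowNamespace string) bool {
	return allowNamespace == "" || hr.Namespace == allowNamespace
}

func main() {
	releases := []HelmRelease{
		{Namespace: "dev", Name: "podinfo"},
		{Namespace: "adm", Name: "flux"},
	}
	for _, hr := range releases {
		// With the namespace restricted to dev, only dev/podinfo would
		// result in an actual Tiller release.
		fmt.Printf("%s/%s processed: %v\n", hr.Namespace, hr.Name, shouldProcess(hr, "dev"))
	}
}
```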

@stephenmoloney
Contributor

@stefanprodan ok, thanks.

So is the motivation for this PR to allow having one flux/helm operator per namespace?

Also, would it make sense to allow the option of a list of namespaces?

@stefanprodan
Member Author

@stephenmoloney the motivation is to be able to run one flux/helm-op/tiller per namespace and since Tiller can't target a list of namespaces it doesn't make sense for the helm-op to do so.

@2opremio
Contributor

2opremio commented Jan 17, 2019

However, Flux is soon going to support proper multi-namespace filtering (#1668), and it seems that multiple namespaces are (or were) in the plans for Tiller too: helm/helm#838 (comment)

@2opremio
Contributor

2opremio commented Jan 17, 2019

See helm/helm#2060 (comment) and helm/helm#2060 (comment) as well

@stephenmoloney
Contributor

> @stephenmoloney the motivation is to be able to run one flux/helm-op/tiller per namespace and since Tiller can't target a list of namespaces it doesn't make sense for the helm-op to do so.

@stefanprodan thanks.

@stefanprodan
Member Author

@2opremio Tiller is going away with Helm v3. Once v3 is out and Tiller is gone we can revisit the multi-namespace issue.

Member

@squaremo left a comment

> Tiller can only be used in the context of a single namespace or the entire cluster

Is the argument to Tiller also --namespace? If so, there's a fair argument for mirroring that. Otherwise I'd prefer --include-namespace or --allow-namespace.

Nitpicks in the code itself :-)

integrations/helm/chartsync/chartsync.go
integrations/helm/status/status.go
Member

@squaremo left a comment

An important improvement, thanks Stefan

@stefanprodan stefanprodan merged commit 86630a8 into master Jan 24, 2019
@stefanprodan stefanprodan deleted the helm-op-ns branch January 24, 2019 07:59
@kraney

kraney commented Mar 21, 2019

What about this use case: I want a "cluster owner" flux/op/helm that monitors all namespaces, and then a "namespace owner" flux/op/helm that watches a specific namespace. I don't see a practical way to make the cluster-level operator ignore that namespace and leave it to the helm that lives there.
