This repository has been archived by the owner on Oct 12, 2023. It is now read-only.

Fix rbac error listing Secrets at cluster scope. #38

Merged
merged 2 commits on Sep 17, 2020

Conversation

@dgoodwin (Contributor) commented on Sep 4, 2020:

The Secret Watch established in the ApplicationSet controller is global,
and RBAC for global Secrets was recently removed, resulting in a broken
controller.

To work past this and avoid re-introducing global Secret watch RBAC, the
Manager's cache is now limited to only the namespace in which it is
running. This implies that any attempt to use the client outside that
namespace will not work. However, this should be safe, as we work only
with Applications and Secrets, neither of which should exist beyond the
ArgoCD namespace where we're running.
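
For reference, controller-runtime exposes this through the manager's Options. A minimal sketch of the approach (the newManager helper, its arguments, and the scheme wiring are illustrative, not code from this PR):

```go
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
)

// newManager builds a Manager whose cache, and therefore every List/Watch
// issued through its client, is limited to a single namespace, so no
// cluster-scoped Secret RBAC is needed.
func newManager(scheme *runtime.Scheme, ns string) (ctrl.Manager, error) {
	return ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:    scheme,
		Namespace: ns, // e.g. the value of the NAMESPACE env var
	})
}
```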

@mgoodness (Contributor):
Cluster-wide Secret access was restored in #21, but I agree that this is a far better approach.

@dgoodwin (Contributor, Author) commented on Sep 4, 2020:

My mistake, I think I forgot to update my branch today. I will push an addition here to drop the global Secret RBAC, sound ok?

@dgoodwin (Contributor, Author) commented on Sep 4, 2020:

Actually, do we need a cluster-install at all? I believe Argo does that so it can sync resources to the local cluster, but we don't need to do that. I think a namespace install may be all we need. Does that seem right?

@mgoodness (Contributor):

> Actually, do we need a cluster-install at all? I believe Argo does that so it can sync resources to the local cluster, but we don't need to do that. I think a namespace install may be all we need. Does that seem right?

You may very well be correct. I'll look into this today.

main.go (Outdated)

@@ -21,6 +23,10 @@ var (
	setupLog = ctrl.Log.WithName("setup")
)

const (
	defaultNamespace = "argocd"
)
Member:

Would another namespace be better?

if len(ns) == 0 {
	ns = defaultNamespace
}
setupLog.Info("using argocd namespace", "namespace", ns)
Member:

Maybe "applicationset" would be better?

@dgoodwin (Contributor, Author):

If we're destined to be incorporated into argocd, it might be best to use the namespace where we expect to land. Although I'm not sure if argocd assumes anything about the namespace it's running in...

@dgoodwin (Contributor, Author):

Because we're unsure, why don't we remove the concept of a default namespace? If there is no NAMESPACE env var, we error out and ask you to set it, which would only affect developers running the binary locally on their systems. The pod definition will always ensure it's set to whatever the current namespace is.
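
What that might look like in main.go (a sketch only; the getNamespace helper name and log wording are assumptions, setupLog is the logger from the hunk above, and the os and errors packages are assumed imported):

```go
// getNamespace exits early when NAMESPACE is unset instead of falling back
// to a default, so the requirement is explicit for local development.
func getNamespace() string {
	ns := os.Getenv("NAMESPACE")
	if len(ns) == 0 {
		setupLog.Error(errors.New("NAMESPACE environment variable is not set"),
			"unable to determine namespace, set NAMESPACE and retry")
		os.Exit(1)
	}
	return ns
}
```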

@dgoodwin (Contributor, Author):

Pushed as a separate commit for your consideration.

@xianlubird xianlubird merged commit f117f9b into argoproj:master Sep 17, 2020