
Flux-Applier cluster-admin role binding doesn't work with AKS and Azure RBAC #3182

Closed
joshuadmatthews opened this issue Oct 7, 2022 · 4 comments

Describe the bug

I'm not sure if this is a Flux issue or an AKS Azure RBAC issue

When I try to install a Helm chart through Flux, I get the following error:

reconciliation failed: failed to get last release revision: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:default:flux-applier" cannot list resource "secrets" in API group "" in the namespace "default": Azure does not have opinion for this user.

It seems that Azure RBAC is denying the operation because it doesn't recognize flux-applier, but the Azure RBAC documentation states that for unrecognized identities such as a Kubernetes ServiceAccount it should fall back to Kubernetes RBAC:

If the identity making the request exists in Azure AD, Azure will team with Kubernetes RBAC to authorize the request. If the identity exists outside of Azure AD (i.e., a Kubernetes service account), authorization will defer to the normal Kubernetes RBAC.

https://learn.microsoft.com/en-us/azure/aks/concepts-identity

The other part that is confusing to me is that the error says it is using "default:flux-applier", yet when I check the service accounts in my cluster there is no flux-applier in the default namespace; it only exists in the flux-system namespace. Is Flux generating service accounts on the fly to perform operations in namespaces outside flux-system?

It's hard to tell whether Flux is doing something strange here or whether Azure RBAC isn't working as advertised. I'm leaning towards the issue being Flux, as I was able to work around it by manually assigning the cluster-admin role to the "ghost" accounts that don't exist:

kubectl create clusterrolebinding flux-nginx-cluster-admin --clusterrole=cluster-admin --serviceaccount=nginx:flux-applier

This command succeeds even though there is no flux-applier service account in the nginx namespace, and once this ClusterRoleBinding is in place Flux is able to do its thing.
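
For reference, a minimal sketch of the equivalent ClusterRoleBinding manifest for the command above. Kubernetes RBAC does not validate subjects, which is why the binding can be created before the ServiceAccount exists:

# Equivalent to the kubectl create clusterrolebinding command above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flux-nginx-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: flux-applier
    namespace: nginx    # the ServiceAccount does not need to exist when this is applied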

Any information on what is going on behind the scenes here with these ServiceAccounts would be appreciated.

Steps to reproduce

  1. Spin up AKS with the Flux add-on and enableAzureRBAC=true
  2. Try to install the ingress-nginx Helm chart through Flux (a minimal sketch of the manifests involved is shown below)
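
A minimal sketch of the kind of HelmRepository/HelmRelease pair meant in step 2; the namespace, chart version, and intervals are assumptions for illustration, not the exact manifests from this cluster:

# Illustrative manifests only; values are assumed.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 1h
  url: https://kubernetes.github.io/ingress-nginx
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: default    # the namespace named in the error above
spec:
  interval: 10m
  chart:
    spec:
      chart: ingress-nginx
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: flux-system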

Expected behavior

Installation should succeed

Screenshots and recordings

No response

OS / Distro

Linux Ubuntu

Flux version

N/A

Flux check

My cluster is private, so I don't have an easy way to run the Flux CLI against it. It should be the latest version of Flux, and I believe everything is working since my manifests are being applied; it's just the Helm release that is failing.

Git provider

GitHub Enterprise

Container Registry provider

ACR

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@sudivate

@joshuadmatthews For the Flux extension on AKS, multi-tenancy is enabled to ensure security by default in your clusters. Please ensure the Kind:HelmRelease and Kind:HelmRepository are in the same namespace where the fluxconfigs are created with az k8s-configuration flux create. If you don't need multi-tenancy, you can also update the Flux extension as mentioned here.

@joshuadmatthews (Author) commented Oct 11, 2022

Will changing the namespace of a Kustomization be equivalent to this? I create my initial Flux configuration through Bicep, and then through that initial Kustomization I deploy additional kind: Kustomization objects for my manifests (using the existing Git repo created for the first one) so that I can use Flux substitutions, which can't be deployed through Bicep. So I never actually use "az k8s-configuration flux create".

It will be difficult to set the initial fluxConfiguration to the nginx namespace because that namespace doesn't exist until it is created through that same fluxConfiguration.
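
For context, a sketch of the kind of child Kustomization described above, committed to the Git repository that the Bicep-created fluxConfiguration points at; the names, path, and substitution keys are hypothetical:

# Hypothetical child Kustomization; values below are placeholders.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: nginx                  # hypothetical name
  namespace: flux-system       # assumed to match the namespace of the initial fluxConfiguration
spec:
  interval: 10m
  prune: true
  path: ./manifests/nginx      # hypothetical path in the existing repo
  sourceRef:
    kind: GitRepository
    name: cluster-config       # assumed name of the GitRepository created by the fluxConfiguration
  postBuild:
    substitute:
      cluster_name: my-aks-cluster   # example Flux variable substitution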

@joshuadmatthews (Author) commented Oct 11, 2022

Or I suppose I can follow the article you posted and deploy nginx into the cluster-config namespace.

It seems a bit restrictive, though, if I now have to deploy all Helm charts to cluster-config simply because my initial fluxConfiguration must be in that namespace.

@sudivate

Will changing the namespace of a Kustomization be equivalent to this? I create my initial Flux configuration through Bicep, and then through that initial Kustomization I deploy additional kind: Kustomization objects for my manifests (using the existing Git repo created for the first one) so that I can use Flux substitutions, which can't be deployed through Bicep. So I never actually use "az k8s-configuration flux create".

It will be difficult to set the initial fluxConfiguration to the nginx namespace because that namespace doesn't exist until it is created through that same fluxConfiguration.

Yes, deploy all Flux objects into the same namespace as the fluxConfiguration. Multi-tenancy is enabled by default; if you would like to disable it through Bicep, set the extension's configurationSettings property to:
"configurationSettings": { "multiTenancy.enforce": "false" }
