
RBAC should include separate permissions for deleting k8s resources #3593

Closed
jutley opened this issue May 15, 2020 · 19 comments · Fixed by #18124
Labels
component:rbac (Issues related to Openshift and Rancher) · enhancement (New feature or request) · security (Security related) · type:usability (Enhancement of an existing feature)

Comments

jutley (Contributor) commented May 15, 2020

Summary

I'd like to be able to give delete permissions that are more granular than the application level. Ideally, I'd like to be able to enable them for specific resource types, such as Pods, or any other resource that is owned by a resource ArgoCD created.

Motivation

Sometimes I find that I would like to delete Pods in our clusters because they are misbehaving. ArgoCD allows for this, but requires that the user has the permission applications, delete, <project>/<application>.

That same permission allows the user to delete the entire application, which can optionally cascade to all the dependent resources.

Proposal

I'd like to see support for permissions that are defined like this:

p, role:staging-db-admins, applications, delete, staging-db-admins/*/apps/Deployment, allow
p, role:staging-db-admins, applications, delete, staging-db-admins/*/Pods, allow
jutley added the enhancement label on May 15, 2020
alexmt added the type:usability and component:rbac labels on May 15, 2020
jutley (Contributor, Author) commented Jul 31, 2020

Someone posted (and quickly deleted) a comment suggesting that this can be done via RBAC on the Kubernetes cluster.

This is a viable workaround that I hadn't thought of. I think it would look something like this:

  • Edit the argocd-server ClusterRole so that it does not have universal permissions to delete resources
  • Edit the argocd-server ClusterRole so that it does have permissions to delete specific resources (such as Pods)
  • Update AppProject permissions to allow the delete action.

Then, ArgoCD will allow the authorized users to delete resources, but those requests may be denied by the kube-apiserver.

It's not the cleanest option, and I would like to see better support in ArgoCD directly. However, for anyone else following or finding this issue, this may be a workable approach.
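
For anyone trying this, here is a minimal sketch of the restricted ClusterRole from the first two steps, expressed with client-go types. The role name, the read-only rule, and the choice of Pods as the only deletable kind are assumptions to adapt to your installation:

```go
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A hypothetical replacement ClusterRole for argocd-server: full read
	// access for syncing and diffing, but delete only on Pods. Deletes of
	// any other kind are then rejected by the kube-apiserver even when
	// Argo CD RBAC allows the delete action.
	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "argocd-server-restricted"},
		Rules: []rbacv1.PolicyRule{
			{APIGroups: []string{"*"}, Resources: []string{"*"}, Verbs: []string{"get", "list", "watch"}},
			{APIGroups: []string{""}, Resources: []string{"pods"}, Verbs: []string{"delete"}},
		},
	}
	if _, err := client.RbacV1().ClusterRoles().Create(context.TODO(), role, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```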

mateustanaka commented:

Hi @jutley, yes, I posted the suggestion to configure RBAC on the cluster side to control whether users can delete pods. I suggested it because I'm using Argo integrated with Azure AD for login, so I thought I could simply configure RBAC for all of the users/groups (AKS and Azure AD).

But I still don't know whether Argo uses my Azure AD account to perform requests to the kube-apiserver (I don't think so). That's why I deleted the post: I wanted to think it through a bit more.

AmitBenAmi commented:

For us, allowing developers to have delete permissions on an application raises the risk of whole-application deletion, when all we want is to grant permission to delete everything inside the application except the application itself.

nabadger commented:

At this point, would it not be better to just base the entire ArgoCD RBAC impl. directly on the native K8s RBAC system?

It seems like we will just end up duplicating a lot of permission settings between the two.

I imagine this would be a huge rework mind... :)

mmckane commented Jun 4, 2021

Is this something that could be handled by a custom resource action? It seems silly that this isn't possible from an RBAC perspective. Is there just some kind of syntax we are missing? Could @alexmt or one of the other maintainers weigh in?

abangser commented:

I want to check in on this, as I am looking for custom RBAC for rollback to avoid granting the update permission, which is very broad.

This seems like the place where the bigger question of how to handle more specific RBAC requests is being discussed, and I don't want to pile on with more enhancement requests unnecessarily.

Thanks!

IuryAlves commented:

Any updates on this?

crenshaw-dev added the security label on May 24, 2022
scalen (Contributor) commented Jul 11, 2022

In terms of deferring to k8s RBAC, could we have a new kind of rule, or an addition to the group-defining g rules, that maps groups to k8s users/groups? Behind the scenes, this would let manual operations be conducted using the --as and --as-group kubectl flags.

That would allow ArgoCD RBAC to remain at the Application level, keep the system permissions unchanged, but limit user capabilities regarding resources within the Applications.

crenshaw-dev (Collaborator) commented:

@scalen when using --as or --as-group, what kind of auth do you need? Do you have to pass the user's token, or can you impersonate through some other mechanism?

scalen (Contributor) commented Jul 11, 2022

@crenshaw-dev Nope, it is all done through k8s RBAC, so admins of each ArgoCD system would need to allow the ArgoCD service account to impersonate users and groups (and can limit which users and groups it can impersonate), but no authentication as the impersonated user is needed at any point.

This should work smoothly for mapping ArgoCD groups to k8s groups, where membership of each can be managed in the corresponding systems without changing the group-level RBAC of either or the mapping between the two.

Doing it at a user level would also work, but would add a new place to configure new users (k8s RBAC allowing ArgoCD to impersonate specific users 🤷🏼).


I was trying to put together a PR to add a column/option to the g RBAC lines for this, but couldn't find where those are handled... all the rbac handling I could find was about the p policies 🙁 Hopefully someone with a better working knowledge of the codebase can find time! 🤞🏼

scalen (Contributor) commented Aug 12, 2022

I was looking into this again recently, and found that it can be done simply by setting rest.Config{Impersonate: rest.ImpersonationConfig{UserName: "<username>", Groups: []string{"<groupname1>", ...}}}. We could specify this in ArgoCD RBAC as a new line type:

i, <user/group/role>, <type>, <k8s name>

where i stands for "impersonate"; <type> is one of user or group; and <k8s name> is the name of the user or group. This line can be specified once per <user/group/role> with type user, or multiple times with type group.
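
For reference, a minimal sketch of what that impersonation looks like in client-go, assuming the i lines have already been resolved to a k8s user name and group list (the helper name and the identity names are hypothetical; rest.ImpersonationConfig is the actual client-go type):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// impersonatedClient returns a clientset whose requests are executed as the
// k8s user/groups that the i rules map an ArgoCD role to. The kube-apiserver
// must grant the ArgoCD service account the "impersonate" verb on those
// users and groups.
func impersonatedClient(base *rest.Config, user string, groups []string) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base) // copy so the shared config is not mutated
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: user,   // from: i, role:developer, user, <k8s name>
		Groups:   groups, // from: i, role:developer, group, <k8s name>
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	base, err := rest.InClusterConfig() // assumes running in-cluster
	if err != nil {
		panic(err)
	}
	// Example identities; Kubernetes expects a user to be named whenever
	// groups are impersonated.
	if _, err := impersonatedClient(base, "argocd-deferral", []string{"pod-deleters"}); err != nil {
		panic(err)
	}
}
```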

I would then argue that impersonation should only be used where the ArgoCD RBAC is not explicit. I would also suggest an additional keyword alongside allow and deny: defer, which explicitly returns the given action on the given object to an implicit state; this would be useful if you wanted to explicitly deny deletion of resources in all apps, except one specific app where you wanted to rely on k8s RBAC.

For example, take @AmitBenAmi's scenario above:

> For us, allowing developers to have delete permissions on an application raises the risk of whole-application deletion, when all we want is to grant permission to delete everything inside the application except the application itself.

In this situation, you would allow the developer role to impersonate a k8s user or group(s) with RBAC permissions to delete all the necessary resource types, and make sure that the delete action on that application defers to k8s RBAC:

p, role:developer, applications, delete, */*, deny
p, role:developer, applications, delete, <appproject>/<app>, defer
i, role:developer, group, pod-deleters
i, role:developer, group, config-deleters
i, role:developer, group, deployment-deleters
...

Alternatively, the deny rule may be omitted to allow the developers to delete any k8s-permitted resource from any app; in this case the defer rule may also be omitted, as it is redundant.


Implementation-wise, I see two options:

  1. When enforcing RBAC in ArgoCD, the specific k8s object being targeted would also be specified. The enforcer then checks for applicable allow/deny/defer rules, as normal; in the case where a defer rule (or no rule) applies, and at least one impersonate rule applies, the enforcer performs a kubectl auth can-i style check, constructed from the combination of the target object and the RBAC action requested, with a rest.Config that includes the applicable impersonation(s). All other behaviour remains unchanged.
  2. When enforcing RBAC in ArgoCD, the enforcer checks for applicable allow/deny/defer rules, as normal; in the case where a defer rule (or no rule) applies, and at least one impersonate rule applies, the enforcer returns some deferral config (a rest.ImpersonationConfig, which would have to be an additional return value alongside the nilable error). Each call site then has to decide whether a deferral is relevant; if so, normal processing would continue, but any rest.Config would have to be created with the deferral config provided by the enforcer.

I feel like option 2 is more intrusive, but also more flexible, because you end up attempting the exact action you want to take and letting k8s RBAC decide. Option 1, on the other hand, only involves a change to the calls to the enforcer, but asks the enforcer to infer a lot more about what permissions the caller is asking of k8s, which is more brittle.
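
To make option 1 concrete, here is a hedged sketch of the deferral check, assuming client-go: an impersonated SelfSubjectAccessReview asks the kube-apiserver whether the mapped identity may delete a given Pod, which is essentially what kubectl auth can-i does with --as/--as-group. The canDeletePod helper and the identity names are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// canDeletePod asks the kube-apiserver, as the impersonated identity, whether
// deleting the named Pod would be allowed.
func canDeletePod(base *rest.Config, user string, groups []string, namespace, name string) (bool, error) {
	cfg := rest.CopyConfig(base)
	cfg.Impersonate = rest.ImpersonationConfig{UserName: user, Groups: groups}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	review := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "delete",
				Resource:  "pods",
				Name:      name,
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes running in-cluster
	if err != nil {
		panic(err)
	}
	allowed, err := canDeletePod(cfg, "argocd-deferral", []string{"pod-deleters"}, "staging", "db-0")
	fmt.Println(allowed, err)
}
```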

jessesuen (Member) commented:

This issue was brought up in the Argo security meeting. I think impersonation is the right way to achieve granular RBAC for the use case requested in this issue. So rather than introduce more syntax in the casbin RBAC rules (which I'm opposed to), I'd rather we implement #7689 and provide the option for users to use standard Kubernetes RBAC in lieu of Argo CD RBAC.

How impersonation should work alongside our existing Argo CD RBAC is still to be determined. My initial feeling is that admins will pick one or the other, and can do so at a project level, so we would not merge Argo CD RBAC and Kubernetes RBAC concepts. But this is a discussion that should happen in #7689.

SergeyLadutko (Contributor) commented:

How soon can this kind of functionality be included in a release?

crenshaw-dev (Collaborator) commented:

There is currently no open PR for adding impersonation and, afaik, no one is planning to start one.

crenshaw-dev (Collaborator) commented:

Over time, I've started to lean more towards the idea that this should be handled in Argo native RBAC rather than via impersonation. Actions are a possible workaround, but they only make sense if you want to grant delete on an individual kind rather than all kinds or some subset of kinds.

Raziel9572 commented:

Is there a specific action for this, or do we have to create a custom one (if that's possible)?

https://github.com/argoproj/argo-cd/tree/master/resource_customizations/apps/Deployment/actions
I see we can use pause, restart and resume. My goal is to use something like this to grant deletion of Deployments and DaemonSets for a specific project role.

vyrwu commented Dec 11, 2023

We allowed our users to create one-off Jobs from CronJobs, which is a feature of Argo. We can grant that permission using Argo native RBAC. We really wish that the inverse (deleting those one-off Jobs) could be handled via native RBAC as well, and not via impersonation (it feels hacky).

caleb-nintex commented:

💯 It would be very useful to have this delete functionality via ArgoCD

nbarrientos commented Jun 18, 2024

May I ask what release this is planned to be cherry-picked into? Thanks.

2.12.
