
APB's are unable to perform actions that require cluster scoped privileges #715

Closed
rthallisey opened this issue Jan 31, 2018 · 15 comments

Labels: feature, lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.)

@rthallisey (Contributor)

Feature:

Currently the broker does not allow an APB to have access to anything outside its own namespace (i.e. to cluster-scoped resources). The goal of this feature is to let developers give their APBs full, unrestricted access to the cluster for testing purposes.

When auto_escalate is turned on, we no longer audit whether a user has permission to launch an APB with the sandbox_role; the user gets full control over a namespace. I'm proposing that when auto_escalate is on, we give the user full control over the cluster. auto_escalate is not a production setting, so this wouldn't affect security anywhere except developer environments, where security is not a priority.

Another option would be to add an additional flag such as cluster_access, which would require turning on both cluster_access and auto_escalate to give APBs full access to the cluster.

Non-goal:

This feature is not meant to address the problem for production environments. I think production environments require the admin to have more oversight over which cluster-level APBs are allowed to run. That will be addressed in the future.

@rthallisey rthallisey added the feature and 3.10 | release-1.2 (Kubernetes 1.10 | Openshift 3.10 | Broker release-1.2) labels Jan 31, 2018
@rthallisey (Contributor, Author)

@jmrodri @eriknelson @karmab @shawn-hurley What do folks think of that approach?

@shawn-hurley (Contributor) commented Jan 31, 2018

> I'm proposing that when auto_escalate is on, that we give a user full control over the cluster. Auto_escalate is not a production setting, so it won't affect security for anything other than developer environments where security is not a priority.

This is not the case. auto_escalate is also used in what we call a limited tenant environment, where an admin wants to escalate the permissions of the user to the sandbox_role.

What do you think of adding logic to the broker so that, when the sandbox_role is cluster_admin, we attempt to create a ClusterRoleBinding for the transient service account?

Alternatively, we could add dual configs here to make it very clear that an admin is granting this sandbox_role cluster-wide: a config for the type of role binding (the options would be clusterrole or role, defaulting to role).
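
The first suggestion would amount to the broker creating something like the following binding for the transient service account. This is a hypothetical sketch; the metadata name, SA name, and namespace are illustrative, not the broker's actual naming scheme:

```yaml
# Hypothetical ClusterRoleBinding the broker could create when the
# configured sandbox_role is cluster_admin. All names below are
# illustrative placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apb-sandbox-cluster-access     # illustrative name
subjects:
- kind: ServiceAccount
  name: apb-transient-sa               # transient SA created for the APB run
  namespace: apb-sandbox-namespace     # the APB's sandbox namespace
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

Because a ClusterRoleBinding has no namespace, this grant applies everywhere in the cluster, which is exactly the escalation being debated here.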

@enj commented Jan 31, 2018

As @shawn-hurley noted, auto_escalate may be used in some production environments (Kubernetes being an example).

If you want this as a way to make development easier, why not just have an extra step in your dev setup that adds the cluster role binding? I do not see why it needs to be turned into a first-class option.

@rthallisey (Contributor, Author)

> This is not the case. auto_escalate is also used in what we call a limited tenant environment, where an admin wants to escalate the permissions of the user to the sandbox_role.

Thanks for the correction.

Maybe we could use the dev_broker option instead.

@rthallisey (Contributor, Author)

> Alternatively, we can add dual configs here to make it very clear that an admin is granting this sandbox_role cluster-wide by having a config for type of role binding (clusterrole or role I think would be the options and default to role).

We could, but I'm mainly trying to avoid adding another config option if possible.

@rthallisey (Contributor, Author)

To follow up on this, I think we're going to use Karim's workaround, which is to sign into the cluster from the APB:
https://github.com/karmab/kubevirt-apb/blob/master/roles/kubevirt-apb/tasks/provision.yml#L3-L9

Closing this issue. A proposal will follow at a later date to address this problem for production deployments.
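
The workaround in the linked playbook amounts to authenticating from inside the APB using credentials passed in as ordinary parameters, roughly along these lines. This is a sketch with illustrative variable names (cluster_url, admin_user, admin_password, target_project), not the original kubevirt-apb tasks:

```yaml
# Sketch of the login workaround: log in from inside the APB's
# Ansible playbook with escalated credentials supplied as APB
# parameters, then perform cluster-scoped actions as that user.
- name: Log in to the cluster with escalated credentials
  command: >
    oc login {{ cluster_url }}
    --username={{ admin_user }}
    --password={{ admin_password }}
    --insecure-skip-tls-verify=true

- name: Create a project outside the sandbox namespace
  command: oc new-project {{ target_project }}
```

This sidesteps the broker's RoleBinding entirely, which is why it works, and also why it subverts the security model discussed later in this thread.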

@tj13 commented Sep 20, 2018

> To follow up on this, I think we're going to use Karim's workaround which is to sign into the cluster from the APB.
> https://github.com/karmab/kubevirt-apb/blob/master/roles/kubevirt-apb/tasks/provision.yml#L3-L9
>
> Closing this issue. A proposal will follow at a later date to address this problem for production deployments.

@rthallisey the link is broken. Any tips on how to create a project in an APB?

@eriknelson (Contributor)

@tj13 When running an APB, the broker applies a RoleBinding of edit or admin (configurable in the broker's configmap) inside the target namespace (the namespace the APB is operating on). Notably, it's a RoleBinding, not a ClusterRoleBinding, which means the SA is not able to create namespaces or projects in the cluster. This is by design; as you can imagine, there are security concerns with granting APBs cluster-scoped privileges.

In the past, folks have worked around this by passing credentials through to the APB like any other parameter and logging in as that user as an initial task in the APB. I'd recommend some caution around this in a production case, though, since it deliberately subverts the security model.
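
For reference, the namespace-scoped grant described above looks roughly like this. The SA and namespace names are illustrative placeholders, not the broker's actual naming scheme:

```yaml
# What the broker effectively grants today: a namespaced RoleBinding
# to the ClusterRole named in the broker's sandbox_role config
# (edit or admin). The binding's effect is confined to its namespace,
# so the SA cannot create namespaces/projects cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: apb-sandbox-binding        # illustrative name
  namespace: target-namespace      # the namespace the APB operates on
subjects:
- kind: ServiceAccount
  name: apb-transient-sa           # transient SA for the APB run
  namespace: apb-sandbox-namespace
roleRef:
  kind: ClusterRole                # the referenced role is cluster-defined,
  name: edit                       # but the binding itself is namespaced
  apiGroup: rbac.authorization.k8s.io
```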

@eriknelson eriknelson changed the title Enable access to cluster roles when auto_escalate is true APB's are unable to perform actions that require cluster scoped privileges Sep 20, 2018
@eriknelson eriknelson removed the 3.10 | release-1.2 Kubernetes 1.10 | Openshift 3.10 | Broker release-1.2 label Sep 20, 2018
@eriknelson (Contributor)

I'm reopening this to continue the discussion around APBs and their access to cluster-scoped privileges.

@rthallisey (Contributor, Author)

@tj13 commented Sep 21, 2018

@rthallisey thanks

@tj13 commented Sep 21, 2018

> @tj13 When running an APB, the broker applies a RoleBinding of edit or admin (configurable in the broker's configmap) inside the target namespace (the namespace the APB is operating on). Notably, it's a RoleBinding, not a ClusterRoleBinding, which means the SA is not able to create namespaces or projects in the cluster. This is by design; as you can imagine, there are security concerns with granting APBs cluster-scoped privileges.
>
> In the past, folks have worked around this by passing credentials through to the APB like any other parameter and logging in as that user as an initial task in the APB. I'd recommend some caution around this in a production case, though, since it deliberately subverts the security model.

Yes, it's by design. The workaround solves the problem for now.

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 26, 2020
@jmrodri (Contributor) commented Sep 20, 2020

/close

@openshift-ci-robot

@jmrodri: Closing this issue.

In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

9 participants