Support workspace namespace configuration in che-operator #15300
@metlos does this task include updating the default namespace strategy as well?
Do we want to include that, or do we require admins to pre-create namespaces? Is it in the scope of this task? cc: @metlos
https://github.com/eclipse/che-operator/blob/master/pkg/deploy/che_configmap.go#L88
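For context, this property is surfaced to the Che server as an environment-style entry in the ConfigMap the operator renders. A hedged sketch (the ConfigMap name and metadata here are assumptions; only the key follows the documented dots-to-underscores mapping of `che.infra.kubernetes.namespace.default`):

```yaml
# Illustrative fragment only: the ConfigMap name is an assumption.
# Che server properties map to env-style keys (dots -> underscores,
# uppercased), so che.infra.kubernetes.namespace.default becomes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: che   # assumed name of the che server ConfigMap
data:
  CHE_INFRA_KUBERNETES_NAMESPACE_DEFAULT: "<username>-che"   # example value
```

The `<username>` placeholder is expanded by the Che server per user at workspace start.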
yes. As for the missing permissions for the che service account: the logic in the operator is currently somewhat simplistic, it just creates both service accounts in the che namespace. I didn't know how the che operator handled the service accounts before, so this aspect didn't come to my mind during the initial formulation of this issue. My apologies.
yes, but I'm asking whether we should change the default value.
If che-server does it, I would remove it. But I think this will complicate it :/ https://github.com/eclipse/che-operator/blob/master/pkg/controller/che/che_controller.go#L415
Something like this?
However, I'm not sure whether it makes sense to have logic like this, or to simply always create it.
Well, currently the CR namespace is the default, unless OpenShift OAuth is active, at which point the default is different.
I think this serves the same purpose as
It is just my opinion, but I believe we should require the minimum permissions for a given configuration, so I would not simply always require the full set.
I've created a PR with the new properties, without touching defaults or service accounts: eclipse-che/che-operator#136. I will do defaults/service accounts in a second PR. Or even a new issue for that? It will definitely need some investigation and discussion.
There is an issue with service account handling from che-operator. First of all, it is questionable whether it is che-operator's responsibility to have any service account logic at all. We need to support these scenarios (without OAuth):

**current state of che-operator**

We always create 2 service accounts in the che namespace:

**requirements**

To support scenario 3,

**oauth**

It's a bit easier with OpenShift OAuth, because we're using the user account to create the namespace. Thus the user account has to have enough permissions to create the namespace, or have a pre-created namespace for its workspaces, and we can't do anything about it as we're not the owners. In the current state, this scenario is working ok. We only, again, create an unnecessary service account.

**next?**

I see a few options for insufficient permissions:

We probably should always use as few permissions as we need, but it is tricky to control this programmatically, because we don't know the needed permissions before che-operator deployment. Second issue with
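One way to grant the `che` service account the cluster-scoped rights discussed above can be sketched as plain RBAC manifests. This is an illustration only, not the manifests from the operator; the role names and the `eclipse-che` install namespace are assumptions:

```yaml
# Hedged sketch: grants the `che` SA cluster-wide rights to create
# workspace namespaces when OpenShift OAuth is not used.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: che-namespace-creator   # assumed name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: che-namespace-creator   # assumed name
subjects:
  - kind: ServiceAccount
    name: che
    namespace: eclipse-che      # assumed install namespace
roleRef:
  kind: ClusterRole
  name: che-namespace-creator
  apiGroup: rbac.authorization.k8s.io
```

Creating such a binding itself requires che-operator (or the admin applying it) to already hold these permissions, which is exactly the escalation question raised in this thread.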
cc @metlos
IMHO #15300 (comment) is a great summary of the requirements we have on the SAs. I think it also highlights the fact that our permission story is a little bit "blurred": we either delegate the infra permissions to OpenShift OAuth, or we require quite strong permissions ourselves and provide user separation using our own mechanisms. @davidfestal, @l0rd would you please comment on the solution you'd favor, given the broader picture of a Che installation conforming to Kubernetes/OpenShift best practices for operator and operand deployment, and the "Che permissions story": how do we want admins and end users to grant the che server the permissions it needs to function correctly?
I agree that #15300 (comment) is a great summary. I think we should agree that scenario n.1 should be avoided, as all users will have access to the namespace objects (secrets with SSH keys etc.). At the same time we cannot risk that, once Che is installed on a cluster, users fail to create workspaces because of insufficient privileges. Hence we should provide the following options to the admin that installs Che:
In scenario n.3 I would prefer if we could decouple namespace and workspace creation. In other words, I would create the namespace as soon as possible (when the Che user is provisioned) and delete it as late as possible (at user deprovisioning). That's because we would fail earlier if something goes wrong, and because it removes one responsibility from the workspace creation flow and makes it simpler. @alexeykazakov I would like to know your opinion.
For hosted Toolchain, scenario n.2 is what we need. Namespace provisioning there is similar to OSIO. There are a couple of important details though:
While supporting a custom suffix should be pretty easy, I guess, the second problem, a compliant username, is harder to solve. Basically you can't always map the same username to the same namespace using just the username itself. But we could rely on another source of that mapping, which should be provided by the operator/cluster admin; for example, a specific annotation.
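The collision problem described above can be illustrated with a small, hypothetical normalization helper. This is not che-server's actual implementation; the function name and rules are ours. The point it demonstrates: two different usernames can sanitize to the same DNS-1123 compliant namespace name, so the username alone cannot be the mapping key.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// sanitizeNamespace is a hypothetical helper (not from che-operator or
// che-server). It lowercases the username and replaces every run of
// characters that are invalid in a DNS-1123 label with a single '-'.
func sanitizeNamespace(username string) string {
	s := strings.ToLower(username)
	s = regexp.MustCompile(`[^a-z0-9-]+`).ReplaceAllString(s, "-")
	// DNS-1123 labels must start and end with an alphanumeric character
	// and be at most 63 characters long.
	s = strings.Trim(s, "-")
	if len(s) > 63 {
		s = s[:63]
	}
	return s
}

func main() {
	// Two distinct usernames collide after sanitization, which is the
	// mapping problem discussed above.
	fmt.Println(sanitizeNamespace("Jane.Doe@example.com"))
	fmt.Println(sanitizeNamespace("jane_doe@example.com"))
}
```

Both calls print `jane-doe-example-com`, so an external mapping (e.g. an annotation maintained by the cluster admin) is needed to disambiguate.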
@l0rd That would mean that we need to grant more privileges to che-operator by default. Even if we use pre-created namespaces, che-operator will have permissions to create namespaces. It is more tricky to control che-operator permissions, especially with one-click install from OperatorHub. We don't know the minimum permissions needed at deploy time, so we would have to set maximum permissions. Is that ok? Or do you see any other option here?
@alexeykazakov PR is currently in review for this eclipse-che/che-operator#136
@alexeykazakov yes, the issue is on our radar: #15323. I think you should comment there.
@metlos @l0rd @alexeykazakov I've confused you. The current state of che-operator is that it's able to run only scenario 1, with everything in the che namespace. We're ok with OpenShift OAuth. The last thing I want to test is whether we need any extra permissions there. The question is whether, and how, we want to increase the permissions of che-operator so that it can grant cluster-wide permissions to the `che` service account.
@sparkoo yes, it's ok to add that.
@l0rd I was too hasty in saying that we need just a ClusterRole for `che`.
I've updated the PR (eclipse-che/che-operator#137). It gives full permissions (to create the namespace and to manage workspaces in that namespace) when OAuth is disabled and a different namespace is used for workspaces. Please check whether that is acceptable. There are still a few issues remaining to solve; they should be in the description.
@sparkoo does that mean that https://www.eclipse.org/che/docs/che-7/advanced-configuration-options/#one-namespace-per-user-strategy doesn't actually work in the Che operator because of a lack of permissions for the che SA?
What exactly do you mean by using oauth? Do you mean enabling OpenShift OAuth?
yes
@l0rd full list of needed permissions is here https://gist.github.com/sparkoo/624bbd1e10c88b8ad8719b93bc847920. It's for both OpenShift and Kubernetes. |
The scope of this issue is implemented. I've created a separate issue for the remaining work.
Is your enhancement related to a problem? Please describe.
Che server can be configured to use custom namespaces for workspace deployment, but che-operator currently hardcodes this configuration to the same namespace the che server is deployed to.
This wasn't configurable before because Che server didn't support safely changing this configuration without damaging existing workspaces (the workspaces would lose their data, because the PVs are bound to a namespace). As of 7.5.0 this is no longer the case and the target namespace can be safely reconfigured. Workspaces now "remember" the namespace they should be deployed to.
Describe the solution you'd like
Che operator should enable configuration of the following 2 properties in the Che CR:

- `che.infra.kubernetes.namespace.default` - this is currently just hardcoded to the same namespace as the che server CR
- `che.infra.kubernetes.namespace.allow_user_defined` - this is a new property in Che 7.5.0 that allows users to specify a custom namespace where a workspace should be deployed.

This depends on #15040 being merged in Che server.
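A possible shape for the resulting configuration in the CheCluster CR, purely as a sketch; the field names below are assumptions for illustration, since the actual names were settled in the implementation PR and may differ:

```yaml
# Illustrative only: spec.server field names are assumptions.
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
spec:
  server:
    # would map to che.infra.kubernetes.namespace.default
    workspaceNamespaceDefault: "<username>-che"
    # would map to che.infra.kubernetes.namespace.allow_user_defined
    allowUserDefinedWorkspaceNamespaces: true
```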