
[chectl] Add support for updating from Che 6 -> Che 7 #14793

Closed
4 tasks
nickboldt opened this issue Oct 7, 2019 · 23 comments
Assignees
Labels
area/chectl: Issues related to chectl, the CLI of Che
area/install: Issues related to installation, including offline/air gap and initial setup
kind/enhancement: A feature request - must adhere to the feature request template.
severity/P1: Has a major impact to usage or development of the system.
status/in-progress: This issue has been taken by an engineer and is under active development.
Milestone

Comments

@nickboldt
Contributor

nickboldt commented Oct 7, 2019

To better support migrating users from Che 6 to 7 (or CRW 1.x to 2.0), it would be hella sweet if chectl's server:update command allowed this.

$➔ chectl server:update --help
update Eclipse Che Server

USAGE
  $ chectl server:update

OPTIONS
  -h, --help                       show CLI help
  -n, --chenamespace=chenamespace  [default: che] Kubernetes namespace where Che resources will be deployed
  -t, --templates=./operator-templates   Folder where installer templates are stored and should be applied
  --installer  Installer that should be used for updating. # operator will be supported only within this issue
  --listr-renderer=listr-renderer  [default: default] Listr renderer. Can be 'default', 'silent' or 'verbose'

On server:update, chectl applies the templates. In case of the operator installer these are ServiceAccount, Role, ClusterRole, RoleBinding, ClusterRoleBinding, CRD, and the operator deployment. And that is pretty much all we should do in phase 1.
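A minimal sketch of that phase-1 flow. The template file names and the applyYaml callback are illustrative assumptions, not chectl's actual API:

```typescript
// Sketch of the phase-1 update flow described above: apply the operator
// template resources in dependency order. File names and the applyYaml
// callback are illustrative; chectl's real implementation differs.
const OPERATOR_TEMPLATES: string[] = [
  "service_account.yaml",
  "role.yaml",
  "cluster_role.yaml",
  "role_binding.yaml",
  "cluster_role_binding.yaml",
  "crd.yaml",
  "operator.yaml", // the operator deployment goes last
];

async function applyOperatorTemplates(
  templatesDir: string,
  applyYaml: (path: string) => Promise<void>,
): Promise<string[]> {
  const applied: string[] = [];
  for (const file of OPERATOR_TEMPLATES) {
    const path = `${templatesDir}/${file}`;
    await applyYaml(path); // a "kubectl apply -f"-style call
    applied.push(path);
  }
  return applied;
}
```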

Since chectl bundles only one prepackaged templates version (the resources of che-operator), it is used by default if the --templates= param is not specified. It means that if you use chectl 7.2.0, it will update your Che to 7.2.0.
If you use chectl 7.3.0 and want to update your Che to a different version such as 7.2.0 (whether lower or greater than the chectl version), you should specify the corresponding template files.

Mario asked to handle and test different scenarios:

  • if Che Server is not deployed yet;
  • if Che Server is already at the version the user is updating to;
  • if the deployed Che Server version is lower than the one the user is updating to;
  • it should work on OCP 3.11 and 4.x, even if the initial install didn't use chectl.
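The first three scenarios can be sketched as a pre-update check. This is a minimal illustration assuming plain "x.y.z" version strings; the function names are hypothetical, not chectl's API:

```typescript
// Sketch of the pre-update checks for the scenarios listed above.
// Version handling assumes plain "x.y.z" strings; names are hypothetical.
type UpdateDecision =
  | "install-required"    // Che Server is not deployed yet
  | "already-up-to-date"  // already at the target version
  | "update";             // deployed version is lower than the target

function compareVersions(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] ?? 0;
    const y = pb[i] ?? 0;
    if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}

function decideUpdate(deployed: string | undefined, target: string): UpdateDecision {
  if (deployed === undefined) return "install-required";
  const cmp = compareVersions(deployed, target);
  if (cmp === 0) return "already-up-to-date";
  if (cmp > 0) throw new Error(`refusing to downgrade from ${deployed} to ${target}`);
  return "update";
}
```

Refusing downgrades here follows the later discussion in this thread about only allowing updates to greater versions.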
@nickboldt nickboldt added the kind/enhancement A feature request - must adhere to the feature request template. label Oct 7, 2019
@che-bot che-bot added the status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. label Oct 7, 2019
@nickboldt nickboldt added team/platform area/install Issues related to installation, including offline/air gap and initial setup labels Oct 7, 2019
@tolusha tolusha added severity/P1 Has a major impact to usage or development of the system. and removed status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. labels Oct 7, 2019
@l0rd l0rd added the area/chectl Issues related to chectl, the CLI of Che label Oct 7, 2019
@skabashnyuk
Contributor

skabashnyuk commented Oct 8, 2019

The task requires some clarification of scope.

  • Is this only a one-time upgrade from 6.19.5 to 7.3(4)?
  • To scale down workspaces, in some cases we need the Che server admin password and (this has to be checked) user tokens from the running workspaces.
  • It's not clear what this operation should do in practice.

@l0rd
Contributor

l0rd commented Oct 8, 2019

It doesn't make sense to make chectl server:update behave differently in the 6 to 7 case. We should instead document the extra steps needed in this case:

  1. chectl server:stop --token=${CHE_ADMIN_TOKEN} --stop-workspaces: the --stop-workspaces option has to be implemented
  2. RBAC update: not sure what this is about and whether it makes sense to make it a chectl command
  3. chectl server:update

@davidfestal
Contributor

  1. RBAC update: not sure what's this is about and if it make sense to make it a chectl command

It is part of the update, to me. When installing using the operator, you apply a number of YAML files (roles, role bindings, cluster role bindings, as well as the deployment YAML). We should apply (patch) all of them during the update.

@davidfestal
Contributor

davidfestal commented Oct 8, 2019

  1. chectl server:stop --token=${CHE_ADMIN_TOKEN} --stop-workspaces: the --stop-workspaces option has to be implemented

Is it really and reliably possible for the Che admin user to stop all the workspaces when the OpenShift OAuth integration is enabled (which is the default, especially on CRW)?
cc @sleshchenko

@skabashnyuk skabashnyuk added the status/info-needed More information is needed before the issue can move into the “analyzing” state for engineering. label Oct 8, 2019
@l0rd
Contributor

l0rd commented Oct 8, 2019

@davidfestal if the RBAC update is already in the deploy folder of the operator, then we only need to stop and update. And maybe make it a single command:

chectl server:update --stop-server --stop-workspaces

@davidfestal
Contributor

@l0rd for a chectl-based installation, why not, but not for OperatorHub-based installations.

@davidfestal
Contributor

I'm still quite skeptical about the real ability to stop all workspaces from the Che admin account when OpenShift OAuth is enabled (which is the default).
At least some time ago (months, years?) it was not possible, as far as I recall, but it would be great if the Che server code owners could answer about the current state of this.

@skabashnyuk
Contributor

At this moment Che 7 has no IDE/agent code etc. that supports running Che 6 workspaces. I see no way Che 6 can be upgraded to Che 7 with running workspaces. The only question here is how to stop/clean up the running parts.

@davidfestal
Contributor

davidfestal commented Oct 8, 2019

At this moment Che 7 has no IDE/agent code etc. that supports running Che 6 workspaces. I see no way Che 6 can be upgraded to Che 7 with running workspaces. The only question here is how to stop/clean up the running parts.

I really see this as a blocker (an update blocker, but also an estimation blocker). If I understand correctly, it seems that:

  • we cannot ensure that updating from Che 6 to Che 7 with running Che 6 workspaces will work correctly;
  • we cannot ensure that it is possible to reliably stop all running Che 6 workspaces (even with the Che admin user) when OpenShift OAuth integration is enabled, which is the default.

We should have clear answers to those questions, and how we answer them will strongly drive the way we envision the update, operator-wise, and especially the potential OperatorHub automatic update.

@skabashnyuk
Contributor

we cannot ensure that updating from Che 6 to Che 7 with running Che 6 workspaces will work correctly

I doubt that we ever had that assumption (a Che 6 to Che 7 upgrade with running workspaces) in the roadmap. Che 6 workspaces would not be accessible after an upgrade.

we cannot ensure that it is possible to reliably stop all running Che 6 workspaces (even with the Che admin user) when Openshift OAuth integration is enabled, which is the default.

I would say that is a question for you @davidfestal; you work with workspaces in this mode for OSIO.

@davidfestal
Contributor

we cannot ensure that it is possible to reliably stop all running Che 6 workspaces (even with the Che admin user) when Openshift OAuth integration is enabled, which is the default.

I would say that is a question for you @davidfestal; you work with workspaces in this mode for OSIO.

As you may know, OSIO user management is really specific and is not the same as the upstream OpenShift OAuth integration, even though similar. So in OSIO it was probably possible, since we were using the oso-proxy and a specific service account with additional rights (Ilya would probably know more).
But here I'm asking the question about the upstream case, about how the Che server works now, and I'm not the code owner of the upstream Che server Kubernetes/OpenShift infrastructures.

@skabashnyuk
Contributor

skabashnyuk commented Oct 8, 2019

But here I'm asking the question about the upstream case, about how the Che server works now, and I'm not the code owner of the upstream Che server Kubernetes/OpenShift infrastructures.

Previous owners may not be reachable at this moment.
Anyway, I think you are right.

@skabashnyuk
Contributor

@davidfestal @nickboldt can you explain again what has to be done in this task? It's not very clear to me.

@skabashnyuk
Contributor

skabashnyuk commented Oct 8, 2019

This issue is related to #14810 (Test UX of an upgrade to Che 7 from the running Che 6 workspace perspective).
At this moment we assume that:

  • a Che 6 workspace is running and an upgrade to Che 7 can happen at any point in time;
  • users should receive a meaningful message about what to do next.

@sleshchenko sleshchenko self-assigned this Oct 9, 2019
@sleshchenko
Member

sleshchenko commented Oct 9, 2019

The minimal scope of this issue needs to be clarified.

  • server:start allows configuring TLS/self-signed certs/OpenShift OAuth/che-operator-cr/wait timeout/plugin & devfile registry (the full list can be found here: https://github.com/che-incubator/chectl/blob/master/src/commands/server/start.ts#L29).
    Should server:update allow reconfiguring all of this now, or could that be implemented later, starting only with updating the Che Operator resources (ServiceAccount, Role, RoleBinding, CRD, ...), the che-operator deployment, and the image?
  • I assume that we're going to start by supporting update for the operator installer only, and then consider implementing it for helm.
  • How do we see the update command interface? Something like the following?
chectl server:update --installer=operator --cheimage=eclipse/che-server:7.0.0 --che-operator-image=eclipse/che-server:7.2.0 --che-operator-cr-yaml=che-cluster.yaml

or simpler

chectl server:update --installer=operator --version=7.0.0 --che-operator-cr-yaml=che-cluster.yaml
  • Should we check that the user is updating to a greater version, not a lower one? Probably yes.

  • Should we allow updating from nightly to a lower release? Probably no, since there may be some incompatible changes; at the very least I can imagine issues with DB migration scripts.

  • A question related to Test Che6 to Che7 upgrade using che-operator #14200:

Keycloak doesn't upgrade from che-keycloak:6.19.0 to che-keycloak:7.1.0 on operator upgrade.

Should chectl override the user-defined configuration for CheCluster with empty values, as @davidfestal mentioned in that issue?

@l0rd @davidfestal It would be useful to hear your opinions about it.

@davidfestal
Contributor

The minimal scope of this issue needs to be clarified.

* `server:start` allows configuring TLS/self-signed certs/OpenShift OAuth/che-operator-cr/wait timeout/plugin & devfile registry (the full list can be found here: https://github.com/che-incubator/chectl/blob/master/src/commands/server/start.ts#L29)
  Should server:update allow reconfiguring all of this now, or could that be implemented later, starting only with updating the Che Operator resources (ServiceAccount, Role, RoleBinding, CRD, ...), the che-operator deployment, and the image?

I don't think so, at least if update means upgrade.

For the operator installer at least, we expect a Che operator to be already installed, and possibly a CheCluster to already be there as well. So the only thing we would have to update is the operator version (docker image tag) while applying the operator YAML files the same way as is done in the start command. But we shouldn't apply the custom resource, since we expect it to be already there.
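That distinction can be sketched as a simple filter over the parsed template manifests. The manifest type is a minimal illustration, not chectl's actual model:

```typescript
// Sketch of the rule above: during an update, apply the operator YAMLs but
// skip the CheCluster custom resource, which is expected to already exist.
// The manifest type is a minimal illustration, not chectl's actual model.
interface K8sManifest {
  kind: string;
  metadata: { name: string };
}

function manifestsToApplyOnUpdate(templates: K8sManifest[]): K8sManifest[] {
  return templates.filter((m) => m.kind !== "CheCluster");
}
```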

* I assume that we're going to start by supporting update for the operator installer only, and then consider implementing it for helm.

That's an interesting idea IMO.

* How do we see the update command interface? Something like the following?
chectl server:update --installer=operator --cheimage=eclipse/che-server:7.0.0 --che-operator-image=eclipse/che-server:7.2.0 --che-operator-cr-yaml=che-cluster.yaml

or simpler

chectl server:update --installer=operator --version=7.0.0 --che-operator-cr-yaml=che-cluster.yaml

We don't need the --che-operator-cr-yaml=che-cluster.yaml, since we shouldn't change the CheCluster custom resource.

* Should we check that the user is updating to a greater version, not a lower one? Probably yes.

* Should we allow updating from nightly to a lower release? Probably no, since there may be some incompatible changes; at the very least I can imagine issues with DB migration scripts.

* Question-related to #14200

Keycloak doesn't upgrade from che-keycloak:6.19.0 to che-keycloak:7.1.0 on operator upgrade.

Should chectl override the user-defined configuration for CheCluster with empty values, as @davidfestal mentioned in that issue?

That might be a nice-to-have, and easy to implement. But anyway, this bug is already documented, so I assume it is not a hard requirement and could also be left as a documented manual step.

@l0rd @davidfestal It would be useful to hear your opinions about it.

@l0rd l0rd added this to the 7.3.0 milestone Oct 10, 2019
@skabashnyuk skabashnyuk removed this from the 7.3.0 milestone Oct 10, 2019
@skabashnyuk skabashnyuk added this to the Backlog - Platform milestone Oct 10, 2019
@sleshchenko
Member

Taking into account the info above, I would proceed in an iterative way.

Step 1 (scope of the current issue):
implement server:update --installer=operator --platform=[kubernetes|openshift|minishift|...] --chenamespace=che --templates=
Chectl has prepackaged templates (the resources of che-operator); they are used by default if the --templates= param is not specified.
On server:update, chectl applies those templates. In case of the operator installer these are ServiceAccount, Role, ClusterRole, RoleBinding, ClusterRoleBinding, CRD, and the operator deployment. And that is pretty much all we should do in phase 1.

Agreed with Mario on this minimal scope. Mario asked to handle and test different scenarios:

  • if Che Server is not deployed yet;
  • if Che Server is already at the version the user is updating to;
  • if the deployed Che Server version is lower than the one the user is updating to.

Possible Step 2:

Should we chectl override the user-defined configuration for CheCluster with empty values as @davidfestal mentioned in that issue?

That might be a nice-to-have, and easy to implement

Agreed with Mario that documentation of the pitfalls should be enough, and we are not going to implement resetting images in the CheCluster CR.

And I would like to leave some more context here:
It might look easy to implement if we don't care about user-defined custom images, but I have some concerns about whether we should always override all images.
For example, a user specified their custom plugin/devfile registry or any other image, like mycompany/che-plugin-registry:7.1.0. If it is reset to empty, eclipse/che-plugin-registry:7.2.0 will be used, and that could be the wrong thing to do.
Better UX could be provided by an interactive prompt like:

You're updating Che and you probably need newer versions of overridden images:
1. eclipse/che-server:7.1.0 -> eclipse/che-server:7.2.0
2. mycompany/che-plugin-registry:7.1.0 -> eclipse/che-plugin-registry:7.2.0
...
Choose which should be overridden [all, 1, 2, ...]: 
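A sketch of how the candidate list for such a prompt could be computed, by diffing the images currently set in the CheCluster CR against the defaults shipped with the new version. The field names are illustrative assumptions:

```typescript
// Sketch of building the candidate list for the interactive prompt above:
// compare the images currently set in the CheCluster CR with the defaults
// shipped with the new chectl/operator version. Field names are illustrative.
interface ImageUpdate {
  current: string;
  proposed: string;
}

function proposedImageUpdates(
  currentImages: Record<string, string>,
  newDefaults: Record<string, string>,
): ImageUpdate[] {
  const updates: ImageUpdate[] = [];
  for (const [field, current] of Object.entries(currentImages)) {
    const proposed = newDefaults[field];
    // Only offer a change when the new version ships a different default.
    if (proposed !== undefined && proposed !== current) {
      updates.push({ current, proposed });
    }
  }
  return updates;
}
```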

Possible Step 3:
Support updating Che deployed with helm.
It needs a dedicated issue, since chectl should not just apply resources but might also need to trigger deployments: if there are no changes in the deployment but only changes in a config map, Kubernetes won't trigger a rollout automatically.
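One common workaround for that rollout problem (not necessarily what chectl would do) is to hash the config data into a pod-template annotation, so that a config-only change still modifies the deployment spec and forces a rollout. The annotation key below is made up for illustration:

```typescript
import { createHash } from "crypto";

// Sketch of a common pattern for the problem above: put a checksum of the
// ConfigMap data into the deployment's pod-template annotations, so that a
// config-only change still modifies the deployment and triggers a rollout.
// The annotation key is made up for illustration.
function configChecksumAnnotation(
  configData: Record<string, string>,
): Record<string, string> {
  // Sort keys so the hash is stable regardless of object key order.
  const canonical = JSON.stringify(
    Object.keys(configData).sort().map((k) => [k, configData[k]]),
  );
  const checksum = createHash("sha256").update(canonical).digest("hex");
  return { "che.example.org/config-checksum": checksum };
}
```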

Possible Step 4:

I don't think so, at least if update means upgrade.

Chectl provides some interface via parameters (--tls, --os-oauth) to help a user construct a CR for deploying Che.
IMHO it seems logical to have a similar interface to help a user construct a CR for updating an existing Che cluster: like enabling debug or configuring a custom plugin registry.
I quite often realize after deploying Che that I missed some parameter, and then I have to remove everything and start from scratch, or edit the CR manually.
With chectl's help I would be able to reconfigure an existing Che.

Agreed with Mario that we're going to implement parameters for the server:update command that would allow patching an existing Che installation (like --tls, --debug), but it is definitely not priority #1.

@sleshchenko
Member

The issue description has been updated in accordance with my latest comment #14793 (comment)
cc @l0rd @skabashnyuk @nickboldt @davidfestal

@sleshchenko sleshchenko added status/in-progress This issue has been taken by an engineer and is under active development. and removed status/info-needed More information is needed before the issue can move into the “analyzing” state for engineering. labels Oct 10, 2019
@l0rd
Contributor

l0rd commented Oct 11, 2019

@sleshchenko looks like a good plan.

@nickboldt
Contributor Author

I like Phase 2 and would love to see it in chectl in time for CRW 2, but Phase 1 is the minimum viable product requirement.

@davidfestal
Contributor

And I would like to leave some more context here:
It might look easy to implement if we don't care about user-defined custom images, but I have some concerns about whether we should always override all images.
For example, a user specified their custom plugin/devfile registry or any other image, like mycompany/che-plugin-registry:7.1.0. If it is reset to empty, eclipse/che-plugin-registry:7.2.0 will be used, and that could be the wrong thing to do.
Better UX could be provided by an interactive prompt like:

You're updating Che and you probably need newer versions of overridden images:
1. eclipse/che-server:7.1.0 -> eclipse/che-server:7.2.0
2. mycompany/che-plugin-registry:7.1.0 -> eclipse/che-plugin-registry:7.2.0
...
Choose which should be overridden [all, 1, 2, ...]: 

The idea was not to reset to the empty string systematically, but rather to detect when there is an update from Che 6.x to 7.1+ (or CRW 1.x to 2.0) and do it only in that case. And in that case, when upgrading from Che 6 (or CRW 1.2) to Che 7, there is little chance that the old customized images would correctly work with the rest of the updated images. So resetting everything (or nothing) to the empty string also seems more robust.

Additionally, it would be a one-time fix, applied only once when updating from 6.x versions to at least 7.1.0 (or CRW 2.0), because the underlying bug is fixed as of 7.1.0.

@sleshchenko
Member

The idea was not to reset to the empty string systematically, but rather to detect when there is an update from Che 6.x to 7.1+ (or CRW 1.x to 2.0) and do it only in that case.

I figured out that chectl always sets images for che-server and the plugin and devfile registries: https://github.com/che-incubator/chectl/blob/master/src/api/kube.ts#L771
So, if you initially install Che with the operator and chectl, then after server:update you always need to edit the CR as well, at least until we either handle this automatically or stop setting these values during installation.

@sleshchenko
Member

The minimal functionality that we can deliver was merged with che-incubator/chectl#353.
I'm closing this issue and will create a separate issue for improving the server:update command.
