
Refactor chectl to make tasks reusable for different commands #257

Merged
merged 4 commits into from
Sep 13, 2019

Conversation

sleshchenko
Contributor

@sleshchenko sleshchenko commented Aug 21, 2019

What does this PR do?

This PR has two major purposes:

  • Refactor chectl to make tasks reusable for different commands;
  • Implement an ability to start already deployed but stopped Che.

The main idea of refactoring is the following:
Have the following different packages:

  • api <- holds everything that might be reused while working with different APIs BUT not tasks; CLI stuff lives here
  • tasks <- holds tasks that are divided into different classes by component
    | - installers
    | - platforms
    | - che
  • commands <- holds the chectl commands; a command should reuse tasks from the tasks package, or it can declare its own unique small tasks

The main pro of this refactoring is that logic related to a component is encapsulated in one class.
For example, currently every command reads flags['che-namespace'] each time it needs to do something with Che components (find an ingress, scale a deployment); this can be encapsulated in a single che-tasks class.
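As an illustration of the encapsulation idea, here is a minimal TypeScript sketch. The names (CheTasks, KubeClient) and methods are hypothetical, not chectl's actual API: the point is that the namespace is resolved once, and commands call component-level methods instead of reading flags['che-namespace'] themselves.

```typescript
// Hypothetical sketch: a tasks class captures the namespace once and
// exposes component-level operations. KubeClient is an illustrative
// interface standing in for real Kubernetes API calls.
interface KubeClient {
  scaleDeployment(name: string, namespace: string, replicas: number): string
  getIngressHost(name: string, namespace: string): string
}

class CheTasks {
  constructor(private readonly namespace: string, private readonly kube: KubeClient) {}

  scaleCheUp(): string {
    // The namespace was resolved once in the constructor; callers never
    // touch flags['che-namespace'] directly.
    return this.kube.scaleDeployment('che', this.namespace, 1)
  }

  cheUrl(): string {
    return 'https://' + this.kube.getIngressHost('che-ingress', this.namespace)
  }
}

// Usage with a fake client, wired up the way a command would do it from flags:
const fake: KubeClient = {
  scaleDeployment: (name, ns, replicas) => `${ns}/${name} -> ${replicas}`,
  getIngressHost: (name, ns) => `${name}.${ns}.example.com`
}
const tasks = new CheTasks('che', fake)
console.log(tasks.scaleCheUp()) // che/che -> 1
console.log(tasks.cheUrl())     // https://che-ingress.che.example.com
```

With this shape, a command constructs one CheTasks from its flags and never repeats the namespace plumbing.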

Note that this PR also changes how chectl checks whether Che is already deployed; with these changes it's possible to start an already deployed but stopped Che.

This PR also contains a lot of small fixes that I did not manage to extract into separate commits, like:

  • the platform parameter for the server:start command is required only when there is no already deployed Che;
  • server:stop now stops the devfile and plugin registries as well as Che, Postgres and Keycloak;
  • a warning is printed that multiuser Che is automatically deployed when che-operator is used;
  • the minishift addon is updated to use a Deployment instead of a DeploymentConfig, so chectl gets rid of calling DC methods.
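The extended server:stop behaviour above can be sketched as scaling every Che-related Deployment to zero replicas. This is a minimal illustration under assumptions: the deployment names and the injected scale callback are stand-ins, not chectl's actual implementation.

```typescript
// Illustrative sketch of stopping all Che components: scale each
// Deployment to zero replicas (stopped, but still deployed).
// Deployment names are assumed for the example.
const CHE_DEPLOYMENTS = ['che', 'postgres', 'keycloak', 'devfile-registry', 'plugin-registry']

function stopCheServer(
  namespace: string,
  scale: (deployment: string, namespace: string, replicas: number) => void
): string[] {
  const stopped: string[] = []
  for (const name of CHE_DEPLOYMENTS) {
    scale(name, namespace, 0) // 0 replicas == stopped but not deleted
    stopped.push(name)
  }
  return stopped
}

// Usage with a fake scale callback:
stopCheServer('eclipse-che', (d, ns, r) => console.log(`${ns}/${d} scaled to ${r}`))
```

Because nothing is deleted, a later server:start only needs to scale the same Deployments back up.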

Is this PR tested?

I made sure that the E2E tests are not broken, but not many cases are covered yet.

I tried to cover the start-stop-start scenario with E2E tests: https://github.com/sleshchenko/chectl/compare/startStopStartScenario...sleshchenko:startStopStartScenarioE2E?expand=1
They failed because of the current implementation of the api/kube.ts#waitUntilPodIsDeleted method: it says it waits until the pod is deleted, but in fact it waits until the pod is not ready. As a result, server:start sees two Che pods instead of the expected one. I'll register a separate issue for it.
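The fix this implies can be sketched as polling until the pod object is actually absent, not merely unready. This is a hedged sketch: listPodNames is a hypothetical stand-in for the real Kubernetes list call, not chectl's api/kube.ts code.

```typescript
// Sketch: wait until the pod no longer exists at all, instead of
// returning as soon as it stops being ready. `listPodNames` stands in
// for a real Kubernetes API call returning current pod names.
async function waitUntilPodIsDeleted(
  podName: string,
  listPodNames: () => Promise<string[]>,
  intervalMs = 100,
  timeoutMs = 5000
): Promise<void> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const pods = await listPodNames()
    if (!pods.includes(podName)) {
      return // the pod object is gone -- truly deleted
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs))
  }
  throw new Error(`Pod ${podName} was not deleted within ${timeoutMs}ms`)
}

// Usage: simulate a pod that disappears on the third poll.
let polls = 0
const fakeLister = async () => (++polls < 3 ? ['che-abc123'] : [])
waitUntilPodIsDeleted('che-abc123', fakeLister).then(() => console.log('deleted after', polls, 'polls'))
```

Waiting for actual absence avoids the race where server:start briefly observes both the old and the new Che pod.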

I did manual testing and I'm posting detailed results here. It may be useful to review them as well, since they show the actual output of chectl:

No Platform Specified

Screenshot

Screenshot_20190910_135245

Minishift

Minishift Addon
server:start --platform=minishift

Screenshot_20190910_141404

server:stop

Screenshot_20190910_145520

server:start

Screenshot_20190910_145635

server:delete

Screenshot_20190910_150316

Che-operator
server:start --platform=minishift --installer=operator

Screenshot_20190910_151016

server:stop

Screenshot_20190910_151318

server:start (does not work correctly because of https://github.com/eclipse-che/che/issues/14445)

Screenshot_20190910_151733

server:delete

Screenshot_20190910_151903

Minikube

Helm Single User
server:start --platform=minikube

Screenshot_20190910_154009

server:stop

Screenshot_20190910_154049

server:start

Screenshot_20190910_154152

server:delete

Screenshot_20190910_154217

Helm multiuser
server:start --platform=minikube --multiuser

Screenshot_20190910_154845

server:stop

Screenshot_20190910_155029

server:start

Screenshot_20190910_170303

server:delete

Screenshot_20190910_155526

CRC

server:start --platform=openshift --installer=operator

Screenshot_20190911_110921

server:stop

Screenshot_20190911_111117

server:start (does not work correctly because of https://github.com/eclipse-che/che/issues/14445)

Screenshot_20190910_151733

server:delete

Screenshot_20190910_151903

What issues does this PR fix or reference?

It's done in the scope of the following issue eclipse-che/che#14465

It depends on minishift/minishift#3333

@sleshchenko sleshchenko force-pushed the startStopStartScenario branch 3 times, most recently from 7d1e326 to 980c329 Compare August 27, 2019 15:25
Collaborator

@benoitf benoitf left a comment

I'm fine after 7.1 release

@benoitf
Collaborator

benoitf commented Sep 2, 2019

but you need to fix CI build and semantic commits

@sleshchenko
Contributor Author

@benoitf Sure. This PR is not yet ready for merging. Hopefully I'll continue testing and fixing this PR after finishing the console link issue.

@sleshchenko sleshchenko force-pushed the startStopStartScenario branch 3 times, most recently from 2c720df to f654110 Compare September 6, 2019 08:36
@sleshchenko
Contributor Author

Update: it's pretty much ready for review. TODO list

  1. // TODO Che Deployment depends on Postgres and Keycloak. TAKE CARE OF THIS https://github.com/che-incubator/chectl/pull/257/files#diff-0c0044e814885b268bec876f26c37a87R240
  2. Clean up commits: separate unrelated changes to make review easier, make semantic commits
  3. Make sure the E2E tests work (maybe add new ones; not sure I'll have enough time to do it)

@sleshchenko sleshchenko mentioned this pull request Sep 6, 2019
@sleshchenko sleshchenko force-pushed the startStopStartScenario branch 7 times, most recently from 8076cd0 to ab2d57b Compare September 9, 2019 14:59
@sleshchenko sleshchenko marked this pull request as ready for review September 10, 2019 11:59
@sleshchenko
Contributor Author

@benoitf @l0rd Now I believe it's ready for review. Sorry that I did not manage to separate the changes into different self-explaining commits, but that would take a lot of time because I made a lot of changes after the actual refactoring.

src/api/kube.ts Outdated
  const agent = new https.Agent({
    rejectUnauthorized: false
  })
  let endpoint = ''
  try {
    endpoint = `${currentCluster.server}/healthz`
-   let response = await axios.get(`${endpoint}`, { httpsAgent: agent, headers: { Authorization: 'bearer ' + token } })
+   let response = await axios.get(`${endpoint}`, { httpsAgent: agent })
Collaborator

Why is the token removed there?

Contributor Author

Otherwise, the following error appears on minishift when logged in as developer:
Screenshot_20190910_135245

Collaborator

Hum, but if you remove it there are failures with some k8s clusters...

Contributor Author

I can't guarantee that.
I only discovered that it does not work on minishift when logged in as a non-admin user.
I tested my changes (without the token) and they work on minikube (without explicitly enabling RBAC) and on minishift.

I can test it on minikube with RBAC enabled, or on OpenShift 4.1.

Contributor Author

Tested on CRC 4.1 as well

Collaborator

security is for remote, not local dev like minishift, minikube or crc.

eclipse-che/che#13800

Contributor Author

@benoitf Thanks for sharing the concrete implementation where it fails.
On one of the OSIO clusters the healthz endpoint is public even though it is supposed to be secure:
https://api.starter-us-east-2.openshift.com/healthz
I'm going to revert it and create a separate issue about healthz failing when the user does not have cluster-admin rights.
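One way to reconcile both sides of this thread can be sketched as trying the authorized request first and falling back to an anonymous one, since some clusters expose /healthz publicly while others reject non-admin tokens. This is a hedged sketch, not chectl code: the get callback is a hypothetical stand-in for the real axios call.

```typescript
// Sketch: probe /healthz with the token if available, and retry
// anonymously on failure (e.g. 403 for a non-admin user on minishift).
// `get` stands in for an HTTP GET returning the status code.
async function probeHealthz(
  get: (headers?: Record<string, string>) => Promise<number>,
  token?: string
): Promise<boolean> {
  if (token) {
    try {
      return (await get({ Authorization: `bearer ${token}` })) === 200
    } catch {
      // authorized request rejected -- fall through to anonymous probe
    }
  }
  try {
    return (await get()) === 200
  } catch {
    return false
  }
}

// Usage: a cluster that rejects the token but allows anonymous access.
const fakeGet = async (headers?: Record<string, string>) => {
  if (headers) { throw new Error('403 Forbidden') }
  return 200
}
probeHealthz(fakeGet, 'some-token').then(ok => console.log('healthz reachable:', ok))
```

The fallback keeps the token for clusters that require it without breaking non-admin logins that the thread describes.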

@SDAdham

SDAdham commented Sep 12, 2019

Hello everyone, please let me know once this is released.

Collaborator

@benoitf benoitf left a comment

repository shouldn't use sleshchenko ("eclipse-che-minishift": "git://github.com/sleshchenko/minishift#updateChe", )

and token shouldn't be removed in Kube (or be removed only for minishift)

tasks reusable

Signed-off-by: Sergii Leshchenko <sleshche@redhat.com>
…stry that is used by Che

Signed-off-by: Sergii Leshchenko <sleshche@redhat.com>
Signed-off-by: Sergii Leshchenko <sleshche@redhat.com>
… addon

Signed-off-by: Sergii Leshchenko <sleshche@redhat.com>
@sleshchenko
Contributor Author

repository shouldn't use sleshchenko ("eclipse-che-minishift": "git://github.com/sleshchenko/minishift#updateChe", )

Since my PR to the minishift repo is merged, I'm able to do that.
master from the minishift repo is used now!

and token shouldn't be removed in Kube (or be removed only for minishift)

I believe this issue can be resolved outside the scope of the current PR. I reverted those changes (removal of the service account token) and created the corresponding issue eclipse-che/che#14534 to investigate it more deeply and fix it.

@sleshchenko sleshchenko merged commit b9f6ac7 into che-incubator:master Sep 13, 2019
@sleshchenko sleshchenko deleted the startStopStartScenario branch September 13, 2019 10:41