
Added docker-service-kill experiment implementation in litmus-go. #379

Merged
merged 9 commits into from
Jun 12, 2021
Conversation

@Jonsy13 (Contributor) commented Jun 11, 2021

Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>

What this PR does / why we need it:

This PR will add the implementation of docker-service-kill in litmus-go.

Some observations while testing on GKE Cluster -

  • With the image type Container-Optimised OS with Containerd, the experiment only kills the docker service; the node stays in Ready state.
  • With the image type Ubuntu with Docker (Ubuntu), the experiment kills the docker service and the node goes into NotReady state.

Checklist:

  • Fixes #
  • PR message has documentation-related information
  • Labelled this PR & related issue with breaking-changes tag
  • PR message has breaking-changes-related information
  • Labelled this PR & related issue with requires-upgrade tag
  • PR message has upgrade-related information
  • Commit has unit tests
  • Commit has integration tests
  • E2E run required for the changes

Experiment Logs - (Ubuntu with Docker (Ubuntu))

➜  experiment git:(dockersvc) ✗ go run experiment.go --name "docker-service-kill" --kubeconfig ~/.kube/config
INFO[2021-06-11T11:49:12+05:30] Experiment Name: docker-service-kill         
INFO[2021-06-11T11:49:12+05:30] [PreReq]: Getting the ENV for the  experiment 
INFO[2021-06-11T11:49:12+05:30] [PreReq]: Updating the chaos result of docker-service-kill experiment (SOT) 
INFO[2021-06-11T11:49:14+05:30] [Info]: The application information is as follows  Node Label= Target Node= Ramp Time=0 Namespace= App Label=
INFO[2021-06-11T11:49:14+05:30] [Status]: Verify that the AUT (Application Under Test) is running (pre-chaos) 
INFO[2021-06-11T11:49:14+05:30] [Status]: No appLabels provided, skipping the application status checks 
INFO[2021-06-11T11:49:14+05:30] [Status]: Getting the status of target nodes 
INFO[2021-06-11T11:49:15+05:30] The Node status are as follows                Ready=true Node=gke-vedant-2-default-pool-5ba8b8ef-8mjh
INFO[2021-06-11T11:49:15+05:30] The Node status are as follows                Node=gke-vedant-2-default-pool-5ba8b8ef-cfsq Ready=true
INFO[2021-06-11T11:49:15+05:30] The Node status are as follows                Ready=true Node=gke-vedant-2-default-pool-5ba8b8ef-fv9q
INFO[2021-06-11T11:49:15+05:30] [Info]: Details of node under chaos injection  NodeName=gke-vedant-2-default-pool-5ba8b8ef-cfsq
INFO[2021-06-11T11:49:16+05:30] [Status]: Checking the status of the helper pod 
INFO[2021-06-11T11:49:18+05:30] docker-service-kill-helper-xtegvx helper pod is in Running state 
INFO[2021-06-11T11:49:20+05:30] [Status]: Check for the node to be in NotReady state 
INFO[2021-06-11T11:49:33+05:30] The Node status are as follows                Node=gke-vedant-2-default-pool-5ba8b8ef-cfsq Ready=false
INFO[2021-06-11T11:49:33+05:30] [Wait]: Waiting till the completion of the helper pod 
INFO[2021-06-11T11:49:34+05:30] helper pod status: Running                   
(… "helper pod status: Running" repeated every 1–2 seconds from 11:49:35 to 11:50:59 …)
INFO[2021-06-11T11:51:00+05:30] helper pod status: Running                   
INFO[2021-06-11T11:51:01+05:30] helper pod status: Succeeded                 
INFO[2021-06-11T11:51:01+05:30] [Status]: The running status of Pods are as follows  Pod=docker-service-kill-helper-xtegvx Status=Succeeded
INFO[2021-06-11T11:51:02+05:30] [Status]: Getting the status of target nodes 
INFO[2021-06-11T11:51:03+05:30] The Node status are as follows                Ready=true Node=gke-vedant-2-default-pool-5ba8b8ef-cfsq
INFO[2021-06-11T11:51:03+05:30] [Cleanup]: Deleting the helper pod           
INFO[2021-06-11T11:51:05+05:30] [Confirmation]: docker-service-kill chaos has been injected successfully 
INFO[2021-06-11T11:51:05+05:30] [Status]: Verify that the AUT (Application Under Test) is running (post-chaos) 
INFO[2021-06-11T11:51:05+05:30] [Status]: No appLabels provided, skipping the application status checks 
INFO[2021-06-11T11:51:05+05:30] [The End]: Updating the chaos result of docker-service-kill experiment (EOT) 

Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>
Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>
@Jonsy13 Jonsy13 self-assigned this Jun 11, 2021
ispeakc0de previously approved these changes Jun 11, 2021
Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>
ispeakc0de previously approved these changes Jun 11, 2021
@uditgaurav (Member) commented:

We can also try to add support for the Containerd Ubuntu image. In that case, we have to wait for the docker service to be up and active, and also skip the node status check (as the node will not go down). We can do this in an upcoming PR as well. -- @Jonsy13

@ksatchit ksatchit merged commit e229c73 into litmuschaos:master Jun 12, 2021
ispeakc0de pushed a commit to ispeakc0de/litmus-go that referenced this pull request Jun 15, 2021
…tmuschaos#379)

* Added docker-svc-kill implementation in litmus-go.

Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>
ksatchit pushed a commit that referenced this pull request Jun 15, 2021
* chore(dns): adding spoofmap env in helper pod (#363)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* Fix sequence env in kafka broker pod experiment (#369)

* Fix sequence env in kafka broker pod experiment

Signed-off-by: uditgaurav <udit@chaosnative.com>

* add pod affected percentage env

Signed-off-by: uditgaurav <udit@chaosnative.com>

* chore(contribution): Adding contribution guide, bch check, issue & PR templates (#367)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* fix: vendor/golang.org/x/net/http2/Dockerfile to reduce vulnerabilities (#361)

The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-UBUNTU1404-OPENSSL-1049144
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-1065770
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-406981
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-473059
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-546522

Co-authored-by: Karthik Satchitanand <karthik.s@mayadata.io>

* fix: vendor/golang.org/x/net/http2/Dockerfile to reduce vulnerabilities (#364)

The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-UBUNTU1404-OPENSSL-1049144
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-1065770
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-406981
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-473059
- https://snyk.io/vuln/SNYK-UBUNTU1404-SUDO-546522

Co-authored-by: Karthik Satchitanand <karthik.s@mayadata.io>

* chore(env): : updated the env setter function (#365)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* rm(vendor): removing the vendor directory from litmus-go (#366)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(image): reduce the go-runner image size (#371)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* Chore(e2e): Update e2e workflows and add more node level tests (#372)

* Chore(e2e): Update e2e workflows and add more node level tests

Signed-off-by: uditgaurav <udit@chaosnative.com>

* add kind config

Signed-off-by: uditgaurav <udit@chaosnative.com>

* chore(probe): adding probe abort (#370)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(helper): Adding statusCheckTimeouts for the helper status check (#373)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(chaosresult): updating verdict and status in chaosengine and chaosresult (#375)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(sdk): updating sdk (#378)

* chore(sdk): updating sdk

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(pod-delete): Adding target details inside chaosresult (#336)

* chore(pod-delete): Adding target details inside chaosresult

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(pod-delete): Adding target details inside chaosresult for pod-autoscaler

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(experiment): Adding target details inside chaosresult for the experiments which contains helper pod (#342)

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(experiment): Adding target details inside chaosresult for the experiments which doesn't contain helper pod (#341)

* chore(experiment): Adding target details inside chaosresult for the experiments which doesn't contain helper pod

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* chore(experiment): Adding target details inside chaosresult for the pumba helper

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

* Added docker-service-kill experiment implementation in litmus-go. (#379)

* Added docker-svc-kill implementation in litmus-go.

Signed-off-by: Jonsy13 <vedant.shrotria@chaosnative.com>

* Chore(Dockerfile): Update Dockerfile to take binaries from test-tool release build (#380)

Signed-off-by: udit <udit@chaosnative.com>

* chore(1.13.x): updating branch to 1.13.x in github actions

Signed-off-by: shubhamchaudhary <shubham@chaosnative.com>

Co-authored-by: Udit Gaurav <35391335+uditgaurav@users.noreply.github.com>
Co-authored-by: Snyk bot <github+bot@snyk.io>
Co-authored-by: Karthik Satchitanand <karthik.s@mayadata.io>
Co-authored-by: VEDANT SHROTRIA <40681425+Jonsy13@users.noreply.github.com>
4 participants