
bug(fix): Fix for summary event and getting target container name #37

Merged: 1 commit merged into litmuschaos:master from fix on Jul 5, 2020
Conversation

uditgaurav
Member

Signed-off-by: Udit Gaurav udit.gaurav@mayadata.io

  • This PR fixes the issues in getting the target container name and in the summary events.

Signed-off-by: Udit Gaurav <uditgaurav@gmail.com>
@uditgaurav uditgaurav changed the title bug(fix): Add for summary event and getting target container name bug(fix): Fix for summary event and getting target container name Jul 5, 2020
@uditgaurav
Member Author

The following command returns the correct pod list, but it also returns an error saying that the resource name may not be empty:

podList, err := clients.KubeClient.CoreV1().Pods(experimentsDetails.AppNS).List(v1.ListOptions{LabelSelector: experimentsDetails.AppLabel})
if err != nil {
	return "", "", err
}

So it was converted to:

podList, _ := clients.KubeClient.CoreV1().Pods(experimentsDetails.AppNS).List(v1.ListOptions{LabelSelector: experimentsDetails.AppLabel})
if len(podList.Items) == 0 {
	// The List error is discarded above, so build a fresh error here rather
	// than wrapping it: errors.Wrapf(nil, ...) from pkg/errors returns nil.
	return "", "", errors.Errorf("Fail to get the application pod in %v namespace", experimentsDetails.AppNS)
}

Now an error is thrown only when the pod list comes back empty for the given app_label and app_namespace.
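The selection logic above can be sketched in isolation. This is a minimal, self-contained illustration of the fixed pattern (check for an empty list, then pick a target), not the actual chaoslib code; the function name and signature are hypothetical, and plain string slices stand in for the client-go pod list:

```go
package main

import (
	"fmt"
	"math/rand"
)

// getTargetPod mirrors the fixed behavior: rather than trusting the client
// error alone, it checks whether the listed pods are empty and returns a
// descriptive error naming the namespace. Hypothetical helper, not the
// real chaoslib API.
func getTargetPod(pods []string, namespace string) (string, error) {
	if len(pods) == 0 {
		return "", fmt.Errorf("fail to get the application pod in %v namespace", namespace)
	}
	// Pick one pod at random as the chaos target.
	return pods[rand.Intn(len(pods))], nil
}

func main() {
	// Empty list: we now get a clear, namespace-specific error.
	if _, err := getTargetPod(nil, "default"); err != nil {
		fmt.Println("empty list:", err)
	}
	// Non-empty list: a target pod is selected.
	pod, _ := getTargetPod([]string{"nginx-7bb7cd8db5-tvpcj"}, "default")
	fmt.Println("target:", pod)
}
```

The point of the pattern is that an empty result set is treated as an explicit failure instead of silently proceeding (or surfacing an unrelated client error).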

@uditgaurav uditgaurav self-assigned this Jul 5, 2020
@uditgaurav
Member Author

Logs after fix:

W0705 11:59:13.158721       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
time="2020-07-05T11:59:13Z" level=info msg="[PreReq]: Getting the ENV for the  experiment"
time="2020-07-05T11:59:13Z" level=info msg="[PreReq]: Updating the chaos result of pod-network-duplication experiment (SOT)"
time="2020-07-05T11:59:13Z" level=info msg="The application informations are as follows\n" Namespace=default Label="run=nginx" Ramp Time=0
time="2020-07-05T11:59:13Z" level=info msg="[Status]: Verify that the AUT (Application Under Test) is running (pre-chaos)"
time="2020-07-05T11:59:13Z" level=info msg="[Status]: Checking whether application pods are in running state"
time="2020-07-05T11:59:13Z" level=info msg="The running status of Pods are as follows" Pod=nginx-7bb7cd8db5-tvpcj Status=Running
time="2020-07-05T11:59:13Z" level=info msg="[Status]: Checking whether application containers are in running state"
time="2020-07-05T11:59:13Z" level=info msg="The running status of container are as follows" Status=Running container=nginx Pod=nginx-7bb7cd8db5-tvpcj
time="2020-07-05T11:59:13Z" level=info msg="[Prepare]: Application pod name under chaos: nginx-7bb7cd8db5-tvpcj"
time="2020-07-05T11:59:13Z" level=info msg="[Prepare]: Application node name: ip-172-31-0-194.us-east-2.compute.internal"
time="2020-07-05T11:59:13Z" level=info msg="[Prepare]: Target container name: nginx"
time="2020-07-05T11:59:13Z" level=info msg="[Status]: Checking the status of the helper pod"
time="2020-07-05T11:59:13Z" level=info msg="[Status]: Checking whether application pods are in running state"
time="2020-07-05T11:59:15Z" level=info msg="The running status of Pods are as follows" Pod=pumba-netem-kmwvls Status=Running
time="2020-07-05T11:59:15Z" level=info msg="[Status]: Checking whether application containers are in running state"
time="2020-07-05T11:59:15Z" level=info msg="The running status of container are as follows" container=pumba Pod=pumba-netem-kmwvls Status=Running
time="2020-07-05T11:59:15Z" level=info msg="[Wait]: waiting till the completion of the helper pod"
time="2020-07-05T11:59:15Z" level=info msg="helper pod status: Running"
[... the "helper pod status: Running" line repeats once per second until 12:00:15 ...]
time="2020-07-05T12:00:15Z" level=info msg="helper pod status: Running"
time="2020-07-05T12:00:16Z" level=info msg="helper pod status: Succeeded"
time="2020-07-05T12:00:16Z" level=info msg="The running status of Pods are as follows" Status=Succeeded Pod=pumba-netem-kmwvls
time="2020-07-05T12:00:16Z" level=info msg="[Cleanup]: Deleting the helper pod"
time="2020-07-05T12:00:16Z" level=info msg="[Confirmation]: The app pod network duplication"
time="2020-07-05T12:00:16Z" level=info msg="[Status]: Verify that the AUT (Application Under Test) is running (post-chaos)"
time="2020-07-05T12:00:16Z" level=info msg="[Status]: Checking whether application pods are in running state"
time="2020-07-05T12:00:16Z" level=info msg="The running status of Pods are as follows" Pod=nginx-7bb7cd8db5-tvpcj Status=Running
time="2020-07-05T12:00:16Z" level=info msg="[Status]: Checking whether application containers are in running state"
time="2020-07-05T12:00:16Z" level=info msg="The running status of container are as follows" container=nginx Pod=nginx-7bb7cd8db5-tvpcj Status=Running
time="2020-07-05T12:00:16Z" level=info msg="[The End]: Updating the chaos result of pod-network-duplication experiment (EOT)"

@uditgaurav
Member Author

Chaos Result describe:

Name:         nginx-network-chaos-pod-network-duplication
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  litmuschaos.io/v1alpha1
Kind:         ChaosResult
Metadata:
  Creation Timestamp:  2020-07-02T15:02:13Z
  Generation:          20
  Resource Version:    6997774
  Self Link:           /apis/litmuschaos.io/v1alpha1/namespaces/default/chaosresults/nginx-network-chaos-pod-network-duplication
  UID:                 1c120398-cafc-4cd6-a512-f7bb676627dc
Spec:
  Engine:      nginx-network-chaos
  Experiment:  pod-network-duplication
Status:
  Experimentstatus:
    Fail Step:  N/A
    Phase:      Completed
    Verdict:    Pass
Events:
  Type    Reason   Age   From                                  Message
  ----    ------   ----  ----                                  -------
  Normal  Summary  117s  pod-network-duplication-o6z2a0-mgbdk  pod-network-duplication experiment has been Passed

@ksatchit ksatchit merged commit f0f3d97 into litmuschaos:master Jul 5, 2020
@uditgaurav uditgaurav deleted the fix branch July 5, 2020 12:17
ksatchit pushed a commit that referenced this pull request Jul 6, 2020
* refactor(experiments): Refactor litmus go experiments (#29)

Signed-off-by: Udit Gaurav <uditgaurav@gmail.com>

* feat(experiments): Add pod memory hog experiment (#31)

Signed-off-by: Udit Gaurav <uditgaurav@gmail.com>

* refactor(go-experiments): separate the types.go file for each experiment (#34)

Signed-off-by: shubhamchaudhary <shubham.chaudhary@mayadata.io>

* update(contribution-guide): updating contribution guide according to new schema changes (#35)

Signed-off-by: shubhamchaudhary <shubham.chaudhary@mayadata.io>

* chore(experiment): Add pod network duplication experiment in generic experiments of LitmusChaos (#27)

* chore(experiment): Add pod network duplication experiment in generic experiments of LitmusChaos

Signed-off-by: Udit Gaurav <uditgaurav@gmail.com>

* bug(fix): Add for summary event and getting target container name (#37)

Signed-off-by: Udit Gaurav <uditgaurav@gmail.com>

* bug(fix): Remove extra index from the list in pod duplication experiment (#38)

Signed-off-by: Udit Gaurav <uditgaurav@gmail.com>

Co-authored-by: Shubham Chaudhary <shubham.chaudhary@mayadata.io>