
Failed to execute termination for v1.StatefulSet . Error: no terminations requested for v1.StatefulSet #227

Closed
puagg opened this issue Feb 23, 2022 · 0 comments

puagg commented Feb 23, 2022

I have deployed kube-monkey in the default namespace, and the following is my configuration.

```
[root@k8s-master0 ]# kubectl get po
NAME                                    READY   STATUS    RESTARTS   AGE
kubemonkey-kube-monkey-57b6d94c-2lmbs   1/1     Running   0          3h48m
```

```
[root@k8s-master0 ]# kubectl get cm kubemonkey-kube-monkey -o yaml
apiVersion: v1
data:
  config.toml: |
    [kubemonkey]
    dry_run = false
    run_hour = 8
    start_hour = 10
    end_hour = 16
    blacklisted_namespaces = [ "kube-system", ]
    time_zone = "America/New_York"
    [debug]
    enabled = true
    schedule_immediate_kill = true
    [notifications]
    enabled = false
    [notifications.attacks]
```

Application pods (busybox and alpine are StatefulSets and have the labels below set):

```yaml
apiVersion: apps/v1
kind: {{ $kind }}
metadata:
  name: test-seaas-{{$pod}}
  namespace: {{$namespace}}
  labels:
    kube-monkey/enabled: enabled
    kube-monkey/identifier: monkey-victim
    kube-monkey/kill-mode: random-max-percent
    kube-monkey/kill-value: "100"
    kube-monkey/mtbf: "1"
...
template:
  metadata:
    ...
    labels:
      kube-monkey/enabled: enabled
      kube-monkey/identifier: monkey-victim
```
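As a sanity check that the victim labels actually landed on the pods, a label selector query along these lines can be used (the `app-test` namespace is taken from the schedule output further down): `kubectl get pods -n app-test -l kube-monkey/enabled=enabled,kube-monkey/identifier=monkey-victim --show-labels`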

I have already set the labels on the pods that kube-monkey needs to target. I can see that it does not terminate all the pods labeled as kube-monkey victims, and it throws the error below for them.

Below is the schedule generated:

```
I0223 11:10:45.994038 1 schedule.go:66] Status Update: 3 terminations scheduled today
I0223 11:10:45.994058 1 schedule.go:68] v1.Deployment seaas-controller scheduled for termination at 02/23/2022 06:11:31 -0500 EST
I0223 11:10:45.994062 1 schedule.go:68] v1.StatefulSet test-seaas-alpine scheduled for termination at 02/23/2022 06:11:11 -0500 EST
I0223 11:10:45.994066 1 schedule.go:68] v1.StatefulSet test-seaas-busybox scheduled for termination at 02/23/2022 06:11:17 -0500 EST
I0223 11:10:45.994101 1 kubemonkey.go:76] Status Update: Waiting to run scheduled terminations.
-----------     --------------  ----------------        ----------------
v1.Deployment   seaas-system    seaas-controller        02/23/2022 06:11:31 -0500 EST
v1.StatefulSet  app-test        test-seaas-alpine       02/23/2022 06:11:11 -0500 EST
v1.StatefulSet  app-test        test-seaas-busybox      02/23/2022 06:11:17 -0500 EST
********** End of schedule **********
```
The interesting part is that this error keeps changing for the pods within the same namespace. For example, in the first run the error is for busybox and it is not terminated, but in the second run it throws the error for alpine (the other pod deployed in the same namespace as busybox) and terminates busybox. Am I missing something in the configuration?

```
E0223 10:58:25.648957 1 kubemonkey.go:82] Failed to execute termination for v1.StatefulSet test-seaas-busybox. Error: no terminations requested for v1.StatefulSet test-seaas-busybox
E0223 10:58:39.651264 1 kubemonkey.go:82] Failed to execute termination for v1.Deployment seaas-controller. Error: no terminations requested for v1.Deployment seaas-controller
```

@puagg puagg changed the title Failed to execute termination for v1.StatefulSet test-seaas-busybox. Error: no terminations requested for v1.StatefulSet Failed to execute termination for v1.StatefulSet . Error: no terminations requested for v1.StatefulSet Feb 23, 2022
tgaudillat02 added a commit to tgaudillat02/kube-monkey that referenced this issue Dec 5, 2022
To kill at least one pod if the replicaSet is above 2 replicas.
It corrects this issue: asobti#227
worldtiki pushed a commit that referenced this issue Dec 6, 2022
* Add round for killNum

To kill at least one pod if the replicaSet is above 2 replicas.
It corrects this issue: #227
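For context on the referenced fix: with `random-max-percent`, kube-monkey derives the number of pods to kill from a random percentage (up to `kill-value`) of the running pods, and integer truncation of that product can yield zero, which then surfaces as the "no terminations requested" error above. A minimal sketch of truncation versus rounding; the function and variable names here are illustrative, not the actual kube-monkey source:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

// killNumTruncated mimics the pre-fix behavior: pick a random percentage
// up to maxPercent, then truncate the resulting pod count toward zero.
// With 2 running pods, any percentage below 50 truncates to 0 kills,
// which later fails as "no terminations requested".
func killNumTruncated(runningPods, maxPercent int) int {
	pct := rand.Intn(maxPercent + 1)
	return int(float64(runningPods) * float64(pct) / 100.0)
}

// killNumRounded mimics the fix ("Add round for killNum"): rounding means
// e.g. 2 pods * 40% = 0.8 becomes 1 termination instead of 0.
func killNumRounded(runningPods, maxPercent int) int {
	pct := rand.Intn(maxPercent + 1)
	return int(math.Round(float64(runningPods) * float64(pct) / 100.0))
}

func main() {
	// Compare the two behaviors for a 2-replica StatefulSet.
	for i := 0; i < 5; i++ {
		fmt.Println(killNumTruncated(2, 100), killNumRounded(2, 100))
	}
}
```

This would also explain why the failing victim appears to change between runs: the percentage is drawn randomly each run, so whichever workload draws a low percentage that run truncates to zero terminations.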