
Kill random openshift running pods #16

Merged
merged 1 commit on Jul 7, 2020

Conversation

paigerube14
Collaborator

Adding a scenario to kill a random openshift running pod from all namespaces for #13

Based on the bash script Mike had said to base this scenario on, there was a 2-second wait between kills. Not sure if that is still necessary here; it can definitely be removed.

@rht-perf-ci

Can one of the admins verify this patch?

@mffiedler
Collaborator

I'm not quite familiar with kraken/powerfulseal pod killing yet. The scenario I have in mind (open for discussion of course) is something that might be a bit more random than the current commit. On each Kraken iteration, pick 1-3 pods from openshift-* namespaces and kill them. Or even kill them with a probability < 1. The idea would be to make random stuff happen in the background while other kraken/cerberus/scale-ci tests are going.

@paigerube14 @chaitanyaenr @yashashreesuresh Thoughts?
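For illustration, here is a rough sketch of how that idea might look in a powerfulseal-style scenario. The field names (loopsNumber, randomSample, probability) are taken from the discussion below; the exact schema depends on the powerfulseal version Kraken pins, so treat this as a sketch rather than the committed config.

```yaml
# Sketch only: on each run, pick up to 3 random pods from openshift-* namespaces
# and kill each selected pod with 50% probability.
config:
  loopsNumber: 1            # one pass per Kraken iteration
scenarios:
- name: "kill random pods in openshift namespaces"
  match:
  - namespace:
      name: "openshift-*"   # assumes namespace matching accepts a regex (see the later comments)
  filters:
  - randomSample:
      size: 3               # "pick 1-3 pods"
  actions:
  - kill:
      probability: 0.5      # "kill them with a probability < 1"
      force: true
```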

@chaitanyaenr
Collaborator

@mffiedler Random pod killing in openshift namespaces makes sense. The loop number is not set in the config added by this commit, so it will run in the background until we stop it. @paigerube14 we just need to modify the config to filter pods in openshift-* instead of *.

Collaborator

@mffiedler left a comment


LGTM as a starting point for this scenario. Can tweak in the future if needed.

@paigerube14
Collaborator Author

@chaitanyaenr @mffiedler With the continuous-iterations issue also in progress, should I add the loopsNumber to the top of the yaml scenario so that only 1 random pod gets killed each iteration?

@chaitanyaenr
Collaborator

@paigerube14 The randomSample field determines the number of pods to kill in each loop, so if we set randomSample to 3 and loopsNumber to 10, it will kill 3 random pods 10 times. The advantage of the iterations setting is that it operates at a much higher level than loopsNumber in the config, meaning Kraken can loop through all the scenarios mentioned in the config n times.

I think it's better to set the defaults to minimal values, i.e. loopsNumber to 1 in the config, with a comment to remove it if we want to run it continuously, since users can tweak it to their needs when running Kraken.
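A minimal sketch of the proposed defaults, assuming the field names used in this thread (the exact keys depend on the powerfulseal version):

```yaml
config:
  loopsNumber: 1   # remove this line to run the scenario continuously

# ...and inside the scenario:
filters:
- randomSample:
    size: 1        # pods killed per loop; size: 3 with loopsNumber: 10 would kill 3 random pods on each of 10 loops
```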

```yaml
# property filters (all the property filters support regexp)
- property:
    name: "name"
    value: "openshift-*"
```
Collaborator


I might be wrong, but I think this will fetch pods in all namespaces and kill the pods whose names start with openshift-* (it will miss pods like apiserver, etcd, etc. since they don't have an openshift prefix), vs fetching pods in openshift-* namespaces and killing them. Thoughts?

Collaborator Author


I think you are correct that the current implementation only gets the pods from all namespaces that have "openshift-" in the name. I'm not sure which is the better implementation; I think the current one is what was described in the original issue, but I'm not sure whether pods in openshift-* namespaces would be better. @mffiedler thoughts?

Collaborator


I think the filter is being applied to the pods, meaning it will kill pods whose names start with openshift-* (we will miss apiserver, etcd, and other pods since they don't have an openshift prefix in their name). We might want to apply the filters to the namespaces instead, meaning it should kill the pods running in the namespaces matched by the regex, i.e. openshift-* in this case. The current config may already be doing that and there's a high possibility that I missed it :-)
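To make the distinction concrete, here is a sketch of the two placements being discussed; the matcher syntax is assumed from the snippet above and powerfulseal's policy format:

```yaml
# As currently written: the property filter is applied to the pod *name*,
# so pods like apiserver or etcd (no "openshift-" prefix) are never selected.
filters:
- property:
    name: "name"
    value: "openshift-*"

# Suggested alternative: match on the *namespace*, so every pod running in an
# openshift-* namespace is a candidate regardless of its own name.
match:
- namespace:
    name: "openshift-*"
```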

@paigerube14
Collaborator Author

To properly get the scenario Ravi outlined, we need to be able to match namespaces on a regex string. This requires adding code to powerfulseal, for which I have opened powerfulseal/powerfulseal#274.

@mffiedler
Collaborator

> To properly get the scenario Ravi outlined, we need to be able to match namespaces on a regex string. This requires adding code to powerfulseal, for which I have opened bloomberg/powerfulseal#274.

@paigerube14 Can you take a look at the powerfulseal source and see if creating a PR for that project would be straightforward? It would be nice if we can contribute to that project.

@paigerube14
Collaborator Author

My code for regex namespaces in powerfulseal got merged, but I'm not sure how often the pip module gets updated. We should probably wait for that before properly testing this scenario.

@paigerube14
Collaborator Author

@mffiedler @chaitanyaenr @yashashreesuresh Please have a look at this pull request when you get a chance. I changed the pip install to use the master branch of powerfulseal to get the regex namespace changes I had made.
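For reference, pulling powerfulseal from its master branch via pip would look roughly like the requirements line below; the exact repository URL and fragment are assumptions, not taken from this PR:

```
# requirements.txt sketch: install powerfulseal from master instead of a pinned PyPI release
git+https://github.com/powerfulseal/powerfulseal.git@master#egg=powerfulseal
```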

Collaborator

@chaitanyaenr left a comment


LGTM. Tested the PR with help from @paigerube14 and it worked as expected.
