init scale test framework #413

Merged 4 commits on Sep 23, 2022

Changes from 2 commits
22 changes: 22 additions & 0 deletions tests/scalability/scale-test.sh
@@ -0,0 +1,22 @@
#!/bin/bash

source ./util.sh

GSB_NAME=gameserverbuild-sample-openarena

echo "test 1: scale up to 16 servers from 1 standby server"
kubectl apply -f ./standby/1.yaml
Collaborator: These kubectl apply commands are not valid anymore, right?

Contributor Author: It creates the GSB with 1 initial standby and a max of 16. I think we still need it.

Collaborator: Oh, sorry, the files were not visible and I thought they had been deleted. Do we need all three of them now?

Contributor Author: I think we need them, but they shouldn't differ only in the initial standby count. What we actually need to test is the scale-up time for different max values.

Collaborator: max is currently the same (16) in all three files. Plus, we can send a JSON patch via kubectl and change it dynamically if needed. Not a big deal TBH; I wouldn't block this PR for it.
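A dynamic change like that could look roughly like this (a sketch only; the value 32 is arbitrary, and the path assumes max sits at .spec.max, as in the manifests below):

```bash
# Sketch: bump max on the running GameServerBuild via a JSON patch, without editing the YAML files
kubectl patch gsb gameserverbuild-sample-openarena --type=json \
  -p='[{"op": "replace", "path": "/spec/max", "value": 32}]'
```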

Contributor Author: Updated the test cases to use max as the variant.
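For illustration, a max-variant test case could follow the same shape as the existing ones, just pointing at a manifest with a different max (a sketch, not part of this diff; ./standby/max-32.yaml and the 32 target are hypothetical):

```bash
echo "test: scale up to 32 servers from 1 standby server"
kubectl apply -f ./standby/max-32.yaml   # hypothetical manifest, identical to 1.yaml but with max: 32
scale_up $GSB_NAME 32
scale_clear
kubectl delete gsb gameserverbuild-sample-openarena
```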

scale_up $GSB_NAME 16
kubectl delete gsb gameserverbuild-sample-openarena
Collaborator: Do we need to delete the build before scaling? We can downscale to zero if needed.

Contributor Author: We can scale down to 0. Which one is more appropriate here?

Collaborator: Let's scale to zero. Any chance we can wait until the actual standingBy count is zero?

Contributor Author: Done.
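For context, the scale-to-zero-and-wait step could look something like this in util.sh terms (a sketch of the idea, not the actual follow-up commit; the scale_down_and_wait name is made up here):

```bash
# Hypothetical helper: scale the GameServerBuild to zero and wait until no StandingBy servers remain
function scale_down_and_wait() {
  gsb_name=$1
  kubectl scale gsb $gsb_name --replicas 0
  count=1
  while [ "$count" != "0" ]; do
    count=$(kubectl get gs -o=jsonpath='{range .items[?(@.status.state=="StandingBy")]}{.metadata.name}{" "}{end}' | wc -w | xargs)
    sleep 1
  done
}
```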


echo "test 2: scale up to 16 servers from 4 standby server"
kubectl apply -f ./standby/4.yaml
scale_up $GSB_NAME 16
scale_clear
kubectl delete gsb gameserverbuild-sample-openarena

echo "test 3: scale up to 16 servers from 16 standby server"
kubectl apply -f ./standby/16.yaml
scale_up $GSB_NAME 16
scale_clear
kubectl delete gsb gameserverbuild-sample-openarena
27 changes: 27 additions & 0 deletions tests/scalability/standby/1.yaml
@@ -0,0 +1,27 @@
apiVersion: mps.playfab.com/v1alpha1
kind: GameServerBuild
metadata:
  name: gameserverbuild-sample-openarena
spec:
  titleID: "1E04" # required
  buildID: "85ffe8da-c82f-4035-86c5-9d2b5f42d6f7" # must be a GUID
  standingBy: 1 # required
  max: 16 # required
  portsToExpose:
    - 27960
  template:
    spec:
      containers:
        - image: ghcr.io/playfab/thundernetes-openarena:0.5.0
          name: thundernetes-sample-openarena
          ports:
            - containerPort: 27960 # your game server port
              protocol: UDP # your game server port protocol
              name: gameport # required field
          resources:
            requests:
              cpu: 100m
              memory: 500Mi
            limits:
              cpu: 100m
              memory: 500Mi
28 changes: 28 additions & 0 deletions tests/scalability/standby/16.yaml
@@ -0,0 +1,28 @@

apiVersion: mps.playfab.com/v1alpha1
kind: GameServerBuild
metadata:
  name: gameserverbuild-sample-openarena
spec:
  titleID: "1E04" # required
  buildID: "85ffe8da-c82f-4035-86c5-9d2b5f42d6f7" # must be a GUID
  standingBy: 16 # required
  max: 16 # required
  portsToExpose:
    - 27960
  template:
    spec:
      containers:
        - image: ghcr.io/playfab/thundernetes-openarena:0.5.0
          name: thundernetes-sample-openarena
          ports:
            - containerPort: 27960 # your game server port
              protocol: UDP # your game server port protocol
              name: gameport # required field
          resources:
            requests:
              cpu: 100m
              memory: 500Mi
            limits:
              cpu: 100m
              memory: 500Mi
28 changes: 28 additions & 0 deletions tests/scalability/standby/4.yaml
@@ -0,0 +1,28 @@

apiVersion: mps.playfab.com/v1alpha1
kind: GameServerBuild
metadata:
  name: gameserverbuild-sample-openarena
spec:
  titleID: "1E04" # required
  buildID: "85ffe8da-c82f-4035-86c5-9d2b5f42d6f7" # must be a GUID
  standingBy: 4 # required
  max: 16 # required
  portsToExpose:
    - 27960
  template:
    spec:
      containers:
        - image: ghcr.io/playfab/thundernetes-openarena:0.5.0
          name: thundernetes-sample-openarena
          ports:
            - containerPort: 27960 # your game server port
              protocol: UDP # your game server port protocol
              name: gameport # required field
          resources:
            requests:
              cpu: 100m
              memory: 500Mi
            limits:
              cpu: 100m
              memory: 500Mi
51 changes: 51 additions & 0 deletions tests/scalability/util.sh
@@ -0,0 +1,51 @@
#!/bin/bash

function scale_up_with_api() {
  st=$(date +%s)
  buildID=$1
  replicas=$2

  # external IP of the thundernetes controller service, which serves the allocation API on port 5000
  IP=$(kubectl get svc -n thundernetes-system thundernetes-controller-manager -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  echo build ID: $buildID
  for i in $(seq 1 $replicas); do
    session=$(uuidgen)
    ret=500
    # retry the allocation call while the API returns a status code above 400
    while [ "$ret" -gt 400 ]; do
      ret=$(curl -s -o /dev/null -w "%{http_code}" -H 'Content-Type: application/json' -d "{\"buildID\":\"${buildID}\",\"sessionID\":\"${session}\"}" http://${IP}:5000/api/v1/allocate)
    done
    echo

    echo up $i - $session
  done
  et=$(date +%s)

  echo "Scale up time: $((et-st))s"
}
echo "Added function scale_up_with_api(buildID, replicas)"

function scale_up() {
  st=$(date +%s)

  gsb_name=$1
  replicas=$2

  kubectl scale gsb $gsb_name --replicas $replicas

  # poll until the number of StandingBy game servers matches the requested replica count
  count=0
  echo
  while [ $count != $replicas ]; do
    count=$(kubectl get gs -o=jsonpath='{range .items[?(@.status.state=="StandingBy")]}{.metadata.name}{" "}{end}' | wc -w | xargs)
    echo -e -n "\rScaled up: $count/$replicas"
    sleep 1
  done
  et=$(date +%s)

  echo -e "\nScale up time: $((et-st))s"
}
echo "Added function scale_up(gsb_name, replicas)"

function scale_clear() {
  # delete every Active game server so the next test starts from a clean state
  kubectl get gs -o=jsonpath='{range .items[?(@.status.state=="Active")]}{.metadata.name}{"\n"}{end}' | xargs -I {} kubectl delete gs {}
}
echo "Added function scale_clear()"