
Adding Renaissance Benchmark #51

Draft · wants to merge 77 commits into base: master
Showing changes from 63 of 77 commits
e1260ac
Create getmetrics-promql.sh
Prakalp23 Jul 15, 2022
fa68aab
Create getmetrics-promql.sh
Prakalp23 Jul 15, 2022
f9fc6f7
Delete getmetrics-promql.sh
Prakalp23 Jul 15, 2022
5d80c20
Rename getmetrics-promql.sh to getmetrics1-promql.sh
Prakalp23 Jul 15, 2022
115d4f7
Update getmetrics1-promql.sh
Prakalp23 Jul 15, 2022
0e41f94
Update getmetrics1-promql.sh
Prakalp23 Jul 16, 2022
d92ad51
Create renaissance-run.sh
Prakalp23 Jul 16, 2022
8ef4f37
Create renaissance-common.sh
Prakalp23 Jul 17, 2022
d773071
Update renaissance-run.sh
Prakalp23 Jul 17, 2022
eb1b05d
Create renaissance-cleanup.sh
Prakalp23 Jul 18, 2022
cc23294
Update renaissance-run.sh
Prakalp23 Jul 18, 2022
708638d
Create parsemetrics-wrk.sh
Prakalp23 Jul 18, 2022
754adae
Create common.sh
Prakalp23 Jul 18, 2022
5a47cf8
Create deploy.sh
Prakalp23 Jul 18, 2022
4ebd7d0
Update renaissance-run.sh
Prakalp23 Jul 18, 2022
7f3572c
Update renaissance-run.sh
Prakalp23 Jul 18, 2022
c52b5e0
Rename getmetrics1-promql.sh to getmetrics-promql.sh
Prakalp23 Jul 18, 2022
2bf007b
Create test
Prakalp23 Jul 18, 2022
22c7b33
Rename renaissance/common.sh to renaissance/scripts/common.sh
Prakalp23 Jul 18, 2022
02a68e8
Rename renaissance/deploy.sh to renaissance/scripts/deploy.sh
Prakalp23 Jul 18, 2022
8d44801
Update deploy.sh
Prakalp23 Jul 18, 2022
1d23b76
Rename renaissance/getmetrics-promql.sh to renaissance/scripts/getmet…
Prakalp23 Jul 18, 2022
1051fe3
Rename renaissance/parsemetrics-wrk.sh to renaissance/scripts/parseme…
Prakalp23 Jul 18, 2022
494b5b2
Rename renaissance/renaissance-cleanup.sh to renaissance/scripts/rena…
Prakalp23 Jul 18, 2022
243bef0
Rename renaissance/renaissance-common.sh to renaissance/scripts/renai…
Prakalp23 Jul 18, 2022
2ff505c
Rename renaissance/renaissance-run.sh to renaissance/scripts/renaissa…
Prakalp23 Jul 18, 2022
606349e
Create deployement.yaml
Prakalp23 Jul 18, 2022
b3686fd
Create config.yaml
Prakalp23 Jul 18, 2022
abc34c5
Delete test
Prakalp23 Jul 18, 2022
5014f5e
Update deploy.sh
Prakalp23 Jul 18, 2022
604ffd7
Update and rename deploy.sh to renaissance-deploy.sh
Prakalp23 Jul 18, 2022
c5da69a
Update renaissance-common.sh
Prakalp23 Jul 18, 2022
175bc4b
Update getmetrics-promql.sh
Prakalp23 Jul 18, 2022
eb0ec80
Update renaissance-run.sh
Prakalp23 Jul 18, 2022
db8c79d
Update renaissance-run.sh
Prakalp23 Jul 18, 2022
5642e99
Update renaissance-deploy.sh
Prakalp23 Jul 18, 2022
b52f955
Update renaissance-cleanup.sh
Prakalp23 Jul 18, 2022
02549fb
Update and rename deployement.yaml to renaissance.yaml
Prakalp23 Jul 18, 2022
5286e63
Create Dockerfile
Prakalp23 Jul 18, 2022
c61e0ff
Rename renaissance/manifests/config.yaml to renaissance/docker/config…
Prakalp23 Jul 18, 2022
d4b0955
Create service-monitor.yaml
Prakalp23 Jul 18, 2022
f1d4351
fixing issues
Prakalp23 Jul 18, 2022
b399d66
Rename renaissance/scripts/common.sh to renaissance/scripts/utils/com…
Prakalp23 Jul 19, 2022
30cb78d
Create parsemetrics-promql.sh
Prakalp23 Jul 19, 2022
17913f0
Update Dockerfile
Prakalp23 Jul 19, 2022
035f54e
Create readme
Prakalp23 Jul 19, 2022
a398e7f
Fixed some prometheus queries and errors
Prakalp23 Jul 19, 2022
efaa8f0
fixed issues
Prakalp23 Jul 20, 2022
add72fc
fixed issues
Prakalp23 Jul 20, 2022
6f2cf62
commiting changes made to parsemetrics-promql.sh
Prakalp23 Jul 22, 2022
da9fd9b
latest changes
Prakalp23 Jul 23, 2022
b90987c
changed few names in the script
Prakalp23 Jul 24, 2022
7df183c
errors fixed
Prakalp23 Jul 24, 2022
023ea32
changes made to parsemetrics-promql.sh
Prakalp23 Jul 25, 2022
aeba996
changes made to script
Prakalp23 Jul 25, 2022
6ea2a63
changes to scripts
Jul 27, 2022
d5246a0
changes to script
Prakalp23 Jul 27, 2022
3797bc8
changes to remove [] from results
Prakalp23 Jul 28, 2022
eb58112
fixed json format type
Prakalp23 Aug 17, 2022
9698bd0
Made changes to get the results in specific JSON format
Prakalp23 Aug 19, 2022
0e5ec5b
Updating the indentation of the scripts
Prakalp23 Sep 9, 2022
b96b527
Some more changes to the outline of the scripts
Prakalp23 Sep 9, 2022
df85794
Made changes to the copyright year in the scripts
Prakalp23 Sep 9, 2022
b7db1b6
Made the required changes to scripts
Prakalp23 Sep 9, 2022
3c1074d
Made changes to parsedatalog and parsememlog
Prakalp23 Sep 10, 2022
35dcef3
Made changes to the performance scripts
Prakalp23 Sep 12, 2022
86497f1
fixed the output format
Prakalp23 Sep 12, 2022
c718559
Add files via upload
Prakalp23 Sep 12, 2022
8815db4
Update Dockerfile
Prakalp23 Nov 14, 2022
a4d6982
Update Dockerfile
Prakalp23 Nov 14, 2022
e1ef0dc
Update Dockerfile
Prakalp23 Nov 14, 2022
3bee221
Update Dockerfile
Prakalp23 Jan 6, 2023
b02d9f3
Delete renaissance-cleanup.sh
Prakalp23 Jan 6, 2023
e608f62
Delete renaissance-common.sh
Prakalp23 Jan 6, 2023
ac914a4
Delete renaissance-deploy.sh
Prakalp23 Jan 6, 2023
a7b80e1
Add files via upload
Prakalp23 Jan 6, 2023
efe9258
Delete readme
Prakalp23 Jan 6, 2023
1 change: 1 addition & 0 deletions .gitignore
@@ -5,4 +5,5 @@ acmeair/acmeair/.metadata
acmeair/acmeair/.classpath
acmeair/acmeair/.project
acmeair/acmeair/.gradle
.idea/
acmeair/jmeter-driver/acmeair-jmeter/build
10 changes: 10 additions & 0 deletions renaissance/docker/Dockerfile
@@ -0,0 +1,10 @@
FROM eclipse-temurin:17.0.3_7-jre
WORKDIR /target
# Fetch the benchmark jar at build time; a separate COPY of the same jar from the build context would be redundant
RUN wget https://github.com/renaissance-benchmarks/renaissance/releases/download/v0.14.1/renaissance-gpl-0.14.1.jar
COPY jmx_prometheus_javaagent-0.17.0.jar jmx_prometheus_javaagent-0.17.0.jar
COPY config.yaml config.yaml
ENV JDK_JAVA_OPTIONS=
ENV BENCHMARK=all
ENV TIME_LIMIT=5
ENTRYPOINT java -javaagent:./jmx_prometheus_javaagent-0.17.0.jar=8080:config.yaml ${JDK_JAVA_OPTIONS} -jar /target/renaissance-gpl-0.14.1.jar -t ${TIME_LIMIT} --csv /output/renaissance-output.csv ${BENCHMARK}
2 changes: 2 additions & 0 deletions renaissance/docker/config.yaml
@@ -0,0 +1,2 @@
rules:
- pattern: ".*"
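For context, this `config.yaml` is the JMX Prometheus exporter configuration: the single catch-all rule exports every MBean attribute the agent can map, which is the simplest choice for a benchmark image. If scrape volume ever became a concern, a more selective rule set could look like the sketch below; the pattern and metric name here are illustrative, not taken from this PR:

```yaml
rules:
  # Illustrative, more selective rule (syntax per jmx_exporter's pattern format):
  # export only GC collection counts, labeled by collector name.
  - pattern: 'java.lang<type=GarbageCollector, name=(.+)><>CollectionCount'
    name: jvm_gc_collection_count
    labels:
      gc: "$1"
```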
54 changes: 54 additions & 0 deletions renaissance/manifests/renaissance.yaml
@@ -0,0 +1,54 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: renaissance-sample
  labels:
    app: renaissance-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: renaissance-deployment
  template:
    metadata:
      labels:
        name: renaissance-deployment
        app: renaissance-deployment
        # Add label to the application which is used by kruize/autotune to monitor it
        app.kubernetes.io/name: "renaissance-deployment"
        app.kubernetes.io/layer: "hotspot"
        version: v1
    spec:
      volumes:
        - name: test-volume
          emptyDir: {}
      containers:
        - name: renaissance-server
          image: prakalp23/renaissance1041:latest
          imagePullPolicy: IfNotPresent
          # env, requests and limits are left empty here; the run scripts fill them in
          env:
          ports:
            - containerPort: 8080
          resources:
            requests:
            limits:
          volumeMounts:
            - name: "test-volume"
              mountPath: "/opt/jLogs"
---
apiVersion: v1
kind: Service
metadata:
  name: renaissance-service
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
  labels:
    app: renaissance-app
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      name: renaissance-port
  selector:
    name: renaissance-deployment
13 changes: 13 additions & 0 deletions renaissance/manifests/service-monitor.yaml
@@ -0,0 +1,13 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: renaissance
  labels:
    team: renaissance-frontend
spec:
  selector:
    matchLabels:
      app: renaissance-app
  endpoints:
    - port: renaissance-port
      path: '/metrics'
2 changes: 2 additions & 0 deletions renaissance/readme
@@ -0,0 +1,2 @@
# To run the benchmark
./renaissance-run.sh --clustertype=minikube -s localhost -e ./results -w 1 -m 1 -i 1 --iter=1 -r -n default --cpureq=1.5 --memreq=3152M --cpulim=1.5 --memlim=3152M -b "page-rank" -d 60
160 changes: 160 additions & 0 deletions renaissance/scripts/parsemetrics-wrk.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,160 @@
#!/bin/bash
#
# Copyright (c) 2020, 2021, 2022 IBM Corporation, Red Hat and others.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
### Script to parse hyperfoil/wrk2 data ###

CURRENT_DIR="$(dirname "$(realpath "$0")")"
source ${CURRENT_DIR}/../common.sh
#source ${CURRENT_DIR}/../renaissance-common.sh
#APP_NAME="tfb-qrh"

# Parse the result log files
# Input: type of run (warmup|measure), total number of runs, iteration number
# Output: Throughput log file (throughput, response time, total memory and cpu used by the pod, cluster memory and cpu usage in percent, and web errors if any)
function parseData() {
TYPE=$1
TOTAL_RUNS=$2
ITR=$3

for (( run=0 ; run<"${TOTAL_RUNS}" ;run++))
do
thrp_sum=0
resp_sum=0
wer_sum=0
responsetime=0
max_responsetime=0
stddev_responsetime=0
if [[ ${CLUSTER_TYPE} == "openshift" ]]; then
SVC_APIS=($(oc status --namespace=${NAMESPACE} | grep "${APP_NAME}" | grep port | cut -d " " -f1 | cut -d "/" -f3))
elif [[ ${CLUSTER_TYPE} == "minikube" ]]; then
SVC_APIS=($(${K_EXEC} get svc --namespace=${NAMESPACE} | grep "${APP_NAME}" | cut -d " " -f1))
fi
for svc_api in "${SVC_APIS[@]}"
do
throughput=0
responsetime=0

RESULT_LOG=${RESULTS_DIR_P}/wrk-${svc_api}-${TYPE}-${run}.log
throughput=`cat ${RESULT_LOG} | grep "Requests" | cut -d ":" -f2 `
responsetime=`cat ${RESULT_LOG} | grep "Latency:" | cut -d ":" -f2 | tr -s " " | cut -d " " -f2 `
max_responsetime=`cat ${RESULT_LOG} | grep "Latency:" | cut -d ":" -f2 | tr -s " " | cut -d " " -f6 `
stddev_responsetime=`cat ${RESULT_LOG} | grep "Latency:" | cut -d ":" -f2 | tr -s " " | cut -d " " -f4 `
isms_responsetime=`cat ${RESULT_LOG} | grep "Latency:" | cut -d ":" -f2 | tr -s " " | cut -d " " -f3 `
isms_max_responsetime=`cat ${RESULT_LOG} | grep "Latency:" | cut -d ":" -f2 | tr -s " " | cut -d " " -f7 `
isms_stddev_responsetime=`cat ${RESULT_LOG} | grep "Latency:" | cut -d ":" -f2 | tr -s " " | cut -d " " -f5 `
if [ "${isms_responsetime}" == "s" ]; then
responsetime=$(echo ${responsetime}*1000 | bc -l)
elif [ "${isms_responsetime}" != "ms" ]; then
responsetime=$(echo ${responsetime}/1000 | bc -l)
fi

if [ "${isms_max_responsetime}" == "s" ]; then
max_responsetime=$(echo ${max_responsetime}*1000 | bc -l)
elif [ "${isms_max_responsetime}" != "ms" ]; then
max_responsetime=$(echo ${max_responsetime}/1000 | bc -l)
fi

if [ "${isms_stddev_responsetime}" == "s" ]; then
stddev_responsetime=$(echo ${stddev_responsetime}*1000 | bc -l)
elif [ "${isms_stddev_responsetime}" != "ms" ]; then
stddev_responsetime=$(echo ${stddev_responsetime}/1000 | bc -l)
fi

weberrors=`cat ${RESULT_LOG} | grep "Non-2xx" | cut -d ":" -f2`
if [ ! -z ${throughput} ]; then
thrp_sum=$(echo ${thrp_sum}+${throughput} | bc -l)
fi
if [ ! -z ${responsetime} ]; then
resp_sum=$(echo ${resp_sum}+${responsetime} | bc -l)
fi
if [ "${weberrors}" != "" ]; then
wer_sum=`expr ${wer_sum} + ${weberrors}`
fi
# NOTE: total_weberror_avg is only computed later, in parseResults; use the per-run error sum here
if [[ ${wer_sum} -ge 500 ]]; then
echo "1 , 99999 , 99999 , 99999 , 99999 , 99999 , 999999 , 99999 , 99999 , 99999 , 99999 , 99999 , 99999 , 99999 , 99999 , 99999 , 99999" >> ${RESULTS_DIR_J}/../Metrics-prom.log
echo ", 99999 , 99999 , 99999 , 99999 , 9999 , 0 , 0" >> ${RESULTS_DIR_J}/../Metrics-wrk.log
fi
done
echo "${run},${thrp_sum},${resp_sum},${wer_sum},${max_responsetime},${stddev_responsetime}" >> ${RESULTS_DIR_J}/Throughput-${TYPE}-${itr}.log
echo "${run} , ${CPU_REQ} , ${MEM_REQ} , ${CPU_LIM} , ${MEM_LIM} , ${thrp_sum} , ${responsetime} , ${wer_sum} , ${max_responsetime} , ${stddev_responsetime}" >> ${RESULTS_DIR_J}/Throughput-${TYPE}-raw.log
done
}

# Parse the results of wrk load for each instance of application
# input: total number of iterations, result directory, Total number of instances
# output: Parse the results and generate the Metrics log files
function parseResults() {
TOTAL_ITR=$1
RESULTS_DIR_J=$2
SCALE=$3
for (( itr=0 ; itr<"${TOTAL_ITR}" ;itr++))
do
RESULTS_DIR_P=${RESULTS_DIR_J}/ITR-${itr}
parseData warmup ${WARMUPS} ${itr}
parseData measure ${MEASURES} ${itr}
# Calculate average and median of throughput, memory and CPU scores
cat ${RESULTS_DIR_J}/Throughput-measure-${itr}.log | cut -d "," -f2 >> ${RESULTS_DIR_J}/throughput-measure-temp.log
cat ${RESULTS_DIR_J}/Throughput-measure-${itr}.log | cut -d "," -f3 >> ${RESULTS_DIR_J}/responsetime-measure-temp.log
cat ${RESULTS_DIR_J}/Throughput-measure-${itr}.log | cut -d "," -f4 >> ${RESULTS_DIR_J}/weberror-measure-temp.log
cat ${RESULTS_DIR_J}/Throughput-measure-${itr}.log | cut -d "," -f5 >> ${RESULTS_DIR_J}/responsetime_max-measure-temp.log
cat ${RESULTS_DIR_J}/Throughput-measure-${itr}.log | cut -d "," -f6 >> ${RESULTS_DIR_J}/stdev_resptime-measure-temp.log
done
###### Add different raw logs we want to merge
#Cumulative raw data
paste ${RESULTS_DIR_J}/Throughput-measure-raw.log >> ${RESULTS_DIR_J}/../Metrics-raw.log

for metric in "${THROUGHPUT_LOGS[@]}"
do
if [ ${metric} == "cpu_min" ] || [ ${metric} == "mem_min" ]; then
minval=$(echo `calcMin ${RESULTS_DIR_J}/${metric}-measure-temp.log`)
eval total_${metric}=${minval}
elif [ ${metric} == "cpu_max" ] || [ ${metric} == "mem_max" ] || [ ${metric} == "responsetime_max" ]; then
maxval=$(echo `calcMax ${RESULTS_DIR_J}/${metric}-measure-temp.log`)
eval total_${metric}=${maxval}
else
val=$(echo `calcAvg ${RESULTS_DIR_J}/${metric}-measure-temp.log | cut -d "=" -f2`)
eval total_${metric}_avg=${val}
fi
metric_ci=`php ${SCRIPT_REPO}/perf/ci.php ${RESULTS_DIR_J}/${metric}-measure-temp.log`
eval ci_${metric}=${metric_ci}
done

## TODO Check for web-errors and update responsetime based on that
if [ "${total_weberror_avg}" != "0" ]; then
echo "There are web_errors during the load run. For more details check in the results directory mentioned in setup.log"
fi
echo ", ${total_throughput_avg} , ${total_responsetime_avg} , ${total_responsetime_max} , ${total_stdev_resptime_avg} , ${total_weberror_avg} , ${ci_throughput} , ${ci_responsetime}" >> ${RESULTS_DIR_J}/../Metrics-wrk.log
}

THROUGHPUT_LOGS=(throughput responsetime weberror responsetime_max stdev_resptime)

TOTAL_ITR=$1
RESULTS_SC=$2
SCALE=$3
WARMUPS=$4
MEASURES=$5
NAMESPACE=$6
SCRIPT_REPO=$7
CLUSTER_TYPE=$8
APP_NAME=$9

if [[ ${CLUSTER_TYPE} == "openshift" ]]; then
K_EXEC="oc"
elif [[ ${CLUSTER_TYPE} == "minikube" ]]; then
K_EXEC="kubectl"
fi

parseResults ${TOTAL_ITR} ${RESULTS_SC} ${SCALE} ${WARMUPS} ${MEASURES} ${NAMESPACE} ${SCRIPT_REPO} ${CLUSTER_TYPE} ${APP_NAME}