Fix maxWait value to match the newest version of xk6-kafka
Use the TestRun kubernetes resource instead of K6, since the latter will be deprecated
Add max retries on connection status fetching (CONNECTION_OPEN_MAX_RETRIES), after which the test aborts execution
Add a Service for mmock
Use better naming for some env variables
Add a sample helm values.yaml for ditto
Add the --quiet false option for full logging when running the k6 test inside kubernetes
Remove the AUTH_CONTEXT env var; its value is equal to DITTO_PRE_AUTHENTICATED_HEADER_VALUE

Signed-off-by: Vasil Vasilev <vasil.vasilev@bosch.com>
vvasilevbosch committed Dec 22, 2023
1 parent ee9555d commit b5d7a3f
Showing 17 changed files with 219 additions and 364 deletions.
36 changes: 21 additions & 15 deletions benchmark-tool/README.md
@@ -18,7 +18,7 @@ Also, there is a special scenario called **WARMUP**, which is used to warmup the

# Getting started:

## K6 is configurable via environment variables and the following must be set in order to run the test (sample variables in the [test-local.env](https://github.com/eclipse-ditto/ditto/blob/master/benchmark-tool/test-local.env) file):

## K6 test related

@@ -36,17 +36,19 @@ Also, there is a special scenario called **WARMUP**, which is used to warmup the
| KAFKA_CONSUMER_LOGGER_ENABLED | K6 kafka consumer logger enabled (0/1) |
| CREATE_DITTO_CONNECTIONS | If the test should create the Ditto connections needed for the scenarios, before executing the scenarios |
| DELETE_DITTO_CONNECTIONS | If the test should delete the Ditto connections needed for the scenarios, after executing the scenarios |
| CONNECTION_OPEN_MAX_RETRIES | Maximum number of times the connection status is fetched to check if it is open. If the connection is still not open after that, the test aborts. |
| SCENARIOS_TO_RUN | Array of scenario names that should run; available options are: WARMUP, DEVICE_LIVE_MESSAGES, SEARCH_THINGS, READ_THINGS, MODIFY_THINGS |
| CREATE_THINGS_LOG_REMAINING | Log the remaining things that need to be created. Useful for debugging purposes |
| THINGS_WARMUP_BATCH_SIZE | Max number of simultaneous connections of a k6 http.batch() call, which is used for warming up things |
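As a sketch, a fragment of such an env file could look like the following (variable names are taken from the table above; all values are illustrative, and the exact list format expected for SCENARIOS_TO_RUN may differ):

```bash
# Illustrative env file fragment for the k6 test; values are examples only.
cat > /tmp/k6-benchmark.env <<'EOF'
CREATE_DITTO_CONNECTIONS=1
DELETE_DITTO_CONNECTIONS=1
CONNECTION_OPEN_MAX_RETRIES=10
KAFKA_CONSUMER_LOGGER_ENABLED=0
SCENARIOS_TO_RUN=WARMUP,READ_THINGS
CREATE_THINGS_LOG_REMAINING=0
THINGS_WARMUP_BATCH_SIZE=100
EOF

# Export everything from the file into the current shell:
set -a
. /tmp/k6-benchmark.env
set +a

echo "$CONNECTION_OPEN_MAX_RETRIES"
```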

## Ditto related

| Name | Description |
| ------------------------------- | ---------------------------------- |
| DITO_API_URI | Ditto API URL |
| DITTO_DEVOPS_AUTH_HEADER | Devops user authorization header name |
| DITTO_DEVOPS_AUTH_HEADER_VALUE | Devops user authorization header value |
| DITTO_PRE_AUTHENTICATED_HEADER_VALUE | Value of the Ditto x-ditto-pre-authenticated header, see https://eclipse.dev/ditto/installation-operating.html#pre-authentication |

## Kafka related

@@ -62,7 +64,6 @@ Also, there is a special scenario called **WARMUP**, which is used to warmup the
| ------------------- | -------------------------------------------------------------------------------------------- |
| WARMUP_MAX_DURATION | The maximum duration of the warmup scenario, after which the scenario will be forcefully stopped |
| WARMUP_START_TIME | Time offset since the start of the test, at which point this scenario should begin execution |

###### Every other scenario has the same config variables, named by prefixing the variable name with the scenario name, e.g. SEARCH_THINGS_DURATION
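As an illustration of that naming scheme, this shell sketch builds a scenario-scoped variable name from a scenario and a base variable name (the value `10m` is made up):

```bash
# Combine a scenario name and a base variable name (illustrative values).
SCENARIO="SEARCH_THINGS"
BASE="MAX_DURATION"
VAR_NAME="${SCENARIO}_${BASE}"

# Set the derived variable and read it back via eval:
export "$VAR_NAME=10m"
eval "VALUE=\$$VAR_NAME"
echo "$VAR_NAME=$VALUE"   # prints SEARCH_THINGS_MAX_DURATION=10m
```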

@@ -291,7 +292,7 @@ The kafka 'target' connection looks like the following:

## Running the test

### Running the test locally

Prerequisites:

@@ -303,7 +304,8 @@

- xk6 kafka extension binary

Change the test.env values to match your local setup.
Then export all the environment variables needed for the test:

```bash
set -a
```

@@ -318,15 +320,16 @@

```bash
${xk6-kafka-bin} run test/k6-test.js
```

Logs and results are printed to the terminal's standard output.

### Running the test inside kubernetes cluster

Prerequisites:

- Running kubernetes cluster

- Running kafka cluster with topic deletion enabled

- Running ditto inside the cluster, using the ditto helm chart https://github.com/eclipse-ditto/ditto/tree/master/deployment/helm/ditto (sample values in ditto-helm-values.yaml)
- devops security must be disabled for now

- Deploy the k6 operator: [GitHub - grafana/k6-operator: An operator for running distributed k6 tests.](https://github.com/grafana/k6-operator)

@@ -342,9 +345,7 @@ Needed kubernetes resources lie inside the kubernetes directory.

- **k6-test-configmap-cr.yaml** - custom k6 resource; includes all env variables needed for the test, which are inside the test.env file

- **mmock.yaml** - Pod and Service definition for monster mock

The K6 custom resource gets the test source code from a config map, which must be created:

@@ -355,7 +356,7 @@

The K6 custom resource reads env variables from a config map that must be created:

```bash
kubectl create configmap k6-ditto-benchmark --from-env-file test.env
```

After all is set, create the k6 custom resource for the test:
@@ -365,3 +366,8 @@

```bash
kubectl create -f k6-ditto-benchmark-test.yaml
```
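The contents of k6-ditto-benchmark-test.yaml are not shown in this diff; as a rough, hypothetical sketch, a k6-operator TestRun resource generally looks like the following (the config map name, parallelism, and arguments here are illustrative, not taken from the repository):

```yaml
# Hypothetical TestRun sketch for the k6-operator; all values illustrative.
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-ditto-benchmark-test
spec:
  parallelism: 1
  # --quiet false enables full logging, as mentioned in the commit message
  arguments: --quiet false
  script:
    configMap:
      name: k6-ditto-benchmark-test   # config map holding the test source
      file: k6-test.js
```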

Logs of the k6 test can be inspected from the pod **k6-ditto-benchmark-test-1-xxxx**

After a test run completes, clean up the created test jobs by running `kubectl delete -f k6-ditto-benchmark-test.yaml` or `kubectl delete TestRun k6-ditto-benchmark-test`.
10 changes: 10 additions & 0 deletions benchmark-tool/ditto-helm-values.yaml
@@ -0,0 +1,10 @@
gateway:
  config:
    authentication:
      enablePreAuthentication: true
      devops:
        secured: false
swaggerui:
  enabled: false
dittoui:
  enabled: false
210 changes: 0 additions & 210 deletions benchmark-tool/kubernetes/README.md

This file was deleted.

