The purpose of this repository is to provide real, working examples for developing and testing code. It contains a complete local environment in which you can run your application with a minimal set of installed tools:
- Java 17
- Spring Boot
- Gradle
- React
- TypeScript
- Nightwatch
- Postgres
- Docker
- Docker Compose
- Selenium Grid
- Sonar
- Grafana
- Prometheus
- sitespeed.io
- JMeter
- Keycloak
- LocalStack S3
- AWS CLI (Docker)
The following sections describe the actions that can be performed; more details can be found in each project's README. For local development, check the README of the relevant project and install the appropriate requirements:
- Docker https://docs.docker.com/install/ and Docker compose https://docs.docker.com/compose/install/
- Ensure you are sharing the drive where you clone the project https://docs.docker.com/docker-for-windows/#resources
- Bash support for Windows https://gitforwindows.org/
- VNC viewer (RealVNC) for accessing a Selenium Grid node while tests are executed.
- Configure VNC: Server -> localhost:5901, Password -> secret, Name -> Chrome:5901
sample-realm json file
#### Shell
Navigate to the instance shell and execute the following to set up the realm and client:

```shell
cd /opt/jboss/keycloak/bin \
  && ./kcadm.sh config credentials --server http://localhost:6180/auth --realm master --user admin --password admin \
  && ./kcadm.sh create realms -s realm=realm-sample -s enabled=true -o \
  && ./kcadm.sh create -x "client-scopes" -r realm-sample -s name=user -s protocol=openid-connect \
  && ./kcadm.sh create clients -r realm-sample -s clientId=sample-client -s enabled=true -s publicClient="true" -s directAccessGrantsEnabled="true" -s 'webOrigins=["*"]' -s 'redirectUris=["*"]' -s 'defaultClientScopes=["user", "web-origins", "profile", "roles", "email"]'
```
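Since `sample-client` is created with `directAccessGrantsEnabled=true`, you can sanity-check the realm with a password-grant token request. A minimal sketch, assuming Keycloak is reachable on localhost:6180; the user credentials are placeholders for a user you create in the realm first:

```shell
# Token endpoint of the realm created above (Keycloak assumed on localhost:6180).
KEYCLOAK_URL="http://localhost:6180/auth"
REALM="realm-sample"
TOKEN_ENDPOINT="$KEYCLOAK_URL/realms/$REALM/protocol/openid-connect/token"

# Direct access (password) grant; username/password are placeholders.
curl -s -X POST "$TOKEN_ENDPOINT" \
  -d "client_id=sample-client" \
  -d "grant_type=password" \
  -d "username=test-user" \
  -d "password=test-pass" \
  || echo "Keycloak is not reachable"
```

A successful call returns a JSON body containing an `access_token` field.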
Add a manual client mapper from `id` to `user_id`. The scripts are not destructive (they only read data from the database), so you can run them multiple times.
- Example connection: host `redis-sample`, port `6379`
If you want to back up a volume (because `restart.sh` restores your volume on each run), run the backup script:

```shell
./volume_backup.sh C:/Projects/IT-Labs/backyard
```
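For reference, a volume backup along these lines can be done with a throwaway container that tars the volume content. This is a hypothetical sketch, not the actual contents of `volume_backup.sh`, and the volume name `api-postgres-data` is an assumption:

```shell
# Hypothetical sketch of a docker volume backup (volume name is an assumption).
VOLUME="api-postgres-data"
DEST="${1:-./backups}"
ARCHIVE="$VOLUME-$(date +%Y%m%d).tar.gz"

mkdir -p "$DEST"
# Mount the volume read-only and tar it into the destination directory.
docker run --rm \
  -v "$VOLUME":/volume:ro \
  -v "$(pwd)/$DEST":/backup \
  alpine tar czf "/backup/$ARCHIVE" -C /volume . \
  || echo "docker is not available; skipping backup"
echo "backup target: $DEST/$ARCHIVE"
```

Invoked as `sh backup_sketch.sh ./backups`, it would write a dated archive under the given folder.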
Run some of the infrastructure services with Docker:

```shell
docker-compose -f "docker-compose-infrastructure.yml" up -d --build api-postgres redis-sample redis-insight
```
- Download JMeter
- Extract it and run `jmeter.bat` (Windows) or `jmeter.sh` (Linux/macOS)
- Open the existing `.jmx` files, or create a new one in the same location
- This mode uses Docker to run the tests; outputs can be found in the output location
- Tune the test `-J` parameters in [jmeter.sh](jmeter.sh)
- Run [jmeter.sh](jmeter.sh)
- HTML reports can be found in the output location
- Stats are sent to Graphite using the Backend Listener
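The `-J` flags mentioned above set JMeter properties from the command line; the same mechanism feeds values to the test plan and the Graphite Backend Listener. A non-GUI run sketch, where the test plan path, Graphite host, and the `users`/`rampUp` property names are assumptions:

```shell
# Non-GUI JMeter run; -J sets JMeter properties that the test plan
# and the Graphite Backend Listener can read. Values are placeholders.
TEST_PLAN="performance/sample.jmx"
RESULTS="results.jtl"
GRAPHITE_HOST="localhost"

jmeter -n \
  -t "$TEST_PLAN" \
  -l "$RESULTS" \
  -JgraphiteHost="$GRAPHITE_HOST" \
  -Jusers=10 -JrampUp=30 \
  || echo "jmeter is not installed or the test plan is missing"
```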
- Grafana (Coming soon)
- Ensure the api-postgres container is running (it is used by Grafana to store credentials and dashboards)
- Run the monitoring docker compose:

```shell
docker-compose -f "docker-compose-monitoring.yml" up -d
```

- Navigate to http://localhost:9092/, credentials admin/admin
Restart the test environment: `performance_test.sh path\to\repository\metrics` (NOTE: this path is required because it is mounted as the volume for results and contains the URLs used for the test run)

Example:

```shell
./performance_test.sh C:/Projects/IT-Labs/backyard/metrics
```
- Open VNC before running the tests
- Open the generated report JSON (`fe\e2e_tests\reports\cucumber.json`) or the generated HTML report (`fe\e2e_tests\reports\test*******.html`)
- Open the exported sitespeed.io folder
- Open http://localhost:9092/dashboards
- https://www.sitespeed.io/documentation/sitespeed.io/configuration/
- https://www.sitespeed.io/documentation/sitespeed.io/lighthouse/
- https://www.sitespeed.io/documentation/sitespeed.io/performance-dashboard/#up-and-running-in-almost-5-minutes
- https://grafana.com/grafana/dashboards/10288
- https://github.com/sitespeedio/grafana-bootstrap-docker/tree/main/dashboards/graphite
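Putting the links above together, a typical dockerized sitespeed.io run that pushes metrics to Graphite looks roughly like this (the target URL and Graphite host are placeholders):

```shell
# Run sitespeed.io in docker against a target URL and push metrics to Graphite.
# Target URL and Graphite host are placeholders; adjust for your setup.
TARGET_URL="http://localhost:3000"
GRAPHITE_HOST="host.docker.internal"

docker run --rm -v "$(pwd)/sitespeed-result:/sitespeed.io" \
  sitespeedio/sitespeed.io "$TARGET_URL" \
  --graphite.host "$GRAPHITE_HOST" \
  || echo "docker is not available"
```

The HTML result pages land in `sitespeed-result` on the host via the volume mount.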
- Configure RedisInsight
- https://hub.docker.com/r/amazon/aws-cli
Start the LocalStack and AWS S3 CLI docker compose services.

- Navigate to the AWS S3 CLI shell
- Run `aws configure`, then enter:
  - AWS Access Key ID [None]: sample
  - AWS Secret Access Key [None]: sample
  - Default region name [None]:
  - Default output format [None]:
- Run the command to create a bucket:

```shell
aws --endpoint-url=http://localstack-sample:4566 s3 mb s3://config-sample
```

You should get `make_bucket: config-sample` as a response.

- Navigate to http://localhost:4566/config-sample; you should get a response like:

```xml
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>config-sample</Name>
  <MaxKeys>1000</MaxKeys>
  <IsTruncated>false</IsTruncated>
</ListBucketResult>
```

- Navigate to the root folder
- Copy content to the S3 bucket:

```shell
aws --endpoint-url=http://localstack-sample:4566 s3 cp ./myFolder/cloud-config s3://config-sample --recursive
```

- Remove content from the S3 bucket:

```shell
aws --endpoint-url=http://localstack-sample:4566 s3 rm s3://config-sample --recursive
```
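To check what actually landed in the bucket, the same endpoint override works with `s3 ls`:

```shell
# List the bucket contents through the LocalStack endpoint.
ENDPOINT="http://localstack-sample:4566"
BUCKET="s3://config-sample"

aws --endpoint-url="$ENDPOINT" s3 ls "$BUCKET" --recursive \
  || echo "aws CLI or LocalStack is not reachable"
```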
This solves the production Elasticsearch setup requirement (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_set_vm_max_map_count_to_at_least_262144):

- Open PowerShell and run:

```shell
wsl -d docker-desktop
sysctl -w vm.max_map_count=262144
```

NOTE: for now this command must be re-run after each Windows system restart.
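To check whether the setting is still in effect (for example after a restart), you can read the value back:

```shell
# Elasticsearch requires vm.max_map_count to be at least 262144.
REQUIRED=262144
CURRENT=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)
if [ "$CURRENT" -ge "$REQUIRED" ]; then
  echo "vm.max_map_count=$CURRENT (ok)"
else
  echo "vm.max_map_count=$CURRENT (too low, need at least $REQUIRED)"
fi
```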
- Run:

```shell
docker-compose -f docker-compose-sonar.yml up -d sonarqube-sample
```

- Navigate to the Sonar admin UI
- Log in with admin/admin
- Create a `sample-api` project for Java, generate a token, and paste it into docker-compose-sonar.yml
- Create a `sample-fe` project for the FE, generate a token, and paste it into docker-compose-sonar.yml
- For FE analysis, run:

```shell
docker-compose -f docker-compose-sonar.yml up -d sonar-fe
```

- For API analysis, run:

```shell
docker-compose -f docker-compose-sonar.yml up -d sonar-api
```

- For API gateway analysis, run:

```shell
docker-compose -f docker-compose-sonar.yml up -d sonar-api-gateway
```

- Remove all containers:

```shell
docker-compose -f docker-compose-sonar.yml down
```
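For reference, the `sonar-*` containers effectively run a Sonar scanner against the local SonarQube instance. A hedged manual equivalent, where the host URL, project key, and token are all placeholders:

```shell
# Hypothetical manual scan with sonar-scanner; all values are placeholders
# for the ones you configure in the SonarQube UI.
SONAR_HOST="http://localhost:9000"
PROJECT_KEY="sample-fe"
SONAR_TOKEN="paste-your-generated-token-here"

sonar-scanner \
  -Dsonar.host.url="$SONAR_HOST" \
  -Dsonar.projectKey="$PROJECT_KEY" \
  -Dsonar.sources=. \
  -Dsonar.login="$SONAR_TOKEN" \
  || echo "sonar-scanner is not installed or SonarQube is not reachable"
```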
For the Debezium infrastructure you need to run the docker compose file docker-compose-kafka.yml:

```shell
docker-compose -f docker-compose-kafka.yml up
```

This docker compose file runs 5 containers:

- Kafka containers:
  - kafka-debezium
  - zookeeper-debezium
- Debezium connector:
  - connect-debezium
- UI for Kafka interaction and monitoring:
  - kafka-ui-debezium
- UI for Debezium connectors (interaction and monitoring):
  - debezium-ui

A Debezium connector can be created by:
- Using the Debezium UI (debezium-ui container)
- Using the Debezium REST interface
To create a connector for PostgreSQL using the Debezium REST interface, execute the following request:

- POST http://localhost:8083/connectors with request body:

```json
{
  "name": "connector-name",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "host.docker.internal",
    "database.port": "5432",
    "database.user": "XXXX",
    "database.password": "XXXX",
    "database.dbname": "XXXX",
    "database.server.name": "XXXX",
    "table.include.list": "XXXX,XXXX",
    "plugin.name": "pgoutput",
    "slot.name": "slottest",
    "time.precision.mode": "connect"
  }
}
```
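The same Kafka Connect REST interface can be used to inspect and remove connectors. A sketch, assuming the `connect-debezium` container publishes port 8083 on localhost (`connector-name` matches the request body above):

```shell
# Kafka Connect REST interface (assumed published on localhost:8083).
CONNECT_URL="http://localhost:8083"

# List registered connectors.
curl -s "$CONNECT_URL/connectors" || echo "Connect REST API is not reachable"

# Status of a specific connector.
curl -s "$CONNECT_URL/connectors/connector-name/status" || true

# Delete a connector when you no longer need it.
curl -s -X DELETE "$CONNECT_URL/connectors/connector-name" || true
```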
More information about the available connectors and their configuration properties can be found at:
- https://debezium.io/documentation/reference/stable/connectors/index.html