Releases: Beuth-Erdelt/Benchmark-Experiment-Host-Manager
TPC-H and YCSB
Fixed vulnerabilities and minor improvements
V0.6.7 (#201)
- Masterscript: ENVs for statefulsets
- Masterscript: ENVs for statefulsets only adding
- Masterscript: ENVs for sut-job
- Masterscript: VolumeMounts optional in sut deployment
- Masterscript: Volumes optional in sut deployment
- Masterscript: service_name for loading scripts as argument
- Masterscript: volumeClaimTemplates for statefulset reads storage requests
- Masterscript: volumeClaimTemplates as a list
- Masterscript: SUT's services can have component names different from selector
- Masterscript: Catch more exceptions in get_host_diskspace_used_data()
- Masterscript: storage_parameter in connection infos
- Tool: Show worker volumes
- Update README.md JOSS draft badge
- Masterscript: Also remove worker storage after experiment
- Masterscript: Label DBMS in job
- Masterscript: Label DBMS in monitoring
- Docs: CockroachDB tested successfully
- Masterscript: Do not retry delete_pvc() if pvc cannot be found
- Masterscript: Name of worker storage contains experiment code
- Masterscript: Find worker pvc by labels
- Masterscript: benchmarking_parameters per experiment and configuration
- Masterscript: Wait 5s before (re)checking status of workers
- Masterscript: Remove worker pv if not wanted
- Masterscript: Loading waits 60 secs for all pods
- Masterscript: Benchbase similar to YCSB
- Masterscript: Benchbase convert results to df
- Masterscript: Benchbase collect results into df
- Masterscript: Benchbase collect results into df and set index
- Masterscript: Benchbase uses name_format for results
- Masterscript: Fix container to dashboard when copying to result component
- Masterscript: Accept successful pods as completed
- Masterscript: Benchbase all collect results into single df at end of benchmark
- Masterscript: Benchbase no results for loading phase and benchmarker results per job only
- Masterscript: YCSB all collect results into single df at end of benchmark
- Masterscript: Debug messages about evaluation
- Masterscript: BEXHOMA_CONNECTION set to connection for benchmarker, to configuration otherwise
- Masterscript: HammerDB merge results into df
- Masterscript: HammerDB merge results into dfm, differ between connection and configuration
- Masterscript: BEXHOMA_CONNECTION test for Benchbase
- Masterscript: Benchbase merge results into dfm, differ between connection and configuration
- Masterscript: Debug messages about evaluation for YCSB collected dfs
- Masterscript: HammerDB extract pod name
- Masterscript: YCSB extract pod name
- Masterscript: YCSB dump more information
- Masterscript: HammerDB extract pod name
- Masterscript: YCSB extract pod name
- Masterscript: YCSB evaluation improved
- Masterscript: Benchbase evaluation improved
- Masterscript: BEXHOMA meta data in job ENVs
- Masterscript: HammerDB concat DBMS infos
- Masterscript: Fetch metrics for specific connection
- Masterscript: HammerDB also keep config file for single connection
- Masterscript: Benchbase requests schema file
- Masterscript: All experiments keep config file for single connection
- Masterscript: Allow all job ENVs to be overwritten
- Masterscript: BEXHOMA_CLIENT set to number of benchmarker client
- Masterscript: Fetch metrics for specific connection for all benchmarkers
- Masterscript: BEXHOMA_CLIENT set to number of benchmarker client, thus is 0 during loading
- Masterscript: Show list of open benchmarks per configuration
- Masterscript: NEVER rerun, only one connection in config for detached - all benchmarkers collect all DBMS in one connection file
- Masterscript: Copy connection file for connection specified by name
- Masterscript: Copy connection file for connection specified by name
- Masterscript: fetch_metrics_loading() for connection
- Masterscript: fetch_metrics_loading() for connection, run in dashboard pod
- Masterscript: set connection file name
- Masterscript: fetch_metrics_loading() after loading, dump results
- Build script for Docker images
- Python 3.11.5 instead of 3.12 because of bug in setuptools
- Benchmarker: Less output
- DBMS: YugabyteDB dummy deployment
- Docs: YCSB at entry page
- Docs: scaled-out drivers at entry page
- Docs: TPC-C at entry page
- Docs: scaled-out drivers at entry page
- Docs: Example: Run a custom SQL workload
- requirements: no nbconvert
- requirements: Python 3.11.5
- Docs: .readthedocs.yaml
- requirements: no m2r2
- requirements: sphinx
- Docs: Example: Run a custom SQL workload
- Docs: Formatting
- YCSB: scaling-factor-operations
- YugabyteDB dummy less resources
- fix: requirements.txt to reduce vulnerabilities
- fix: requirements.txt to reduce vulnerabilities - Werkzeug>=3.0.1
- Require only Python 3.10.2
- v0.6.7 prerelease
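Several items in this release concern the BEXHOMA_* ENVs injected into loading and benchmarker jobs: per the notes, BEXHOMA_CONNECTION holds the connection name for benchmarkers and the configuration name otherwise, and BEXHOMA_CLIENT holds the benchmarker client number, which is 0 during loading. A minimal sketch of how a job container might read them; the variable names come from the notes above, but the handling and defaults are assumptions, not bexhoma's actual entrypoint code:

```python
import os

# Hypothetical sketch: a loader/benchmarker container reading the BEXHOMA_* job ENVs.
# Defaults and control flow are illustrative assumptions.
connection = os.environ.get("BEXHOMA_CONNECTION", "")  # connection name (benchmarker) or configuration name (otherwise)
client = int(os.environ.get("BEXHOMA_CLIENT", "0"))    # benchmarker client number; 0 during loading

if client == 0:
    print(f"loading phase for {connection}")
else:
    print(f"benchmarker client {client} running against {connection}")
```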
Include more images and latest versions
V0.6.6 Fix vulnerabilities and docs (#193)
- DBMSBenchmarker: use latest v0.13.4
Improved support for YCSB, TPC-C (HammerDB and Benchbase) and TPC-H
- also includes results as presented at TPCTC23
- removed some vulnerabilities
- improved evaluation support
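The improved evaluation support pairs with the result collection noted in V0.6.7, where each benchmarker pod's output is gathered into a single DataFrame at the end of a benchmark. A minimal pandas sketch of that idea; the file layout and column names are assumptions for illustration only:

```python
import glob
import pandas as pd

# Hypothetical sketch: merge per-pod benchmark result files into one DataFrame.
# The path pattern "results/*.csv" and the 'pod' column are assumed, not bexhoma's layout.
frames = []
for path in glob.glob("results/*.csv"):  # assumed: one CSV per benchmarker pod
    df = pd.read_csv(path)
    df["pod"] = path                     # keep track of which pod produced each row
    frames.append(df)

if frames:
    results = pd.concat(frames, ignore_index=True)
    print(results.head())
```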
Improved Resilience and Flexibility
- Support for DBMS that are not single-host / single Docker image deployments
- Metrics for more components
- HammerDB / YCSB support
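"Metrics for more components" means monitoring is no longer limited to the SUT. One way such per-component metrics can be fetched is a PromQL query against the monitoring endpoint; this sketch is an assumption for illustration (the URL, metric, label names, and component names are not taken from bexhoma's code):

```python
import requests

# Hypothetical sketch: fetch CPU usage for one bexhoma component from Prometheus.
# Endpoint URL, query, and component labels are illustrative assumptions.
PROMETHEUS_URL = "http://localhost:9090/api/v1/query"

def cpu_seconds(component: str) -> float:
    """Sum of container CPU seconds across all pods of one component."""
    query = f'sum(container_cpu_usage_seconds_total{{container="{component}"}})'
    response = requests.get(PROMETHEUS_URL, params={"query": query}, timeout=10)
    response.raise_for_status()
    result = response.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

for component in ("sut", "loading", "benchmarker"):
    print(component, cpu_seconds(component))
```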
Unified benchmarking / evaluation components, refined loading and indexing components
- same benchmarking component for HammerDB, DBMSBenchmarker, YCSB and Benchbase
- same evaluation component for HammerDB, DBMSBenchmarker, YCSB and Benchbase
- log timing data and spans for schema, ingest, index, constraints, and analyze (statistics)
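To make the timing-span logging above concrete, here is a minimal sketch of recording begin time and duration per loading phase. The context manager, phase names, and storage format are assumptions, not bexhoma's actual implementation:

```python
import time
from contextlib import contextmanager

# Hypothetical sketch: one timing span per loading phase, stored as
# (begin timestamp, duration in seconds). Names are illustrative.
timing = {}

@contextmanager
def span(phase: str):
    start = time.time()
    try:
        yield
    finally:
        timing[phase] = (start, time.time() - start)

with span("schema"):
    pass  # create tables here
with span("ingest"):
    pass  # load data here
with span("index"):
    pass  # create indexes here

print(timing)
```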
Prepare benchmarking components: HammerDB, YCSB, Benchbase
- Improved support for sharded DBMS
- Preparation for deployment and evaluation of further loading and benchmarking components
Refactoring, Naming and Docs, Prepare for more flexible Components
Draft YCSB and HammerDB
Introduced Cloud-Orchestration and Scalable Loading and Maintaining Components
V0.6.0 Introduced Cloud-Orchestration and Scalable Loading and Maintaining Components (#102)
- Prepare next release
- Masterscript: maintaining does not change timer settings of benchmarker
- Masterscript: reconnect and try again, if not failed due to "not found"
- Masterscript: improved output about workflow
- Masterscript: aws example nodegroup scale
- Masterscript: aws example nodegroup get size
- Masterscript: aws example nodegroup wait for size
- Masterscript: aws example nodegroup show size
- Masterscript: aws example nodegroup show and check size
- Masterscript: aws example nodegroup name and type
- Masterscript: aws example dict of nodegroups
- Masterscript: aws example nodegroup name necessary for scaling
- Masterscript: aws example nodegroup name and type
- Masterscript: maintaining duration default 4h
- Masterscript: maintaining parameters and nodeSelector
- Masterscript: nodeSelector for sut, monitoring and benchmarker
- Masterscript: maintaining is accepted running also when num_maintaining=0
- Masterscript: request resources from command line
- Masterscript: prepare max_sut per cluster and per experiment
- Masterscript: catch json exception in getNode()
- Masterscript: maintaining example TSBS as experiment setup
- Masterscript: jobtemplate_maintaining per experiment
- Masterscript: initContainers in maintaining
- Masterscript: maintaining also watches succeeded pods
- Masterscript: maintaining also respects (long) pending pods
- Masterscript: loading pods controlled by redis queue
- Masterscript: loading pods controlled by redis queue, include params
- Masterscript: initContainers parameters set correctly
- Masterscript: Stop also loading jobs and pods
- Masterscript: Number of parallel loaders
- Masterscript: Empty schema before loading pods
- Masterscript: Stop also loading jobs and pods when putting sut down
- Masterscript: Loading only finished when outside and inside cluster are done
- Masterscript: Stop also loading jobs and pods - in all configurations
- Masterscript: Stop also loading jobs and pods - in all configurations (config, experiment, cluster)
- Masterscript: Check status of parallel loading
- Masterscript: Job status explained
- Masterscript: Job status returns true iff all pods are completed
- Masterscript: Job status more output
- Masterscript: Job status returns true iff all pods are completed
- Masterscript: Job status returns true iff all pods are completed, then delete all loading pods
- Masterscript: Job status returns true iff all pods are completed, copy loading pods logs
- Masterscript: Copy logs of all containers of loading pods
- Masterscript: Mark SUT as loaded as soon as realizing all pods have status success - include this as timeLoading
- Masterscript: Use maintaining structure for setting loading parameters
- Masterscript: Mark SUT as loaded
- Masterscript: Mark SUT as loaded, read old labels at first
- Masterscript: Mark SUT as loaded, read old labels at first and convert to float
- Masterscript: Mark SUT as loaded, read old labels at first and convert to float, debug output
- Masterscript: Mark SUT as loaded, read old labels at first and convert to int
- Masterscript: Mark SUT as loaded, read old labels at first and convert to int, cleaned
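The scalable loading introduced here has parallel loader pods pull their work from a Redis queue, per "loading pods controlled by redis queue" above. A minimal sketch of the consumer side of that pattern; the host name, queue name, and item format are assumptions for illustration:

```python
import os
import redis

# Hypothetical sketch: a loader pod draining work items from a shared Redis queue.
# Host and queue name are illustrative assumptions, not bexhoma's actual names.
r = redis.Redis(host=os.environ.get("REDIS_HOST", "bexhoma-messagequeue"), port=6379)

while True:
    item = r.lpop("bexhoma-loading-queue")  # assumed queue of table/partition names
    if item is None:
        break                               # queue drained: this loader pod is done
    table = item.decode()
    print(f"loading {table}")               # a real pod would run the import here
```

Because each pod simply pops until the queue is empty, the number of parallel loaders can be chosen freely without repartitioning the work up front.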
Draft for Maintaining Service
- Scaling factor is per experiment only
- draft for maintaining service
- don't do port-forwarding for checking DBMS
- improved docs
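Dropping port-forwarding for the DBMS check suggests probing the cluster-internal service directly from inside the cluster instead. A minimal sketch of such a readiness probe, assuming a hypothetical service name and port (neither is taken from bexhoma):

```python
import socket

# Hypothetical sketch: check DBMS reachability via its cluster-internal
# service instead of a kubectl port-forward; would run inside the cluster.
def dbms_is_up(host: str = "bexhoma-sut", port: int = 5432, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(dbms_is_up())
```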