
Commit

V0.12.1 (#93)
* Prepare next release

* Inspector: workload dict without metrics and reporting

* Docs: JOSS intro
perdelt committed Jun 29, 2022
1 parent 662b2be commit e9c0f3c
Showing 3 changed files with 10 additions and 8 deletions.
13 changes: 7 additions & 6 deletions dbmsbenchmarker/inspector.py
@@ -25,6 +25,7 @@
 from colour import Color
 from numpy import nan
 from datetime import datetime, timezone
+import copy
 
 from dbmsbenchmarker import benchmarker, tools, evaluator, monitor
 
@@ -144,6 +145,11 @@ def load_experiment(self, code, anonymize=None, load=True):
         self.benchmarks.computeTimerRun()
         self.benchmarks.computeTimerSession()
         self.e = evaluator.evaluator(self.benchmarks, load=load, force=True)
+        self.workload = copy.deepcopy(self.e.evaluation['general'])
+        # remove metrics
+        del(self.workload['loadingmetrics'])
+        del(self.workload['streamingmetrics'])
+        del(self.workload['reporting'])
     def get_experiment_list_queries(self):
         # list of successful queries
         return self.benchmarks.listQueries()
Expand Down Expand Up @@ -260,12 +266,7 @@ def get_experiment_query_properties(self, numQuery=None):
return self.e.evaluation['query']
def get_experiment_workload_properties(self):
# dict of workload properties
workload = self.e.evaluation['general']
# remove metrics
del(workload['loadingmetrics'])
del(workload['streamingmetrics'])
del(workload['reporting'])
return workload
return self.workload
#def get_measures(self, numQuery, timer, warmup=0, cooldown=0):
def get_timer(self, numQuery, timer, warmup=0, cooldown=0):
# dataframe of dbms x measures
Expand Down
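The inspector.py change above replaces per-call deletion from `self.e.evaluation['general']` with a cached `copy.deepcopy`. A minimal sketch of why the deep copy matters (the dict keys are from the commit; the surrounding values and the `evaluation` variable are made up for illustration):

```python
import copy

# Illustrative data shaped like the evaluator's result; only the three
# deleted keys are taken from the commit, the rest is hypothetical.
evaluation = {'general': {
    'name': 'example workload',
    'loadingmetrics': {'cpu': 1.0},
    'streamingmetrics': {'cpu': 2.0},
    'reporting': {'format': 'html'},
}}

# A plain assignment would alias the nested dict, so deleting keys from it
# would also strip them from the original evaluation. deepcopy avoids that.
workload = copy.deepcopy(evaluation['general'])
del workload['loadingmetrics']
del workload['streamingmetrics']
del workload['reporting']

print(sorted(workload))                           # ['name']
print('loadingmetrics' in evaluation['general'])  # True: original untouched
```

Caching the cleaned dict once in `load_experiment` also lets `get_experiment_workload_properties` be called repeatedly without raising `KeyError` on the second deletion attempt.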
3 changes: 2 additions & 1 deletion paper.md
@@ -34,7 +34,8 @@ See the [homepage](https://github.com/Beuth-Erdelt/DBMS-Benchmarker) and the [do
 
 # Statement of Need
 
-There are a variety of (relational) database management systems (DBMS) and a lot of products.
+Benchmarking of database management systems (DBMS) is an active research area.
+There are a variety of DBMS and a lot of products.
 The types thereof can be divided into for example row-wise, column-wise, in-memory, distributed and GPU-enhanced.
 All of these products have unique characteristics, special use cases, advantages and disadvantages and their justification.
 In order to be able to verify and ensure the performance measurement, we want to be able to create and repeat benchmarking scenarios.
2 changes: 1 addition & 1 deletion setup.py
@@ -8,7 +8,7 @@
 
 setuptools.setup(
     name="dbmsbenchmarker",
-    version="0.11.22",
+    version="0.12.1",
     author="Patrick Erdelt",
     author_email="perdelt@beuth-hochschule.de",
     description="DBMS-Benchmarker is a Python-based application-level blackbox benchmark tool for Database Management Systems (DBMS). It connects to a given list of DBMS (via JDBC) and runs a given list of parametrized and randomized (SQL) benchmark queries. Evaluations are available via Python interface, in reports and at an interactive multi-dimensional dashboard.",
