Docs: Relative paths
perdelt committed Sep 1, 2021
1 parent d1586a9 commit e217c91
Showing 1 changed file with 11 additions and 11 deletions.
22 changes: 11 additions & 11 deletions README.md
@@ -14,23 +14,23 @@ See the [homepage](https://github.com/Beuth-Erdelt/DBMS-Benchmarker) and the [do

DBMS-Benchmarker
* is Python3-based
* connects to all [DBMS](docs/Options.md#connection-file) having a JDBC interface - including GPU-enhanced DBMS
* connects to all [DBMS](Options.html#connection-file) having a JDBC interface - including GPU-enhanced DBMS
* requires *only* JDBC - no vendor specific supplements are used
* benchmarks arbitrary SQL queries - in all dialects
* allows [planning](docs/Options.md#query-file) of complex test scenarios - to simulate realistic or revealing use cases
* allows [planning](Options.html#query-file) of complex test scenarios - to simulate realistic or revealing use cases
* allows easy repetition of benchmarks in varying settings - different hardware, DBMS, DBMS configurations, DB settings etc
* investigates a number of timing aspects - connection, execution, data transfer, in total, per session etc
* investigates a number of other aspects - received result sets, precision, number of clients
* collects hardware metrics from a Grafana server - hardware utilization, energy consumption etc
* helps to [evaluate](docs/Evaluations.md) results - by providing
* helps to [evaluate](Evaluations.html) results - by providing
* standard Python data structures
* predefined evaluations like statistics, plots, LaTeX reporting
* an [inspection tool](docs/Inspection.md)
* an [interactive dashboard](docs/Dashboard.md)
* an [inspection tool](Inspection.html)
* an [interactive dashboard](Dashboard.html)

In the end this tool provides metrics that can be analyzed by [aggregation](docs/Concept.md#aggregation-functions) in [multi-dimensions](docs/Concept.md#evaluation), like maximum throughput per DBMS, average CPU utilization per query or geometric mean of run latency per workload.
In the end this tool provides metrics that can be analyzed by [aggregation](Concept.html#aggregation-functions) in [multi-dimensions](Concept.html#evaluation), like maximum throughput per DBMS, average CPU utilization per query or geometric mean of run latency per workload.
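As a rough illustration of such an aggregation (the frame, column names and numbers below are invented for the example and are not output of the tool), a geometric mean of run latency per DBMS can be computed like this:
```
import numpy as np
import pandas as pd

# Invented example timings; the benchmarker's own result structures differ.
timings = pd.DataFrame({
    "dbms": ["MySQL", "MySQL", "PostgreSQL", "PostgreSQL"],
    "query": ["Q1", "Q2", "Q1", "Q2"],
    "latency_ms": [120.0, 340.0, 95.0, 410.0],
})

# Geometric mean of run latency per DBMS
geo_mean = timings.groupby("dbms")["latency_ms"].apply(
    lambda s: float(np.exp(np.log(s).mean()))
)
print(geo_mean)
```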

For more information, see a [basic example](#basic-usage), take a look at help for a full list of [options](docs/Options.md#command-line-options-and-configuration) or take a look at a [demo report](docs/Report-example-tpch.pdf).
For more information, see a [basic example](#basic-usage), take a look at help for a full list of [options](Options.html#command-line-options-and-configuration) or take a look at a [demo report](Report-example-tpch.pdf).

The code uses several Python modules, in particular <a href="https://github.com/baztian/jaydebeapi" target="_blank">jaydebeapi</a> for handling DBMS.
This module has been tested with Brytlyt, Citus, Clickhouse, DB2, Exasol, Kinetica, MariaDB, MariaDB Columnstore, MemSQL, MonetDB, MySQL, OmniSci, Oracle DB, PostgreSQL, SingleStore, SQL Server and SAP HANA.
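For orientation, the snippet below is a minimal, hand-written sketch of a timed JDBC round trip with jaydebeapi; it is not the benchmarker's own code, and the driver class, URL, credentials, table name and query are placeholders.
```
import time
import jaydebeapi

# Placeholder connection details - adjust driver class, URL, credentials and jar path.
conn = jaydebeapi.connect(
    "com.mysql.cj.jdbc.Driver",
    "jdbc:mysql://localhost:3306/database",
    ["username", "password"],
    "mysql-connector-java-8.0.13.jar",
)
cursor = conn.cursor()

start = time.perf_counter()
cursor.execute("SELECT COUNT(*) FROM test")   # placeholder query
rows = cursor.fetchall()
elapsed = time.perf_counter() - start

print(f"result: {rows}, execution + fetch: {elapsed:.4f}s")
cursor.close()
conn.close()
```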
@@ -47,7 +47,7 @@ As a result we obtain an interactive dashboard to inspect timing aspects.
### Configuration

We need to provide
* a [DBMS configuration file](docs/Options.md#connection-file), e.g. in `./config/connections.config`
* a [DBMS configuration file](Options.html#connection-file), e.g. in `./config/connections.config`
```
[
{
@@ -63,7 +63,7 @@ We need to provide
]
```
* the required JDBC driver, e.g. `mysql-connector-java-8.0.13.jar`
* a [Queries configuration file](docs/Options.md#query-file), e.g. in `./config/queries.config`
* a [Queries configuration file](Options.html#query-file), e.g. in `./config/queries.config`
```
{
'name': 'Some simple queries',
@@ -91,7 +91,7 @@ Run the CLI command:

This is equivalent to `python benchmark.py run -e yes -b -f ./config`
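If you prefer to launch it from Python rather than the shell, a tiny wrapper around that documented command might look like the following (hypothetical helper, not part of the package):
```
import subprocess

# Shell out to the documented CLI call `python benchmark.py run -e yes -b -f ./config`.
subprocess.run(
    ["python", "benchmark.py", "run", "-e", "yes", "-b", "-f", "./config"],
    check=True,
)
```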

For more options, see the [documentation](Options.md#command-line-options-and-configuration)
For more options, see the [documentation](Options.html#command-line-options-and-configuration)

After benchmarking has finished we will see a message like
```
@@ -154,7 +154,7 @@ Other comparable products you might like
[2] [Orchestrating DBMS Benchmarking in the Cloud with Kubernetes](https://www.researchgate.net/publication/353236865_Orchestrating_DBMS_Benchmarking_in_the_Cloud_with_Kubernetes)

(old, slightly outdated [docs](docs/Docs_old.md))
(old, slightly outdated [docs](Docs_old.html))


