Docs: Relative paths

perdelt committed Sep 1, 2021
1 parent ab99598 commit f3a2679
Showing 1 changed file with 24 additions and 24 deletions.

docs/Evaluations.md (24 additions, 24 deletions)
@@ -51,7 +51,7 @@ Predefined evaluations are
This evaluation is available in the evaluation dict and in the LaTeX reports.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/dbms.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/dbms.png" width="480">
</p>

The user has to provide in a [config file](Options.html#connection-file)
@@ -90,7 +90,7 @@ qs_s = Queue size of sessions (tps_s1 * lat_s * 1000)
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/relative-tps-lat.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/relative-tps-lat.png" width="640">
</p>

For each query, latency and throughput are computed per DBMS.
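As a rough illustration of how per-DBMS latency and throughput can be derived from measured run times (a sketch only; the DBMS names and timings below are invented, not produced by the tool):

```python
# Sketch: derive latency and throughput per DBMS from run times.
# DBMS names and timings are illustrative, not taken from the tool.
times_ms = {
    "PostgreSQL": [12.0, 11.5, 12.5],
    "MySQL": [19.0, 20.0, 21.0],
}

lat = {}  # latency: mean run time [ms]
tps = {}  # throughput: completed runs per second of busy time
for dbms, runs in times_ms.items():
    lat[dbms] = sum(runs) / len(runs)
    tps[dbms] = len(runs) / (sum(runs) / 1000.0)
```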
@@ -101,7 +101,7 @@ Only successful queries and DBMS not producing any error are considered there.
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/ranking.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/ranking.png" width="480">
</p>

We compute a ranking of DBMS for each query based on the sum of times, from fastest to slowest.
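A minimal sketch of such a per-query ranking and its average (DBMS names and times are invented for illustration):

```python
# Hypothetical total times [s] per query and DBMS (illustrative only).
times = {
    "Q1": {"A": 1.0, "B": 2.0, "C": 3.0},
    "Q2": {"A": 2.0, "B": 1.0, "C": 3.0},
}

# Rank DBMS per query, fastest = rank 1, then average the ranks.
rank_lists = {}
for query, per_dbms in times.items():
    ordered = sorted(per_dbms, key=per_dbms.get)  # fastest first
    for rank, dbms in enumerate(ordered, start=1):
        rank_lists.setdefault(dbms, []).append(rank)

avg_rank = {d: sum(r) / len(r) for d, r in rank_lists.items()}
```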
@@ -112,7 +112,7 @@ The chart shows the average ranking per DBMS.
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/total_barh_ingest.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/total_barh_ingest.png" width="480">
</p>

This is part of the information provided by the user.
@@ -125,7 +125,7 @@ Values are computed as arithmetic mean across benchmarking time.
Only successful queries and DBMS not producing any error are considered.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/monitoring-metrics.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/monitoring-metrics.png" width="640">
</p>

#### Host Metrics
@@ -135,7 +135,7 @@ The host information is provided in the [config file](#connection-file).
Here, cost is based on the total time.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/host-metrics.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/host-metrics.png" width="640">
</p>

### Drill-Down Timers
@@ -144,7 +144,7 @@ Here, cost is based on the total time.
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/relative.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/relative.png" width="480">
</p>

For each query and timer, the best DBMS is considered as gold standard = 100%. Based on their times, the other DBMS obtain a relative ranking factor.
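The factor computation described above can be sketched like this (times are invented; the tool's own implementation may differ in detail):

```python
import math

# Hypothetical mean times [ms] per query and DBMS (illustrative only).
times = {
    "Q1": {"A": 10.0, "B": 20.0},
    "Q2": {"A": 12.0, "B": 15.0},
}

# Per query, the fastest DBMS is the gold standard (factor 1.0 = 100%).
factors = {}
for query, per_dbms in times.items():
    best = min(per_dbms.values())
    for dbms, t in per_dbms.items():
        factors.setdefault(dbms, []).append(t / best)

# Geometric mean of factors per DBMS, as shown in the chart.
geo = {d: math.prod(f) ** (1 / len(f)) for d, f in factors.items()}
```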
@@ -155,7 +155,7 @@ The chart shows the geometric mean of factors per DBMS.
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/sum_of_times.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/sum_of_times.png" width="480">
</p>

This is based on the mean times of all benchmark test runs.
@@ -172,7 +172,7 @@ The chart shows the average of query times based on mean values per DBMS and per
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/heatmap-timer.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/heatmap-timer.png" width="480">
</p>

The relative ranking can be refined to see the contribution of each query.
@@ -186,7 +186,7 @@ All active queries and DBMS are considered.
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/total_times.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/total_times.png" width="480">
</p>

This is based on the times each DBMS is queried in total. Measurement starts before the first benchmark run and stops after the last benchmark run has finished. In an ideal situation, parallel benchmarks should reduce the total time.
@@ -198,23 +198,23 @@ The chart shows the total query time per DBMS and query.
This evaluation is available in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/total_times-norm.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/total_times-norm.png" width="480">
</p>

The chart shows total times per query, normalized to the average total time of that query.
Only successful queries and DBMS not producing any error are considered.
This is also available as a heatmap.
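The normalization step might look like this in outline (query names and totals are made up):

```python
# Hypothetical total times [s]: one list per query, one entry per DBMS
# (all numbers are illustrative).
totals = {
    "Q1": [2.0, 4.0],
    "Q2": [10.0, 30.0],
}

# Normalize each query by its average total time across all DBMS.
normalized = {
    q: [t / (sum(ts) / len(ts)) for t in ts]
    for q, ts in totals.items()
}
```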

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/total_times-heatmap.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/total_times-heatmap.png" width="480">
</p>

#### Throughputs

This evaluation is available in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/tps-heatmap.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/tps-heatmap.png" width="480">
</p>

For each query, latency and throughput are computed per DBMS.
@@ -226,7 +226,7 @@ Only successful queries and DBMS not producing any error are considered there.
This evaluation is available in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/lat-heatmap.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/lat-heatmap.png" width="480">
</p>

For each query, latency and throughput are computed per DBMS.
@@ -238,7 +238,7 @@ Only successful queries and DBMS not producing any error are considered there.
This evaluation is available in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/resultsets-heatmap.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/resultsets-heatmap.png" width="480">
</p>

For each query, the size of received data per DBMS is stored.
@@ -251,7 +251,7 @@ All active queries and DBMS are considered.
This evaluation is available in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/errors-heatmap.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/errors-heatmap.png" width="480">
</p>

The chart shows, per DBMS and per timer, whether an error has occurred.
@@ -262,7 +262,7 @@ All active queries and DBMS are considered.
This evaluation is available in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/warnings-heatmap.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/warnings-heatmap.png" width="480">
</p>

The chart shows, per DBMS and per timer, whether a warning has occurred.
@@ -274,7 +274,7 @@ All active queries and DBMS are considered.
This evaluation is available as dataframes, in the evaluation dict and as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/tps-lat.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/tps-lat.png" width="640">
</p>

For each query, latency and throughput are computed per DBMS.
@@ -285,7 +285,7 @@ Only successful queries and DBMS not producing any error are considered there.
These metrics are available as png files and csv files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/hardware-metrics.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/hardware-metrics.png" width="640">
</p>

These metrics are collected from a Prometheus / Grafana stack.
@@ -295,7 +295,7 @@ This expects time-synchronized servers.
These plots are available as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/bar.png" width="480">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/bar.png" width="480">
</p>

This is based on the sum of times of all single benchmark test runs.
@@ -310,7 +310,7 @@ The bars are ordered ascending.
These tables are available as dataframes and in the evaluation dict.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/table.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/table.png" width="640">
</p>

These tables show [statistics](Concept.html#aggregation-functions) about benchmarking time during the various runs per DBMS as a table.
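Such a statistics table can be approximated with Python's standard library (the run times below are invented; the tool's actual aggregation functions are described in the linked concept page):

```python
import statistics

# Hypothetical benchmark run times [ms] per DBMS (illustrative values).
runs = {
    "A": [10.0, 12.0, 11.0, 13.0],
    "B": [18.0, 20.0, 22.0, 24.0],
}

# One table row of aggregates per DBMS, as in the statistics tables.
stats = {
    dbms: {
        "mean": statistics.mean(t),
        "median": statistics.median(t),
        "stdev": statistics.stdev(t),
        "min": min(t),
        "max": max(t),
    }
    for dbms, t in runs.items()
}
```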
@@ -324,7 +324,7 @@ The DBMS are ordered ascending by factor.
These plots are available as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/plot.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/plot.png" width="640">
</p>

These plots show the variation of benchmarking time during the various runs per DBMS as a plot.
@@ -337,7 +337,7 @@ This is for inspection of time dependence.
These plots are available as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/boxplot.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/boxplot.png" width="640">
</p>

These plots show the variation of benchmarking time during the various runs per DBMS as a boxplot.
@@ -348,7 +348,7 @@ This is for inspection of variation and outliers.
These plots are available as png files.

<p align="center">
-<img src="https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/histogram.png" width="640">
+<img src="https://raw.githubusercontent.com/Beuth-Erdelt/DBMS-Benchmarker/master/docs/histogram.png" width="640">
</p>

These plots show the variation of benchmarking time during the various runs per DBMS as a histogram.
