* Prepare next release
* Docs: Monitoring
* K8s: v1apps.list_namespaced_deployment (new K8s version in cluster)
* K8s: Resultfolder as attribute of cluster object
* Masterscript: K8s store number of requested gpus
* Masterscript: K8s run shell scripts for loading data
* Masterscript: Allow list of jars per dbms
* Requirements: Allow current versions
* TPC-H: New specifics
* Masterscript: K8s monitoring as option
* Masterscript: K8s optional UPPER parameter of db and schema
* Masterscript: K8s DEPRECATED: we must know all jars upfront
* TPC-H: DDL for MariaDB Columnstore
* TPC-H: Bigint at MonetDB
* TPC-H: SQL Server precision and DB
* TPC-H: OmniSci sharding
* TPC-H: OmniSci template
* a Grafana server collecting metrics from the Prometheus server
* some [configuration](#configuration) of which metrics to collect

This document contains information about the
* [Concept](#concept)
* [Installation](#installation)
* [Configuration](#configuration)

## Concept

<p align="center">
<img src="architecture.png" width="640">
</p>

There is
* an Experiment Host - this needs Prometheus exporters
* a Monitor - this needs a Prometheus server scraping the Experiment Host and a Grafana server collecting metrics from the Prometheus server
* a Manager - this needs a configuration (which metrics to collect and where from)
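
The Manager side only needs HTTP access to these metrics: it queries the Prometheus API (here through Grafana's datasource proxy) for the time range of an experiment. A minimal sketch, assuming placeholder URL, token and metric rather than the values bexhoma actually uses:

```python
# Minimal sketch: fetch a metric for an experiment's time range from the Monitor.
# The URL, token and PromQL query are placeholders, not the values bexhoma uses.
import time
import requests

GRAFANA_URL = "http://monitor.example.com/api/datasources/proxy/1/api/v1/"  # hypothetical
GRAFANA_TOKEN = "Bearer <access token>"                                     # hypothetical

def query_range(promql, start, end, step="15s"):
    """Query the Prometheus API behind the Grafana datasource proxy."""
    response = requests.get(
        GRAFANA_URL + "query_range",
        params={"query": promql, "start": start, "end": end, "step": step},
        headers={"Authorization": GRAFANA_TOKEN},
    )
    response.raise_for_status()
    return response.json()["data"]["result"]

# CPU usage of the Experiment Host over the last 10 minutes
end = time.time()
series = query_range('sum(rate(node_cpu_seconds_total{mode!="idle"}[1m]))', end - 600, end)
```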
## Installation

To be documented

### Kubernetes
* Experiment Host: Exporters are part of the [deployments](Deployments.md)
* Monitor: Servers are deployed using Docker images, fixed on a separate monitoring instance
* Manager: See [configuration](#configuration)
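
One way to check that these components are running is to list the relevant deployments with the Kubernetes Python client; the namespace and label selector below are assumptions, adjust them to your cluster:

```python
# Sketch: verify that the monitoring-related deployments exist and are ready.
# The namespace and label selector are hypothetical examples.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()
deployments = apps.list_namespaced_deployment(
    namespace="default",
    label_selector="app=bexhoma",  # hypothetical label
)
for deployment in deployments.items:
    print(deployment.metadata.name, deployment.status.ready_replicas)
```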
### AWS
* Experiment Host: Exporters are deployed using Docker images, fixed on the benchmarked instance
* Monitor: Servers are deployed using Docker images, fixed on a separate monitoring instance
* Manager: See [configuration](#configuration)
## Configuration

We insert information about
* the Grafana server
  * access token
  * URL
* the collection
  * extension of measure intervals
  * time shift
* metrics definitions

into the cluster configuration.
This is handed over to the [DBMS configuration](https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/Options.md#connection-file) of the [benchmarker](https://github.com/Beuth-Erdelt/DBMS-Benchmarker/blob/master/docs/Concept.md#monitoring-hardware-metrics).
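
As an illustration only, such a monitoring block might look like the following; the key names, token, URL and query are assumptions, so refer to the linked DBMS configuration documentation for the exact format:

```python
# Illustrative sketch of the monitoring information in the cluster configuration.
# Key names and values are assumptions, not the exact schema.
monitoring = {
    "grafana_token": "Bearer <access token>",  # access token for the Grafana server
    "grafana_url": "http://monitor.example.com/api/datasources/proxy/1/api/v1/",
    "extend": 20,   # extend each measure interval by this many seconds
    "shift": 0,     # time shift between benchmarker and monitor clocks, in seconds
    "metrics": {    # metrics definitions as PromQL queries
        "total_cpu_util": {
            "title": "CPU utilization [%]",
            "query": '100 - avg(rate(node_cpu_seconds_total{mode="idle"}[1m])) * 100',
        },
    },
}
```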
### Example
The details of the metrics correspond to the YAML configuration of the [deployments](Deployments.md).

If the Grafana server receives metrics from a general Prometheus server, that is, one that scrapes more exporters than just the bexhoma-related ones, we need to specify further which metrics we are interested in.
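
As a sketch of what such a restriction could look like inside a metric's query, a label matcher can narrow the selection to the bexhoma exporters; the `job="bexhoma"` label below is a hypothetical example:

```python
# Unfiltered: aggregates over every exporter the Prometheus server scrapes
query_all = 'sum(rate(node_cpu_seconds_total{mode!="idle"}[1m]))'

# Filtered: restricted to the bexhoma-related exporters via a label matcher
# (the label name and value are hypothetical)
query_bexhoma = 'sum(rate(node_cpu_seconds_total{mode!="idle", job="bexhoma"}[1m]))'
```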
There is a placeholder `{gpuid}` that is substituted automatically by a list of GPUs present in the pod.
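
A sketch of how that substitution could work; the metric name, label and GPU ids are illustrative, and bexhoma performs this step automatically:

```python
# Sketch of the {gpuid} placeholder substitution: the placeholder is replaced
# by a regex alternation of the GPUs found in the pod. Metric, label and ids
# are illustrative only.
query_template = 'dcgm_gpu_utilization{gpu=~"{gpuid}"}'
gpus_in_pod = ["0", "1"]  # e.g. detected with nvidia-smi inside the pod
query = query_template.replace("{gpuid}", "|".join(gpus_in_pod))
# -> dcgm_gpu_utilization{gpu=~"0|1"}
```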