
TickTockDB vs. InfluxDB: max cardinality comparison on PI 0 Wireless (ARMv6, 32-bit OS)


Table of Contents

1. Introduction

2. IoTDB-benchmark Introduction

3. Experiment Settings

4. 40K cardinality: Resource consumption comparison

4.1 CPU

4.2 IO Util

4.3 Write bytes rate

4.4 Read bytes rate

4.5 Memory

5. Max Cardinality: Resource consumption comparison

5.1 CPU

5.2 IO Util

5.3 Write bytes rate

5.4 Read bytes rate

5.5 Memory

6. Conclusion

1. Introduction

In our previous wiki, we compared TickTockDB with InfluxDB on RPI-zero-Wireless (ARMv6, 32-bit OS). Note that the experimental setup only used up to 9 clients and 10 time series (i.e., sensors) per client, which is a small cardinality of 90 (= 9 clients * 10 sensors). We would like to see how TickTockDB performs in high-cardinality scenarios, and what max cardinality it can support, compared with InfluxDB.

Besides, the previous wiki covered a backfill case (i.e., each client sends its next request as soon as it receives the response to the previous one). Backfill usually applies to data migration scenarios (e.g., Prometheus migrating its data to a third-party TSDB for long-term storage). In normal scenarios, there is a certain interval between two consecutive operations from a client; for example, CPU data is collected once every 10 seconds. In this test, we use a 10-second interval between consecutive operations from a client.

2. IoTDB-benchmark Introduction

We selected IoTDB-benchmark for performance evaluation. Please refer to its README and the introduction in the previous wiki for details.

3. Experiment settings

3.1. Hardware

We run TickTockDB on a PI-0-W. The figure below shows a PI-0-W, a Single Board Computer (SBC) with:

  • 1GHz single-core CPU (ARMv6),
  • 512MB memory,
  • 802.11 b/g/n wireless LAN,
  • running Raspberry PI OS (Bullseye), a Debian-based 32-bit Linux OS.
  • And it costs only $10.

Figure: PI-zero-W single board computer

We run IoTDB-benchmark on an Ubuntu laptop with a 12-core AMD Ryzen 5 5600H CPU and 20GB memory. We minimize the network bottleneck by connecting the laptop to the PI-0-W directly with a network cable. We assign static IPs to the PI-0-W and the laptop, e.g., on the PI-0-W:

sudo ip ad add 10.0.0.3/24 dev eth0
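
and similarly on the laptop (a sketch; the interface name eth0 and the address 10.0.0.2 are assumptions, adjust to your setup):

sudo ip addr add 10.0.0.2/24 dev eth0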

3.2. Software

  • TickTockDB
    • Version: 0.11.0
    • Config: tt.conf
    • Most configs are default except the following. You can run ./admin/config.sh to check:
ylin30@raspberrypi:~/ticktock $ ./admin/config.sh
{
  "tsdb.gc.frequency": "5min",
  "tsdb.timestamp.resolution": "millisecond"
}
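
For reference, the corresponding entries in tt.conf would look roughly like this (a sketch; the key names come from the config.sh output above, but the exact file syntax is an assumption):

tsdb.gc.frequency = 5min
tsdb.timestamp.resolution = millisecond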

For comparison purposes, we pick InfluxDB since it is the most popular TSDB and one of the very few TSDBs that can run on such a tiny ARM 32-bit SBC, the PI-zero-W. If you look up TSDBs in the RaspberryPI forum, InfluxDB is the de facto option.

  • InfluxDB
  • IoTDB-benchmark
    • Version: main
    • Sample config:
    • Important settings in the config:
      • Read-write ratio: reads (10%) and writes (90%).
      • Loop: 2160, with a 10-second interval between operations (which keeps each test running for 6 hours (= 2160 * 10s)).
      • Number of sensors per device: 200.
      • We scale up the load by increasing the number of clients from 100 to 1400.
      • We bind each client to one device, so we update CLIENT_NUMBER and DEVICE_NUMBER in config.properties for each test (see the sketch after this list).
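
As an illustration, here is a sketch of the relevant config.properties entries for the 40K-cardinality run (200 clients * 200 sensors). Only CLIENT_NUMBER and DEVICE_NUMBER are named above; the other parameter names (SENSOR_NUMBER, LOOP, OP_INTERVAL, OPERATION_PROPORTION) follow the IoTDB-benchmark README and may differ across versions:

# 200 clients, one device per client, 200 sensors per device => 40K time series
CLIENT_NUMBER=200
DEVICE_NUMBER=200
SENSOR_NUMBER=200
# 2160 loops at a 10-second interval => about 6 hours per test
LOOP=2160
OP_INTERVAL=10000
# roughly 90% writes and 10% reads
OPERATION_PROPORTION=9:1:0:0:0:0:0:0:0:0:0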

The above configs simulate a list of clients collecting a list of metrics (DEVICE_NUMBER * 200 sensors per device) every 10 seconds and sending the metrics to TickTockDB/InfluxDB. Note that we use the InfluxDB line write protocol for both TickTockDB and InfluxDB, since it is more concise than both the OpenTSDB plain put protocol and InfluxDB v1 batch writes. Essentially, the line write protocol can send multiple data points in just one line; e.g., you can send cpu.usr, cpu.sys, and cpu.idle of cpu 1 on host rpi in one line:

cpu,host=rpi,id=1 usr=10,sys=20,idle=70 1465839830000
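
For illustration, such a line can be written to InfluxDB over its v1 HTTP write API with curl (a sketch: 10.0.0.3 is the static IP assigned above, 8086 is the default InfluxDB port, the database name test is an assumption, and precision=ms matches the millisecond timestamp):

curl -i -XPOST 'http://10.0.0.3:8086/write?db=test&precision=ms' --data-binary 'cpu,host=rpi,id=1 usr=10,sys=20,idle=70 1465839830000'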

4. 40K cardinality: Resource consumption comparison

We first test 200 clients (i.e., 200 devices), each sending 200 sensors' data every 10 seconds, which equals a cardinality of 40K (= 200 devices * 200 sensors) and a write throughput of about 4,000 data points per second. Note that this is not a backfill case and the write throughput is fixed, so we can't compare throughput between TickTockDB and InfluxDB. Instead, we compare how much OS resources TickTockDB and InfluxDB consume at this load. The less OS resources a TSDB consumes, the better it is.

4.1. CPU

Figures: CPU idle, TickTockDB; CPU idle, InfluxDB

The above figures show the cpu.idle metric during the tests; the higher, the better. cpu.idle was 84% for TickTockDB and 50% for InfluxDB, respectively. Note that cpu.idle was only 90% even after the tests were done; that 10% of CPU was consumed by OS metric collectors running on the PI-0-W. So TickTockDB actually consumed about 6% of CPU and InfluxDB about 40%, respectively.

Also note that InfluxDB's cpu.idle suddenly dropped to 0% from 14:00 to 15:40. This means that the CPU was completely saturated and InfluxDB couldn't handle the 40K-cardinality load at all. We dug into the logs and found that the spikes coincided with compaction. We believe compaction drove IO util to 100%, and consequently CPU usage also spiked to 100%.

InfluxDB compaction caused spikes in OS resource usage.

Compaction is also enabled in TickTockDB, and it ran well with 40K cardinality.

4.2. IO Util

Figures: IO util, TickTockDB; IO util, InfluxDB

TickTockDB's IO util was almost negligible (0.668%), while InfluxDB's was 15%-20% without compaction and 100% during compaction.

4.3 Write bytes rate

Figures: write_bytes, TickTockDB; write_bytes, InfluxDB

TickTockDB's write bytes rate was 24KB/sec and InfluxDB's was 190KB/sec, respectively. The final data size in TickTockDB's data dir was 126MB, versus 78MB in InfluxDB's data dir. This indicates that TickTockDB's write IO is more efficient than InfluxDB's, though InfluxDB's data compression ratio is better.

4.4 Read bytes rate

Figures: read_bytes, TickTockDB; read_bytes, InfluxDB

Both TickTockDB's and InfluxDB's read bytes rates were normally small, but InfluxDB's read bytes rate spiked during compaction.

4.5 Memory

Figures: RSS memory, TickTockDB; RSS memory, InfluxDB

TickTockDB's RSS memory grew to 91MB. InfluxDB's RSS memory stayed at around 200MB before compaction and grew to 300MB during compaction.

5. Max cardinality: Resource consumption comparison

We tested InfluxDB against 40K cardinality multiple times (see the figures below) and it couldn't handle it. It can handle 20K cardinality (100 devices * 200 sensors/device), so we consider InfluxDB's max cardinality to be 20K in this test setup.

We would also like to know the max cardinality TickTockDB can handle, so we gradually increased the number of clients (and correspondingly the number of devices: 200, 500, 800, 1100, 1400) to see when TickTockDB would start to saturate one of the OS resources, or when the whole test would take longer than 6 hours to finish (meaning that, on average, operations couldn't finish within the 10-second interval).

The following figures show all the OS resources during the tests. TickTockDB consumes more and more resources as cardinality grows, almost proportionally. CPU was the first OS resource to saturate, when cardinality reached 280K (1400 devices * 200 sensors/device). At CPU saturation, IO util was only 27%, memory was still below 300MB, and read and write rates were low. We also noted that the whole 280K test took 23373.80 seconds, longer than the planned 21600 seconds (= 2160 loops * 10s). So we consider the max cardinality TickTockDB can handle to be 220K (1100 devices * 200 sensors/device).

Please refer to the following figures for details. We skip explanations for simplicity.

5.1. CPU

Figures: CPU idle, TickTockDB; CPU idle, InfluxDB

5.2. IO Util

Figures: IO util, TickTockDB; IO util, InfluxDB

5.3 Write bytes rate

Figures: write_bytes, TickTockDB; write_bytes, InfluxDB

5.4 Read bytes rate

Figures: read_bytes, TickTockDB; read_bytes, InfluxDB

5.5 Memory

Figures: RSS memory, TickTockDB; RSS memory, InfluxDB

6. Conclusion

  • We compared TickTockDB with InfluxDB on PI-zero-wireless (ARMv6, 32-bit OS) in terms of max cardinality. Instead of backfill scenarios, we simulated normal scenarios in which a list of clients sends a list of time series (200 sensors per client in one write) at a 10-second interval.
  • InfluxDB's max cardinality is 20K (i.e., 100 devices and 200 sensors/device).
  • TickTockDB's max cardinality is 220K (i.e., 1100 devices and 200 sensors/device).
  • At the same cardinality load, TickTockDB consumes much less OS resources than InfluxDB in CPU, IO, and memory.
  • On the PI-zero-W, CPU is the bottleneck for TickTockDB; it was the first OS resource to saturate.