
TickTockDB vs. OpenTSDB, backfill scenario comparison, X86


Table of Contents

1. Introduction

2. IoTDB-benchmark Introduction

3. Experiment Settings

4. 2K cardinality, OpenTSDB vs. TickTockDB

4.1 Throughput

4.2 Response time

4.3 CPU

4.4 IO Util

4.5 Write bytes rate

4.6 Memory

4.7 Summary

5. 100K cardinality, TickTockDB only

5.1 Throughput

5.2 Response time

5.3 CPU

5.4 IO Util

5.5 Write bytes rate

5.6 Memory

6. Conclusion

1. Introduction

In our previous wiki about max cardinality comparison, we showed that TickTockDB can handle 4M time series while OpenTSDB can only handle 60K, on a 2-vCPU, 4GB X86 docker. This experiment is about the backfill scenario, i.e., clients send data points to a TSDB back to back, as fast as possible. We want to compare how fast TickTockDB and OpenTSDB, the original TimeSeriesDB that motivated us to develop TickTockDB, can ingest data. Please refer to the TickTockDB README for our original motivations.

OpenTSDB supports write requests in Json format through HTTP and in put format (e.g., put testM1 1514779734 2.33266 host=foo) through TCP. TickTockDB is compatible with OpenTSDB, and also supports the influx line protocol (e.g., cpu,host=rpi,id=1 usr=10,sys=20,idle=70 1465839830). In this experiment, we use Json format for OpenTSDB writes, and Json, put, and line formats for TickTockDB writes. All requests are sent over HTTP. We gave up on TCP since HBase in the OpenTSDB docker always crashes if it runs longer than half an hour; we believe this was due to a shortage of memory.
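For reference, the sketch below shows what the three write formats look like on the wire, using curl. The Json payload and the /api/put endpoint follow the standard OpenTSDB HTTP API; the TickTockDB endpoints used here for plain put lines and influx line protocol (/api/put and /api/write) are assumptions based on its OpenTSDB/InfluxDB compatibility, so check the TickTockDB docs for the exact paths.

```sh
# Json format (standard OpenTSDB HTTP API; TickTockDB accepts the same payload):
curl -X POST http://localhost:4242/api/put \
     -H 'Content-Type: application/json' \
     -d '[{"metric":"testM1","timestamp":1514779734,"value":2.33266,"tags":{"host":"foo"}}]'

# put format sent as an HTTP body (TickTockDB endpoint path is an assumption):
curl -X POST http://localhost:6182/api/put \
     -d 'put testM1 1514779734 2.33266 host=foo'

# influx line protocol (TickTockDB endpoint path is an assumption):
curl -X POST http://localhost:6182/api/write \
     -d 'cpu,host=rpi,id=1 usr=10,sys=20,idle=70 1465839830'
```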

We used two different workloads (case 1: 2k cardinality, 10 data points/request; case 2: 100k cardinality, 50 data points/request) since case 1 does not saturate TickTockDB. We did not apply case 2 to OpenTSDB since HBase would crash.

2. IoTDB-benchmark Introduction

We selected IoTDB-benchmark for performance evaluation. Please refer to its README and the introduction in the previous wiki for details.

3. Experiment Settings

3.1. Hardware

We ran all tests on an Ubuntu laptop with an AMD Ryzen 5 5600H CPU (12 vCPUs), 24GB of memory (DDR4 3200 MHz), and a 1TB 5400rpm HDD. We also allocated 2GB of swap space. OpenTSDB and TickTockDB ran in an Ubuntu docker (X86, 2 vCPUs, 4GB memory). IoTDB-benchmark ran on the laptop host.
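For illustration, a container capped at those resources could be created as below. This is only a sketch, not necessarily the exact command we used; the image name, volume path, and pinned CPUs are assumptions (the OpenTSDB container command we actually used is listed in section 3.2).

```sh
# Hypothetical example: cap an Ubuntu container at 2 vCPUs / 4GB for TickTockDB,
# exposing the two HTTP ports used in our tt command (6182, 6183).
docker run -d --name ticktock --cpuset-cpus 1-2 -m 4g \
    -p 6182:6182 -p 6183:6183 \
    -v /opt/ticktock:/opt/ticktock \
    ubuntu:22.04 sleep infinity
# TickTockDB would then be started inside the container with `docker exec`.
```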

3.2. Software

  • TickTockDB
      • Version: 0.11.7
      • Command: ./bin/tt -c conf/tt.conf --tsdb.timestamp.resolution millisecond --http.server.port 6182,6183 --http.listener.count 2,2 --tsdb.compact.frequency 0h [--http.request.format json] & (Add --http.request.format json to test Json format.)
      • Please raise the open-file limit (ulimit -n) to a very high number. See this instruction.
  • OpenTSDB
      • Version: opentsdb-2.4.0. We used a docker image, petergrace/opentsdb-docker, which runs HBase on top of files instead of Hadoop.
      • Docker command: [yi-IdeaPad ~]$ docker run -d --name opentsdb --cpuset-cpus 1-2 -m 4g -h opentsdb -p 4242:4242 -v /opt/opentsdb:/etc/opentsdb petergrace/opentsdb-docker
      • Config: default
  • IoTDB-benchmark
      • Read-Write ratio: writes only (100%).
      • Case 1: 2k cardinality (OpenTSDB and TickTockDB)
          • 200 devices (= 200 clients * 1 device/client) and 10 sensors/device;
          • Ingest 10 weeks of data for OpenTSDB and 100 weeks for TickTockDB (otherwise the tests would finish too fast), with a 10-second interval between 2 consecutive data points in the same time series;
          • Each request contains 10 data points (BATCH_SIZE_PER_WRITE=1).
      • Case 2: 100k cardinality (TickTockDB only)
          • 10,000 devices (= 1,000 clients * 10 devices/client) and 10 sensors/device;
          • Ingest 50 weeks of data with a 10-second interval between 2 consecutive data points in the same time series;
          • Each request contains 50 data points (BATCH_SIZE_PER_WRITE=5).

In summary, the above config simulates a list of clients collecting a list of metrics (DEVICE_NUMBER * 10 sensors per device) and writing them to TickTockDB/OpenTSDB back to back in Json/put/line format. A sketch of the corresponding benchmark configuration follows below.
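The excerpt below sketches how the two cases map onto IoTDB-benchmark's config.properties. The key names shown are the ones referenced above (DEVICE_NUMBER, SENSOR_NUMBER, BATCH_SIZE_PER_WRITE, etc.); other keys and exact values such as POINT_STEP, LOOP, and the write-only operation proportion depend on the benchmark version, so treat this as an assumption and consult the IoTDB-benchmark README for the authoritative settings.

```properties
# Case 1: 2k cardinality (200 devices * 10 sensors), 10 data points/request
CLIENT_NUMBER=200
DEVICE_NUMBER=200
SENSOR_NUMBER=10
BATCH_SIZE_PER_WRITE=1      # 1 timestamp * 10 sensors = 10 data points/request
POINT_STEP=10000            # 10-second interval (assumed to be in milliseconds)

# Case 2: 100k cardinality (10,000 devices * 10 sensors), 50 data points/request
CLIENT_NUMBER=1000
DEVICE_NUMBER=10000
SENSOR_NUMBER=10
BATCH_SIZE_PER_WRITE=5      # 5 timestamps * 10 sensors = 50 data points/request
POINT_STEP=10000

# LOOP is set so the generated timestamps span 10/100 weeks (case 1) or 50 weeks (case 2),
# and the operation proportion is set to writes only (100%).
```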

4. 2K cardinality, OpenTSDB vs. TickTockDB

4.1 Throughput

Write Throughput

With OpenTSDB's default write format (i.e., Json), OpenTSDB's write throughput is 69,685 data points/second and TickTockDB's is 688,141, i.e., TickTockDB's throughput is about 10 times that of OpenTSDB. If TickTockDB uses put format and influx line format in write requests, its throughput is 814,049 and 824,698 data points/second, respectively. Line format is slightly better than put format.

4.2 Response time

Write response time

If write requests are in Json format, OpenTSDB's response time per request is 28.59ms while TickTockDB's is 2.77ms on average. The P999 response times of OpenTSDB and TickTockDB are 2721.13ms (out of the figure's boundary) and 13.99ms, respectively.

On average, line format is almost the same as put format, which is faster than Json format. But the P999 response time is the other way around.

4.3 CPU

cpu

With write requests in Json format, OpenTSDB and TickTockDB used both vCPUs completely (i.e., 200%). With write requests in put format and line format, TickTockDB's CPU usage was not saturated, only 160%-180% and 140%-150%, respectively. So line format processing is more lightweight than put format, which is more lightweight than Json format.

By the way, OpenTSDB's CPU was still active even after the benchmark test finished. We think it may be due to some background work left to do. Actually, OpenTSDB could barely handle 2k time series in backfill; if we ran the test longer, HBase might crash. We consider its throughput to be about 70K data points/second anyway.

4.4 IO Util

IO util

IO util of OpenTSDB is up to 27.5%, much higher than TickTockDB's. With all three write formats, IO util of TickTockDB was below 2.5%.

4.5 Write bytes rate

Write bytes rate

The write-bytes-rate pattern was similar to IO util. OpenTSDB's write rate was very high, up to 14MB/sec. TickTockDB's write rate was about 1MB/sec in all three write formats.

4.6 Memory

RSS Memory

The OpenTSDB docker ran two Java processes, the HBase master and opentsdb. They consumed about 1.2GB of RSS memory each, 2.4GB in total. Note that we didn't tune their heap sizes but just used the defaults.

TickTockDB is a single process. With 2k cardinality, it consumed very little RSS memory, less than 30MB.

4.7 Summary

OpenTSDB is much more heavyweight than TickTockDB in terms of CPU, memory, and IO resources. In backfill cases, CPU is the bottleneck. For OpenTSDB, memory is also a bottleneck. TickTockDB didn't use all the CPU in the put and line format cases. We believe it was because our benchmark clients didn't send requests fast enough to saturate the CPUs.

So in the next section we will increase the workload by using higher cardinality (100k = 10k devices * 10 sensors/device) and more data points per write request (50 instead of 10). We don't test OpenTSDB anymore since it can't sustain such loads.

5. 100K cardinality, TickTockDB only

5.1 Throughput

Write throughput

With higher cardinality and more data points per request, throughput is higher. When requests are in Json format, the throughput is 1,430,167 data points/sec, compared with 688,141 in case 1 (2k cardinality, 10 data points/request) above. When requests are in line format, the throughput increases the most: 3,016,147 data points/sec in case 2 (100k cardinality, 50 data points/request) versus 824,698 in case 1. We think this is because case 2 completely uses all CPU resources, as shown in the CPU figure later. Throughput of put format is in between Json and line format.

5.2 Response time

Write response time

The response time pattern is similar between case 1 and case 2. Json format has the largest average response time and line format the smallest; the P999 response time is the other way around.

5.3 CPU

cpu

Now let's look at CPU usage. All three tests (Json, put, and line format) use almost 200% CPU in the 2-vCPU docker. Line format might have a little room left, but we consider the CPU saturated already. It is clear to us that, in case 1, TickTockDB had not reached its max throughput with put and line format since clients didn't send requests fast enough. With higher cardinality and more data points per request, TickTockDB can achieve higher throughput. The highest is about 3M data points/second when writes are in line format.

5.4 IO Util

IO util

Similar to case 1, IO util is very low. Please note that the spike (in blue) is caused by data verification after the tests. We inspected the data to make sure that the stored data points were consistent with what the benchmark inserted.
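As an illustration of such a spot check, a stored series can be read back through the OpenTSDB-style query API that TickTockDB is compatible with. The metric and tag names below are hypothetical, and the actual verification in our runs was driven by the benchmark's own consistency checks rather than this exact command.

```sh
# Read back one hypothetical series to compare against what the benchmark wrote.
# /api/query is the standard OpenTSDB HTTP query endpoint.
curl -X POST http://localhost:6182/api/query \
     -H 'Content-Type: application/json' \
     -d '{
           "start": 1514764800000,
           "end":   1514851200000,
           "queries": [
             { "metric": "testM1", "aggregator": "none", "tags": { "host": "foo" } }
           ]
         }'
```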

5.5 Write bytes rate

Write bytes rate

Similar to case 1, IO write rate is very low.

5.6 Memory

RSS Memory

Similar to case 1, RSS memory is very low.

6. Conclusion

  • We compared TickTockDB with OpenTSDB on X86 (2 vCPUs, 4GB memory, 5400rpm HDD) using backfill scenarios, in which clients write data back to back in three formats (i.e., Json, put, line) through HTTP.
  • In case 1 (2k cardinality, 10 data points/request), the throughputs of OpenTSDB and TickTockDB are 69,685 and 688,141 data points/second, respectively, in Json format. TickTockDB can reach 824,698 data points/second with line format.
  • In case 2 (100k cardinality, 50 data points/request), TickTockDB can reach 3M data points/second with line format, as the CPU resources are fully utilized.
  • In both cases, CPU is the bottleneck for both OpenTSDB and TickTockDB. Memory is another bottleneck for OpenTSDB.
  • We suggest using the influx line format with TickTockDB, as it is more efficient than the other two formats and hence gives higher throughput and lower average response time (influx line : OpenTSDB put : OpenTSDB Json throughput = 3.0M : 2.1M : 1.4M data points/second).