刘禄恒 edited this page Nov 14, 2017 · 1 revision

Phase 2

Phase 2 will mainly boost throughput in the following ways:

  1. Use SQLite's WAL as the Raft log (the main change).
  2. Compress the log.
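To make item 1 concrete, here is a minimal sketch (in Python, with hypothetical file names) of what "the WAL as the Raft log" means at the byte level: the `-wal` file starts with a 32-byte header, followed by frames of a 24-byte frame header plus one database page, and those frames would be the units shipped through Raft.

```python
import os
import sqlite3
import struct
import tempfile

# Create a database in WAL mode and write some rows, keeping the
# connection open so the -wal file is not checkpointed away.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
with conn:
    conn.executemany("INSERT INTO kv (v) VALUES (?)", [("x",)] * 10)

# The WAL file header is 32 bytes; per the SQLite WAL format, its fields
# are big-endian: bytes 0-3 are a magic number (0x377f0682 or 0x377f0683)
# and bytes 8-11 are the database page size.
with open(path + "-wal", "rb") as f:
    header = f.read(32)
magic, _version, page_size = struct.unpack(">III", header[:12])
assert magic in (0x377F0682, 0x377F0683)

# Each subsequent frame is a 24-byte frame header followed by one page;
# these frames are what we would capture as Raft log entries.
frame_size = 24 + page_size
wal_bytes = os.path.getsize(path + "-wal")
n_frames = (wal_bytes - 32) // frame_size
print(f"page_size={page_size}, frames={n_frames}")
conn.close()
```

This only inspects the file from the outside; the actual plan below is to hook the WAL inside SQLite rather than re-parse the file.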

Survey/Research

  1. ActorDB uses two engines (SQLite and LMDB) connected by the WAL. We could adopt this approach, and should do so progressively.
  2. LMDB is fast; see Reference 1. Its performance at 30 million keys per node is enough for our situation.

> On smaller runs (30 million or less), LMDB came out on top on most of the metrics except for disk size. This is actually what we'd expect for B-trees: they're faster the fewer keys you have in them.

  1. ActorDB will change its architecture in the next version (moving away from SQLite plus LMDB); this may be a clue about the new architecture.
  2. Either way, we need to change the current Raft implementation.
  3. State-machine snapshot handling is not handled in ActorDB; we could keep our previous approach.
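On point 3, one plausible way to keep snapshot handling on our side is SQLite's online backup API, which Python exposes as `Connection.backup`. This is a hedged sketch, not the project's actual snapshot code; truncating the Raft log after the snapshot is not shown.

```python
import os
import sqlite3
import tempfile

# Sketch: take a Raft snapshot of the state machine by copying the live
# database with SQLite's online backup API. After the copy succeeds, the
# WAL-based Raft log up to this point could be truncated.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
with src:
    src.executemany("INSERT INTO kv (v) VALUES (?)", [("a",), ("b",)])

snap_path = os.path.join(tempfile.mkdtemp(), "snapshot.db")
dst = sqlite3.connect(snap_path)
with dst:
    src.backup(dst)  # consistent point-in-time copy of the whole database
rows = dst.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
print(rows)
dst.close()
src.close()
```

The backup runs online, so the state machine can keep serving reads while the snapshot is taken.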

Plan/Roadmap

A more practical roadmap:

  1. Hack the SQLite WAL to obtain the WAL data (perhaps per frame) as the Raft log.
  2. Reorganize the Raft implementation to adapt to the WAL, including handling the SQLite checkpoint.
  3. Test the throughput and fix issues.
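Steps 1 and 2, combined with the log compression from Phase 2, could be sketched as follows. The entry layout and function names here are hypothetical illustrations, not the project's wire format.

```python
import struct
import zlib

# Hypothetical on-disk layout for one Raft entry carrying a WAL frame:
# term (8 bytes) | index (8 bytes) | compressed length (4 bytes) | payload.
HEADER = struct.Struct(">QQI")

def encode_entry(term: int, index: int, wal_frame: bytes) -> bytes:
    """Wrap one WAL frame as a compressed Raft log entry."""
    payload = zlib.compress(wal_frame)
    return HEADER.pack(term, index, len(payload)) + payload

def decode_entry(buf: bytes):
    """Recover (term, index, wal_frame) from an encoded entry."""
    term, index, n = HEADER.unpack_from(buf)
    frame = zlib.decompress(buf[HEADER.size:HEADER.size + n])
    return term, index, frame

# Round-trip a fake 4 KiB page frame; real frames would come from the WAL.
frame = b"\x00" * 24 + b"page-data".ljust(4096, b"\x00")
blob = encode_entry(3, 42, frame)
assert decode_entry(blob) == (3, 42, frame)
print(len(frame), "->", len(blob))
```

Because WAL frames are full database pages that are often sparsely changed, compressing each entry (item 2 of Phase 2) can shrink the log substantially, as the mostly-zero page above shows.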

Result

  1. InsertThroughputNoIndex: one million non-indexed insert operations with 1000 concurrent jobs take 330 s, while the non-Raft implementation takes 150.346 s; both run with paraengineclient_d.exe, best effort with NPL.
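For easier comparison, the timings above convert to throughput as follows (simple arithmetic on the measured numbers, not new measurements):

```python
# Derived throughput from the timings above (1,000,000 inserts each).
raft_ops = round(1_000_000 / 330)       # inserts/s with Raft
plain_ops = round(1_000_000 / 150.346)  # inserts/s without Raft
slowdown = round(plain_ops / raft_ops, 2)
print(raft_ops, plain_ops, slowdown)
```

So the Raft layer currently costs roughly a 2.2x slowdown on this workload, which is the gap Phase 2 aims to close.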

Reference

  1. benchmarking-leveldb-vs-rocksdb-vs-hyperleveldb-vs-lmdb-performance (more results)
  2. actordb-how-and-why-we-run-sqlite-on-top-of-lmdb
  3. actordb-overview
  4. Lightning_Memory-Mapped_Database