Everstore is a high-performance, append-only event log database with ACID properties.
- Docker
Start by checking out the repository:
git clone https://github.com/perandersson/everstore-server.git
Then build the Docker image:
cd everstore-server
docker build . -t everstore-server:latest
Start a Docker container from the newly built image:
docker run -d --name=everstore-server -p 6929:6929 everstore-server:latest
The default port is 6929 and the default journal directory is /journals. You should expose the journals directory from the Docker container. This is done by adding a -v flag:
docker run -d --name=everstore-server -p 6929:6929 -v /path/on/host:/journals everstore-server:latest
- CMake 3.6 (or newer)
- On Linux: G++ 4.8 (or newer)
- On Mac: clang
- On Windows: Visual Studio 2013 (or newer)
Download the source code from this GitHub repository and run the following commands:
cd /path/to/source
cmake .
make
The build files can be found in the bin directory. It's recommended to run the test suite before actually using the server.
cd bin
./everstore-shared-test
You can then start the server by running:
cd bin
./everstore-server
- ACID
- Read-friendly event logs
- Fast
- Asynchronous
- Small memory footprint (max 2 MB per worker)
- Adapters for popular languages
- Conflict resolution
To be competitive, the database must have ACID properties.
Journals should be readable for non-technical people. It should also be possible to read the journals without any external programs.
It should be low-latency and able to handle a large number of events simultaneously.
All new web frameworks today support asynchronous request handling. Let's follow this trend.
Why require a massive amount of memory when it's not needed?
- Scala
- Java 8
- .NET
It should be possible to resolve any conflicts occurring when committing a transaction.
- Windows
- Linux (the shared POSIX mutex does not work correctly; a race condition occurs under very heavy load at the moment)
- Docker
Everstore implements transactional behaviour for reading and writing to an event journal. By making use of the actual event types, it ensures that events of the same type result in a conflict if they are saved at the same time, while events of different types (where the order between the events does not matter) do not result in a conflict.
The events are saved in a way that lets the server repair itself if a crash occurs or if the power is lost during a transactional write. The database automatically restores itself to the latest consistent state.
By making use of transactions and dedicated child processes responsible for different journals, we can ensure that isolation is achieved between different journals and transactions.
The server is split into two parts: the host and the workers. The host is responsible for managing the workers, which run as child processes of the host program. Because the host is very small and performs almost no logic, the risk of it crashing at runtime is reduced.
All database-related logic is performed by the worker child processes. When the host notices that one or more workers have crashed, it begins the "restart" procedure:
- Kill the previous process and clean up after it (if it's still running)
- Start the process again
- Register any existing connections
- Repair any journals it affected during its crash
- Start sending traffic to it again
The actual content of the log depends on the adapter. The only requirements the server places on the event log are:
- No new-line characters
- No NULL characters (used as a delimiter between events)
An event-log row looks like this if we use the Scala adapter:
2015-08-02T16:23:02.580 examples.UserCreated {"username":"pa@speedledger.se"}
The first part is the timestamp of when the "transaction" was committed; it is always in UTC and is managed by the server. The second part is the event name; the Scala adapter uses the full name of the event case class (as seen above). The last part is the data associated with the event.
The server does not require the event row data to be JSON; that is up to the serialization mechanism in the adapter. The server saves whatever it receives from the adapter.
- CPU: Intel Core i7-4770 @ 3.4 GHz
- RAM: 16 GB 1600 MHz
- HDD: Corsair SSD Force Series GS 240GB 2.5" SATA 6 Gb/s (SATA3.0), 555/525MB/s read/write, fast Toggle NAND
Peak Memory footprint:
everstore-worker.exe 1116 K
everstore-worker.exe 1116 K
everstore-worker.exe 1116 K
everstore-worker.exe 1116 K
everstore-worker.exe 1112 K
everstore-server.exe 824 K
VM-options: -Xmx1024M
EvtCount --> TimeInMS
720000 --> 1566
720000 --> 1387
720000 --> 1004
720000 --> 1015
720000 --> 1053
720000 --> 1168
720000 --> 1000
720000 --> 1083
720000 --> 998
720000 --> 1053
720000 --> 1014
720000 --> 1039
720000 --> 1010
720000 --> 1091
720000 --> 984
Peak Memory Footprint:
everstore-worker.exe 984 K
everstore-worker.exe 980 K
everstore-worker.exe 976 K
everstore-worker.exe 976 K
everstore-worker.exe 912 K
everstore-server.exe 823 K
VM-options: -Xmx1024M
EvtCount --> TimeInMS
1200000 --> 2147
1200000 --> 1380
1200000 --> 1376
1200000 --> 1336
1200000 --> 1570
1200000 --> 1218
1200000 --> 1338
1200000 --> 1411
1200000 --> 1199
1200000 --> 1297
1200000 --> 1281
1200000 --> 1249
1200000 --> 1455
1200000 --> 1252