Conversation

@fanyang89 (Contributor)

Motivation

Our current usage of BadgerDB has proven to be too resource-intensive for our use case, consuming unacceptable amounts of memory and CPU (commonly more than 10 GiB of memory).

Design

To mitigate this, we have refactored our datastore to write records directly into files. Once a file reaches a certain number of records, we switch to a new file and continue writing. Previous data files are compressed to reduce disk usage.

Note: protojson is not available after switching to gogoprotobuf.
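Since protojson is no longer usable, records presumably have to be serialized with gogo's binary marshaler before being framed into the data file. A minimal sketch under that assumption; the concrete message type is whatever the monitor's generated code provides, so `proto.Message` is used here to stay generic:

```go
package datastore

import "github.com/gogo/protobuf/proto"

// encodeRecord serializes a record with the gogo/protobuf binary
// marshaler; protojson is not an option once gogoprotobuf is in use.
func encodeRecord(rec proto.Message) ([]byte, error) {
	return proto.Marshal(rec)
}
```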
DataStore v2 is a simple storage engine for our monitor.

Data file layout:

```
ver,len,buf,ver,len,buf,...
```

Keep writing until the maximum number of entries is reached.
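A minimal sketch of appending one record in this framing. The `ver,len,buf` ordering comes from the layout above; the concrete field widths (a one-byte version and a little-endian uint32 length) are assumptions, since the PR does not specify them:

```go
package datastore

import (
	"bufio"
	"encoding/binary"
)

// appendRecord writes one framed record: version, length, payload.
// Field widths (1-byte version, 4-byte little-endian length) are an
// assumed encoding; only the ver,len,buf ordering is given by the PR.
func appendRecord(w *bufio.Writer, version uint8, buf []byte) error {
	if err := w.WriteByte(version); err != nil {
		return err
	}
	var lenBuf [4]byte
	binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(buf)))
	if _, err := w.Write(lenBuf[:]); err != nil {
		return err
	}
	_, err := w.Write(buf)
	return err
}
```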

The data file is then sealed and compressed. Every time a certain number of
files have been sealed, space reclamation is performed.
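A sketch of the seal step, under the assumption that sealing a file means compressing it and deleting the uncompressed original; gzip is used here as a placeholder, since the PR does not name the compression codec or the reclamation policy:

```go
package datastore

import (
	"compress/gzip"
	"io"
	"os"
)

// sealFile compresses a finished data file to <path>.gz and removes
// the uncompressed original, so only sealed files remain on disk.
func sealFile(path string) error {
	src, err := os.Open(path)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := os.Create(path + ".gz")
	if err != nil {
		return err
	}
	defer dst.Close()

	zw := gzip.NewWriter(dst)
	if _, err := io.Copy(zw, src); err != nil {
		return err
	}
	if err := zw.Close(); err != nil {
		return err
	}
	return os.Remove(path)
}
```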

The previous datastore modules are also moved as part of the v2 reorganization.
@fanyang89 added the enhancement label on Sep 13, 2023
@fanyang89 self-assigned this on Sep 13, 2023
@1023280072 merged commit 9400caa into main on Sep 13, 2023
@1023280072 deleted the datastore-v2 branch on Sep 13, 2023 at 07:49