Reorganize the code of benchmark #2

Merged
merged 1 commit on Jan 13, 2023

Conversation

presidento
Contributor

Decouple the common benchmark steps from the database implementations. This makes it easy to add new implementations and to remove unavailable ones.

Also separate single writes from batch writes, since batch writes are faster in almost every implementation.
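Roughly, the split looks like this. This is only an illustrative sketch of the idea; `BenchmarkTarget`, `run_single_writes`, and `run_batch_writes` are made-up names, not the actual classes or functions in the repository:

```python
import time
from abc import ABC, abstractmethod
from typing import Iterable, Tuple


class BenchmarkTarget(ABC):
    """One key/value store implementation under test (illustrative interface)."""

    name: str

    @abstractmethod
    def open(self, path: str) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

    @abstractmethod
    def write(self, key: bytes, value: bytes) -> None:
        """Single write, committed immediately."""

    @abstractmethod
    def batch_write(self, items: Iterable[Tuple[bytes, bytes]]) -> None:
        """Write many items in one transaction or batch."""

    @abstractmethod
    def read(self, key: bytes) -> bytes: ...


def run_single_writes(target: BenchmarkTarget, count: int) -> float:
    """Common step: time `count` single writes against any implementation."""
    target.open(f"bench-{target.name}")
    start = time.perf_counter()
    for i in range(count):
        target.write(f"key-{i}".encode(), b"x" * 100)
    elapsed = time.perf_counter() - start
    target.close()
    return elapsed


def run_batch_writes(target: BenchmarkTarget, count: int) -> float:
    """Common step: time one batch write of `count` items."""
    target.open(f"bench-{target.name}")
    items = ((f"key-{i}".encode(), b"x" * 100) for i in range(count))
    start = time.perf_counter()
    target.batch_write(items)
    elapsed = time.perf_counter() - start
    target.close()
    return elapsed
```

With this shape, adding a new store only means writing one small adapter class; the timing loops stay untouched.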

Introduce MAX_TIME so we do not spend unbounded time measuring very slow implementations.
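The cutoff itself can be very simple: once one run of an implementation exceeds MAX_TIME, skip the larger sizes for it. A rough sketch, assuming a runner function like the ones above (`measure` and `SIZES` are illustrative names):

```python
MAX_TIME = 100  # seconds; implementations slower than this skip the larger sizes

SIZES = [10, 100, 1_000, 10_000, 100_000, 1_000_000]


def measure(target, runner):
    """Run `runner` for every size, stopping early once MAX_TIME is exceeded."""
    results = {}
    for size in SIZES:
        elapsed = runner(target, size)
        results[size] = elapsed
        if elapsed > MAX_TIME:
            break  # larger sizes would only be slower; they show up as "-" in the tables
    return results
```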

Add a combined case: in the real world, key/value stores are not used by writing the whole database first and then reading it all back. In the combined case the same number of writes and reads is measured against databases of different sizes.
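To make "combined" concrete, something along these lines (illustrative only; `run_combined` is a made-up name, and the real benchmark may pick keys and value sizes differently): pre-fill the store to a given size, then time a fixed number of writes and reads against it, so the measured work is identical for every database size.

```python
import random
import time


def run_combined(target, db_size: int, operations: int = 1000) -> float:
    """Pre-fill `db_size` items, then time `operations` writes and `operations` reads."""
    target.open(f"bench-{target.name}")
    target.batch_write((f"key-{i}".encode(), b"x" * 100) for i in range(db_size))

    start = time.perf_counter()
    for _ in range(operations):
        i = random.randrange(db_size)
        target.write(f"key-{i}".encode(), b"y" * 100)
        target.read(f"key-{i}".encode())
    elapsed = time.perf_counter() - start

    target.close()
    return elapsed
```

Measured this way, the result reflects per-operation cost at a given database size rather than the cost of dumping or loading the entire dataset.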

WiredTiger is missing from PyPI, so I removed it.

Added SQLite WAL mode, since even with autocommit it is much faster than the two existing SQLite variants.
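For reference, enabling WAL is just a pragma on the connection; a minimal standalone example using the standard sqlite3 module (not the benchmark's exact wrapper code, and the `synchronous` setting is an optional extra, not necessarily what the benchmark uses):

```python
import sqlite3

# autocommit (isolation_level=None), but with the write-ahead log enabled
conn = sqlite3.connect("bench.sqlite", isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")  # commonly paired with WAL; optional
conn.execute("CREATE TABLE IF NOT EXISTS kv (key BLOB PRIMARY KEY, value BLOB)")
conn.execute("REPLACE INTO kv (key, value) VALUES (?, ?)", (b"key-0", b"value-0"))
conn.close()
```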

@presidento
Contributor Author

presidento commented Dec 11, 2022

Here is a sample output with MAX_TIME = 100:

write

| items | lmdbm | vedis | unqlite | rocksdict | dbm.gnu | semidbm | pysos | dbm.dumb | sqlite-wal | sqlite-autocommit | sqlite-batch | dummypickle | dummyjson |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 0.015 | 0.000 | 0.000 | 0.031 | - | 0 | 0 | 0 | 0.015 | 0.015 | 0 | 0.015 | 0 |
| 100 | 0.141 | 0.000 | 0.000 | 0.047 | - | 0 | 0 | 0.031 | 0.031 | 0.109 | 0.094 | 0.156 | 0.172 |
| 1000 | 1.609 | 0.000 | 0.016 | 0.062 | - | 0.016 | 0.031 | 0.328 | 0.328 | 1.015 | 0.937 | 1.781 | 4.250 |
| 10000 | 15.359 | 0.062 | 0.062 | 0.172 | - | 0.140 | 0.250 | 3.219 | 3.125 | 17.922 | 19.453 | 95.188 | - |
| 100000 | - | 0.719 | 0.672 | 1.390 | - | 1.562 | 2.687 | - | 31.172 | - | - | - | - |
| 1000000 | - | 9.234 | 9.140 | 16.594 | - | 17.484 | 27.390 | - | - | - | - | - | - |

batch

| items | lmdbm | vedis | unqlite | rocksdict | dbm.gnu | semidbm | pysos | dbm.dumb | sqlite-wal | sqlite-autocommit | sqlite-batch | dummypickle | dummyjson |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 0.000 | 0.000 | 0.000 | - | - | - | - | 0 | 0.016 | 0 | 0 | 0 | 0 |
| 100 | 0.000 | 0.000 | 0.000 | - | - | - | - | 0.032 | 0.031 | 0.078 | 0 | 0 | 0 |
| 1000 | 0.000 | 0.015 | 0.000 | - | - | - | - | 0.312 | 0.109 | 0.719 | 0.109 | 0 | 0 |
| 10000 | 0.031 | 0.047 | 0.046 | - | - | - | - | 3.140 | 1.046 | 16.156 | 1.047 | 0 | 0.047 |
| 100000 | 0.422 | 0.609 | 0.594 | - | - | - | - | - | 10.875 | - | 10.985 | 0.375 | 2.859 |
| 1000000 | 4.188 | 8.500 | 8.406 | - | - | - | - | - | - | - | - | 41.760 | - |

read

| items | lmdbm | vedis | unqlite | rocksdict | dbm.gnu | semidbm | pysos | dbm.dumb | sqlite-wal | sqlite-autocommit | sqlite-batch | dummypickle | dummyjson |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 0.000 | 0.000 | 0.000 | 0.031 | - | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 100 | 0.000 | 0.000 | 0.000 | 0.031 | - | 0 | 0 | 0 | 0.031 | 0.016 | 0.031 | 0 | 0 |
| 1000 | 0.000 | 0.000 | 0.000 | 0.046 | - | 0.015 | 0 | 0.078 | 0.218 | 0.297 | 0.282 | 0 | 0 |
| 10000 | 0.047 | 0.047 | 0.047 | 0.093 | - | 0.109 | 0.141 | 0.765 | 2.079 | 2.890 | 3.013 | 0.031 | 0.015 |
| 100000 | 0.610 | 0.641 | 0.640 | 0.735 | - | 1.422 | 1.797 | - | 23.797 | - | 29.375 | 0.187 | 0.218 |
| 1000000 | 6.297 | 8.609 | 8.765 | 20.391 | - | 15.891 | 20.485 | - | - | - | - | 2.141 | - |

combined

| items | lmdbm | vedis | unqlite | rocksdict | dbm.gnu | semidbm | pysos | dbm.dumb | sqlite-wal | sqlite-autocommit | sqlite-batch | dummypickle | dummyjson |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | 0.156 | 0.000 | 0.015 | 0.047 | - | 0 | 0 | 0.078 | 0.235 | 0.390 | 0.406 | 0.156 | 0.156 |
| 100 | 0.156 | 0.015 | 0.000 | 0.047 | - | 0.016 | 0 | 0.078 | 0.235 | 0.390 | 0.390 | 0.156 | 0.203 |
| 1000 | 0.156 | 0.000 | 0.000 | 0.047 | - | 0.015 | 0.015 | 0.094 | 0.235 | 0.406 | 0.390 | 0.188 | 0.672 |
| 10000 | 0.156 | 0.015 | 0.015 | 0.047 | - | 0.015 | 0.031 | 0.203 | 0.234 | 0.469 | 0.406 | 0.594 | 5.141 |
| 100000 | 0.171 | 0.000 | 0.000 | 0.047 | - | 0.140 | 0.250 | - | 0.266 | - | 0.484 | 5.547 | 49.750 |
| 1000000 | 0.172 | 0.015 | 0.016 | 0.047 | - | 1.297 | 2.407 | - | - | - | - | 90.718 | - |

@Dobatymo
Owner

Hi, thank you for your contribution. I am a bit busy right now, but I will definitely review it soon! It looks like a good restructuring.
