
Increase lsm_batch_multiple to 64. #492

Merged (1 commit, Feb 22, 2023)
Conversation

@jamii (Contributor) commented Feb 22, 2023

This is roughly the value we plan to use in production, so we should be benchmarking this configuration as early as possible.

This increases memory usage and increases the latency spike from the final phase of compaction, but we need to deal with both regardless.

#491 and #490 can help with memory usage.

#269 will reduce the latency spike, as will any optimization that improves compaction performance in general.

Pre-merge checklist

Performance:

  • Compare `zig benchmark` on Linux before and after this PR.

```
lbm=4
1643655168 bytes allocated
1223 batches in 116.60 s
load offered = 1000000 tx/s
load accepted = 85766 tx/s
batch latency p00 = 5 ms
batch latency p10 = 26 ms
batch latency p20 = 27 ms
batch latency p30 = 32 ms
batch latency p40 = 36 ms
batch latency p50 = 42 ms
batch latency p60 = 112 ms
batch latency p70 = 142 ms
batch latency p80 = 172 ms
batch latency p90 = 217 ms
batch latency p100 = 411 ms

lbm=64
2989330432 bytes allocated
1223 batches in 93.51 s
load offered = 1000000 tx/s
load accepted = 106942 tx/s
batch latency p00 = 5 ms
batch latency p10 = 26 ms
batch latency p20 = 26 ms
batch latency p30 = 27 ms
batch latency p40 = 31 ms
batch latency p50 = 31 ms
batch latency p60 = 32 ms
batch latency p70 = 36 ms
batch latency p80 = 37 ms
batch latency p90 = 42 ms
batch latency p100 = 3624 ms
```

OR

  • I am very sure this PR could not affect performance.
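To make the tradeoff in the benchmark above concrete, here is a quick sketch (plain Python, not part of the repo) that computes the ratios between the two runs from the numbers reported:

```python
# Compare the two benchmark runs reported above (lbm=4 vs lbm=64).
# All numbers are copied directly from the benchmark output.
runs = {
    4:  {"bytes": 1_643_655_168, "seconds": 116.60, "accepted": 85_766,  "p100_ms": 411},
    64: {"bytes": 2_989_330_432, "seconds": 93.51,  "accepted": 106_942, "p100_ms": 3624},
}

before, after = runs[4], runs[64]
memory_growth   = after["bytes"] / before["bytes"]        # ~1.82x more memory
throughput_gain = after["accepted"] / before["accepted"]  # ~1.25x more accepted tx/s
tail_growth     = after["p100_ms"] / before["p100_ms"]    # ~8.8x worse p100 latency

print(f"memory: {memory_growth:.2f}x, "
      f"throughput: {throughput_gain:.2f}x, "
      f"p100: {tail_growth:.1f}x")
```

So the change trades roughly 1.8x the memory and a much larger worst-case latency spike for about 25% more throughput, which matches the tradeoff described in the PR body.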

@sentientwaffle (Member)
bors d+

(d+ since it's a draft, but this change LGTM.)

@bors bors bot commented Feb 22, 2023

✌️ jamii can now approve this pull request. To approve and merge a pull request, simply reply with bors r+. More detailed instructions are available here.

@jamii (Contributor, Author) commented Feb 22, 2023

Oops. I guess the button defaults to whatever I picked last?

@jamii (Contributor, Author) commented Feb 22, 2023

bors r+

bors bot added a commit that referenced this pull request Feb 22, 2023

492: Increase lsm_batch_multiple to 64. r=jamii a=jamii

Co-authored-by: Jamie Brandon <jamie@scattered-thoughts.net>
@bors bors bot commented Feb 22, 2023

Build failed:

@jamii jamii marked this pull request as ready for review February 22, 2023 23:29
@jamii (Contributor, Author) commented Feb 22, 2023

bors retry

@bors bors bot merged commit 4afc75a into main Feb 22, 2023