Reduce external fragmentation #491

Open
jamii opened this issue Feb 22, 2023 · 0 comments
jamii commented Feb 22, 2023

Tracy reports ~1200 MB allocated out of a ~1800 MB span. The rest is presumably external fragmentation. We can probably reduce this by using an allocator that groups allocations by size and alignment.
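To illustrate the idea (a toy sketch, not TigerBeetle's actual allocator, and with made-up sizes): an allocator with segregated free lists per size class can always reuse a freed slot for the next request of the same class, so holes of awkward sizes do not accumulate between live allocations.

```python
from collections import defaultdict

class SizeClassAllocator:
    """Toy segregated-free-list allocator: one pool per size class."""

    def __init__(self):
        self.free = defaultdict(list)  # size class -> freed offsets
        self.top = 0                   # bump pointer for fresh memory

    @staticmethod
    def size_class(n):
        # Round up to the next power of two, a common size-class scheme.
        c = 1
        while c < n:
            c *= 2
        return c

    def alloc(self, n):
        c = self.size_class(n)
        if self.free[c]:
            # Reuse a slot of exactly this class: no external fragmentation.
            return self.free[c].pop()
        offset = self.top
        self.top += c
        return offset

    def dealloc(self, offset, n):
        self.free[self.size_class(n)].append(offset)

a = SizeClassAllocator()
p = a.alloc(100)      # lands in the 128-byte class
a.dealloc(p, 100)
q = a.alloc(120)      # same class, so the freed slot is reused exactly
assert p == q
assert a.top == 128   # no new memory was consumed by the second allocation
```

The cost of this scheme is internal fragmentation (rounding 100 bytes up to 128), which is bounded per size class, rather than unbounded external fragmentation between mixed-size allocations.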

bors bot added a commit that referenced this issue Feb 22, 2023
492: Increase lsm_batch_multiple to 64. r=jamii a=jamii

This is roughly the value we plan to use in production, so we should be benchmarking this configuration as early as possible.

This increases memory usage and increases the latency spike from the final phase of compaction, but we need to deal with both regardless.

#491 and #490 can help with memory usage.

#269 will reduce the latency spike, as will any optimization that improves compaction performance in general.

## Pre-merge checklist

Performance:

* [x] Compare `zig benchmark` on linux before and after this pr.
```
lbm=4
1643655168 bytes allocated
1223 batches in 116.60 s
load offered = 1000000 tx/s
load accepted = 85766 tx/s
batch latency p00 = 5 ms
batch latency p10 = 26 ms
batch latency p20 = 27 ms
batch latency p30 = 32 ms
batch latency p40 = 36 ms
batch latency p50 = 42 ms
batch latency p60 = 112 ms
batch latency p70 = 142 ms
batch latency p80 = 172 ms
batch latency p90 = 217 ms
batch latency p100 = 411 ms

lbm=64
2989330432 bytes allocated
1223 batches in 93.51 s
load offered = 1000000 tx/s
load accepted = 106942 tx/s
batch latency p00 = 5 ms
batch latency p10 = 26 ms
batch latency p20 = 26 ms
batch latency p30 = 27 ms
batch latency p40 = 31 ms
batch latency p50 = 31 ms
batch latency p60 = 32 ms
batch latency p70 = 36 ms
batch latency p80 = 37 ms
batch latency p90 = 42 ms
batch latency p100 = 3624 ms
```
OR
* [ ] I am very sure this PR could not affect performance.


Co-authored-by: Jamie Brandon <jamie@scattered-thoughts.net>
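For context, a quick back-of-the-envelope comparison of the two runs above (numbers copied from the benchmark output):

```python
# lbm=4 vs lbm=64, from the benchmark output above.
lbm4_time, lbm64_time = 116.60, 93.51            # seconds for 1223 batches
lbm4_tps, lbm64_tps = 85766, 106942              # accepted tx/s
lbm4_bytes, lbm64_bytes = 1643655168, 2989330432 # bytes allocated

speedup = lbm4_time / lbm64_time      # ~1.25x faster overall
throughput = lbm64_tps / lbm4_tps     # ~1.25x higher accepted load
memory = lbm64_bytes / lbm4_bytes     # ~1.82x more memory
```

So lbm=64 trades roughly 1.8x the memory for roughly 1.25x the throughput, plus the much larger p100 latency spike (3624 ms vs 411 ms) noted in the PR description.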