
release-21.1: storage: pool pebbleReadOnly allocations #63972

Merged

Conversation

nvanbenschoten (Member)

Backport 1/1 commits from #63845.

/cc @cockroachdb/release


This commit introduces object pooling to avoid `pebbleReadOnly` allocations.
I found this to be important both because it avoids the initial `pebbleReadOnly`
allocation and because it allows the memory recycling inside each
`pebbleIterator` owned by a `pebbleReadOnly` to work correctly.
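
(For illustration, a minimal sketch of the pooling pattern in Go. The
`readOnlyPool` variable and the `newPebbleReadOnly`/`Close` shapes below are
hypothetical stand-ins, not the actual CockroachDB code; they assume a
`sync.Pool` holds recycled instances whose iterators keep their buffers.)

```
// Minimal sketch of the pooling pattern; names are illustrative.
package enginepool

import "sync"

// pebbleReadOnly stands in for the engine's read-only wrapper. Because
// instances are recycled rather than reallocated, the buffers inside
// their pebbleIterators keep their capacity across uses.
type pebbleReadOnly struct {
	closed bool
}

var readOnlyPool = sync.Pool{
	New: func() interface{} { return &pebbleReadOnly{} },
}

// newPebbleReadOnly fetches a recycled instance instead of allocating.
func newPebbleReadOnly() *pebbleReadOnly {
	p := readOnlyPool.Get().(*pebbleReadOnly)
	p.closed = false
	return p
}

// Close resets the instance and returns it, with its iterators'
// buffers intact, to the pool.
func (p *pebbleReadOnly) Close() {
	p.closed = true
	readOnlyPool.Put(p)
}
```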

I found this while running a few microbenchmarks and noticing that the
lockTable's calls to `intentInterleavingReader` were resulting in a large number
of heap allocations in `(*pebbleReadOnly).NewEngineIterator`. This was because
`lowerBoundBuf` and `upperBoundBuf` were always nil, so each of the four appends
in `(*pebbleIterator).init` caused an allocation.
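
(Why the recycling matters: appending into a zero-length reslice of a buffer
reuses its backing array once the buffer has grown. A sketch of that behavior,
with a hypothetical `setBounds` helper standing in for the real `init`:)

```
// Sketch of the bound-buffer reuse described above; illustrative only.
package enginepool

type pebbleIterator struct {
	lowerBoundBuf []byte
	upperBoundBuf []byte
}

// setBounds copies the bounds into the iterator's recycled buffers.
// On first use the appends allocate because the buffers are nil; once
// the iterator is reused, buf[:0] keeps the existing backing array, so
// the appends no longer hit the heap.
func (p *pebbleIterator) setBounds(lower, upper []byte) {
	p.lowerBoundBuf = append(p.lowerBoundBuf[:0], lower...)
	p.upperBoundBuf = append(p.upperBoundBuf[:0], upper...)
}
```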

```
name                          old time/op    new time/op    delta
KV/Scan/Native/rows=1-16        30.9µs ± 4%    29.9µs ± 6%   -3.29%  (p=0.000 n=20+20)
KV/Scan/Native/rows=100-16      54.2µs ± 4%    52.7µs ± 5%   -2.84%  (p=0.002 n=20+20)
KV/Scan/Native/rows=10-16       34.0µs ± 3%    33.1µs ± 6%   -2.64%  (p=0.001 n=20+20)
KV/Scan/Native/rows=1000-16      253µs ± 5%     255µs ± 5%     ~     (p=0.659 n=20+20)
KV/Scan/Native/rows=10000-16    2.16ms ± 4%    2.14ms ± 3%     ~     (p=0.072 n=20+20)

name                          old alloc/op   new alloc/op   delta
KV/Scan/Native/rows=1-16        8.69kB ± 0%    7.49kB ± 0%  -13.79%  (p=0.000 n=20+19)
KV/Scan/Native/rows=10-16       10.1kB ± 0%     8.9kB ± 0%  -11.87%  (p=0.000 n=20+18)
KV/Scan/Native/rows=100-16      22.7kB ± 0%    21.5kB ± 0%   -5.29%  (p=0.000 n=17+19)
KV/Scan/Native/rows=1000-16      174kB ± 0%     172kB ± 0%   -0.66%  (p=0.000 n=19+19)
KV/Scan/Native/rows=10000-16    1.51MB ± 0%    1.51MB ± 0%   -0.05%  (p=0.000 n=16+19)

name                          old allocs/op  new allocs/op  delta
KV/Scan/Native/rows=1-16          71.0 ± 0%      62.0 ± 0%  -12.68%  (p=0.000 n=20+20)
KV/Scan/Native/rows=10-16         75.0 ± 0%      66.0 ± 0%  -12.00%  (p=0.000 n=20+19)
KV/Scan/Native/rows=100-16        79.0 ± 0%      70.0 ± 0%  -11.39%  (p=0.000 n=19+19)
KV/Scan/Native/rows=1000-16       87.8 ± 1%      79.0 ± 0%   -9.97%  (p=0.000 n=20+16)
KV/Scan/Native/rows=10000-16       113 ± 2%       103 ± 2%   -8.19%  (p=0.000 n=17+19)
```

We may want to consider this as a candidate to backport to release-21.1, because
the lack of pooling here was even more detrimental with the separated lockTable,
which creates a separate EngineIterator with lower and upper bounds. So this may
have a small impact on cockroachdb#62078.

Release note (performance improvement): A series of heap allocations performed
when serving read-only queries are now avoided.
@cockroach-teamcity (Member)

This change is Reviewable

@stevendanna (Collaborator) left a comment

Reviewed 1 of 1 files at r1.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @sumeerbhola)

@sumeerbhola (Collaborator) left a comment

:lgtm:

Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @nvanbenschoten)

@nvanbenschoten merged commit d8c3562 into cockroachdb:release-21.1 on Apr 21, 2021
@nvanbenschoten deleted the backport21.1-63845 branch on Apr 22, 2021