This repository has been archived by the owner on Jul 19, 2023. It is now read-only.

Correctly saves the buffer capacity in BufferedRowReaderIterator #842

Closed
wants to merge 1 commit into from

Conversation

cyriltovena
Collaborator

I was looking at compaction and trying to figure out why it was slow.

❯ benchstat before.txt after.txt
name                  old time/op    new time/op    delta
BufferedRowReader-16     709ns ±14%     306ns ± 3%   -56.88%  (p=0.008 n=5+5)

name                  old alloc/op   new alloc/op   delta
BufferedRowReader-16    2.72kB ± 0%    0.00kB       -100.00%  (p=0.008 n=5+5)

name                  old allocs/op  new allocs/op  delta
BufferedRowReader-16      1.00 ± 0%      0.00       -100.00%  (p=0.008 n=5+5)

This most likely makes flushing a block ~50% faster and lighter.

PS: Not fully sure why we lose the slice capacity when slicing.

Collaborator

@simonswine simonswine left a comment


LGTM

```diff
-	if len(r.buff) > 1 {
-		r.buff = r.buff[1:]
+	if len(r.bufferedRows) > 1 {
+		r.bufferedRows = r.bufferedRows[1:]
```
Collaborator


That old line surely reduced the capacity by 1.

It is only when you cut off the end that the capacity stays the same, so buf[:cap(buff)-1] will still have the same capacity.
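A minimal standalone Go sketch (not from the PR) illustrating the capacity behavior being discussed: reslicing from the front gives up the capacity ahead of the new start, while cutting off the end keeps the backing array's full capacity.

```go
package main

import "fmt"

func main() {
	s := make([]int, 4, 8) // len 4, cap 8

	// Reslicing from the front: the element before the new start is
	// no longer reachable, so cap(s[1:]) == cap(s) - 1.
	front := s[1:]
	fmt.Println(len(front), cap(front)) // 3 7

	// Cutting off the end: the slice still starts at the same backing
	// array position, so cap(s[:len(s)-1]) == cap(s).
	back := s[:len(s)-1]
	fmt.Println(len(back), cap(back)) // 3 8
}
```

This is why repeatedly doing `s = s[1:]` in a loop steadily shrinks capacity and eventually forces reallocations, whereas shifting elements within the same backing array (or tracking a read index) preserves it.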

@cyriltovena
Collaborator Author

The PR is ready to merge, but it triggers another bug hidden in the merge. I'll update this PR with the fix soon.
