Reduce ArrayUtil#grow in decompress #12996
Conversation
@@ -128,10 +128,12 @@ public void decompress(DataInput in, int originalLength, int offset, int length,
      }

      // Read blocks that intersect with the interval we need
      if (offsetInBlock < offset + length) {
Wondering if growExact would be better here. I think grow will try to oversize it?
You are right, thank you!
I kept the logic that if the new length is less than the current array length, we don't do anything. That case still appears sometimes.
I'd rather keep using `grow` over `growExact`. This helps make sure we don't keep allocating a new array in cases where lengths grow in small increments.
Okay, it's fixed :)
This reverts commit fd950c0.
This PR has not had activity in the past 2 weeks, labeling it as stale. If the PR is waiting for review, notify the dev@lucene.apache.org list. Thank you for your contribution!
This is a neat and simple optimization. We grow the `bytes` array to hold all the decompressed data in one shot, instead of doing it in multiples of `blockLength`.
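The effect described above can be sketched with a toy allocation counter (hypothetical code, not Lucene's decompressor): growing the buffer once for the full decompressed length needs a single allocation, while growing it block by block reallocates repeatedly even with some oversizing.

```java
// Sketch (hypothetical): count allocations for per-block growth vs.
// sizing the buffer once for the whole decompressed length.
public class OneShotGrow {
    static int allocations = 0;

    static byte[] grow(byte[] buf, int minLength) {
        if (buf.length >= minLength) {
            return buf; // enough capacity, no allocation
        }
        allocations++;
        int newLength = minLength + (minLength >>> 3); // mild oversizing (assumed)
        byte[] copy = new byte[newLength];
        System.arraycopy(buf, 0, copy, 0, buf.length);
        return copy;
    }

    public static void main(String[] args) {
        int blockLength = 16;
        int numBlocks = 64;

        // Per-block: extend the buffer as each block is decoded.
        allocations = 0;
        byte[] bytes = new byte[0];
        for (int b = 1; b <= numBlocks; b++) {
            bytes = grow(bytes, b * blockLength);
        }
        int perBlock = allocations;

        // One-shot: size the buffer for the full decompressed length first.
        allocations = 0;
        grow(new byte[0], numBlocks * blockLength);
        int oneShot = allocations;

        System.out.println("perBlock=" + perBlock + " oneShot=" + oneShot);
    }
}
```

The one-shot path always performs exactly one allocation, which is the point of the change.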
Thanks for this improvement @easyice !
@easyice Let's add a CHANGES.txt entry for this?
Thank you for reviewing! @vigyasharma I have added the CHANGES entry under Lucene 9.11.0
@vigyasharma I wonder if you missed backporting this change to
@jpountz my bad, I think I missed the back-port. Merged it now.
Thank you!
Description