GH-39122: [C++][Parquet] Optimize FLBA record reader #39124
Conversation
This is an alternative to #39120. @Hattonuri could you try it out?
@ursabot please benchmark
Benchmark runs are scheduled for commit e0b5609. Watch https://buildkite.com/apache-arrow and https://conbench.ursa.dev for updates. A comment will be posted here when the runs are complete.
@github-actions crossbow submit -g cpp |
Revision: e0b5609 Submitted crossbow builds: ursacomputing/crossbow @ actions-c4f6660a25 |
+1
Thanks for your patience. Conbench analyzed the 6 benchmarking runs that have been run so far on PR commit e0b5609. There were 5 benchmark results indicating a performance regression:
The full Conbench report has more details.
Compared to my pull request, you have a 5% speedup. But I accidentally found that between this commit and 2dcee3f there was a commit that doubled the number of page faults. With this commit I have:
And before it was:
I guess your pull request optimizes user time. But as we can see, page faults increased from 27169121 to 56145278 and system time increased by 2.5 times. Context switches also increased by 1.5 times.
I don't think we can infer anything from that number of page faults, unless it is otherwise stable.
🤔 Though benchmarking using SystemAllocator is a bit weird; should we use something like jemalloc? Also
Are you just comparing percentages? This is silly, isn't it? |
I compile all my projects with tcmalloc.
I don't think that is the case here, because both the page faults and the zstd timings increased, so we can still see that something changed. I will try to bisect the breaking commit. But I think it needs another issue, because we can see the problem is not with this PR.
No "breakage" has been proven, though. You've found a variation in the number of page faults. So what? Are you sure the system was quiet? Are you sure the number of page faults is otherwise a stable quantity? That said, it will be interesting to see the results of your investigation.
It did not affect the measurement because
Yes, the results are stable; the page fault numbers repeat.
About the investigation: I mean that I will put the results in another issue, because they repeat stably, so I can find the reason with a binary search.
BTW, this patch reminds me of this ancient PR: #14353
After merging your PR, Conbench analyzed the 6 benchmarking runs that have been run so far on merge-commit 20c975d. There were 8 benchmark results indicating a performance regression:
The full Conbench report has more details. It also includes information about 27 possible false positives for unstable benchmarks that are known to sometimes produce them.
### Rationale for this change

The FLBA implementation of RecordReader is suboptimal:

* it doesn't preallocate the output array
* it reads the decoded validity bitmap one bit at a time and recreates it, one bit at a time

### What changes are included in this PR?

Optimize the FLBA implementation of RecordReader so as to avoid the aforementioned inefficiencies.

I did a quick-and-dirty benchmark on a Parquet file with two columns:

* column 1: uncompressed, PLAIN-encoded, FLBA<3> with no nulls
* column 2: uncompressed, PLAIN-encoded, FLBA<3> with 25% nulls

With git main, the file can be read at 465 MB/s. With this PR, the file can be read at 700 MB/s.

### Are these changes tested?

Yes.

### Are there any user-facing changes?

No.

* Closes: apache#39122

Lead-authored-by: Antoine Pitrou <antoine@python.org>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
Signed-off-by: Antoine Pitrou <antoine@python.org>