8259380: Correct pretouch chunk size to cap with actual page size #97
Conversation
👋 Welcome back qpzhang! A progress list of the required criteria for merging this PR into the target branch will be added to the body of your pull request.
@cnqpzhang The following label will be automatically applied to this pull request:
When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.
Webrevs
Thanks for moving this issue to JDK16. I looked a bit into what could cause this, and one thing I particularly noticed is that the tests are enabling THP. With THP, the (original) code updates the page size to os::vm_page_size():
After having looked at the code, I am not completely sure whether the analysis of the issue is correct or what the change fixes. To me it looks like on aarch64 the default chunk size should be much higher than on x64. Example:
I am also not sure about the statement that this issue was introduced in JDK-8254972: the only difference seems to be where the page size for the pretouch is determined. The only thing I could see is that in case the OS already gave us large pages (i.e. 512M), iterating over the same page using multiple threads may cause performance issues, although for the startup case x64 does not seem to care (for me, with 20g heaps) and the default of 4M seems to be fastest, as shown in [JDK-8254699](https://bugs.openjdk.java.net/browse/JDK-8254699) (and afaik with THP you always get small pages at first).

I can't see how setting the chunk size to 4k (via PreTouchParallelChunkSize) shows "the same problem" on x64, as it does not show with the 4M (default) chunk size and 1g (huge) pages. E.g. chunk size = 4M:

```
$ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=4m Hello
real 0m0.708s
```

and chunk size = 1g:

```
$ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=1g Hello
real 0m1.613s
```

Even without THP, using 4M chunks (and still using 1g pages for the Java heap) seems to be consistently faster.

I would suggest that the correct fix here is to do the same testing as done in JDK-8254699 and add an aarch64-specific default for PreTouchParallelChunkSize. The suggested change (to increase the chunk size based on the page size, particularly with THP enabled) does not seem to fix the issue (a suboptimal default chunk size) and also regresses performance on x64, which I would prefer to avoid. (There is still the question of whether it makes sense to have a smaller chunk size than the page size without THP, but that is not the issue here afaict.)
Another option is to just set the default chunk size for aarch64 to e.g. 512M and defer searching for the "best" value until later.
Thanks for the comments. First of all, I am not objecting to 805d058, which does help in most cases. If we have an aarch64 system with 2MB large pages configured in the kernel, we can certainly share the benefit as well.
Before 2c7fc85, the chunk size was capped with the actual page size.
Please see https://github.com/torvalds/linux/blob/a09b1d78505eb9fe27597a5174c61a7c66253fe8/Documentation/admin-guide/mm/hugetlbpage.rst.
Please see the testing results I attached, https://bugs.openjdk.java.net/secure/attachment/92623/pretouch_chunk_size_fix_testing.txt
Again, I agree it is faster under some conditions, but not all.
I disagree: it hurts startup time on most systems configured by default, e.g., CentOS 8 Stream on aarch64.
No, it does not hurt a default system on x64, since the large page size there is 2M, which means 4M chunks still work very well.
I assume this change does not alter behavior on non-Linux platforms, or when THP is not in use. Please double check.
This cannot solve the problem completely. For example, regarding HugeTLB pages, "x86 CPUs normally support 4K and 2M (1G if architecturally supported)". If an x64 system were configured with 1GB large pages, the current 4MB chunk size would show the same regression slowdown too, I believe.
As for the expected regression with 1g pages and a 4m chunk size vs. a 1g chunk size: interestingly, on Linux without THP, the 4m chunk size is still faster for a simple "Hello World" app as measured by `time`. However, this seems to be an artifact of the test when comparing log message times.
Lgtm, thanks.
Please wait for a second reviewer to approve before integrating.
@cnqpzhang This change now passes all automated pre-integration checks. ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details. After integration, the commit message for the final commit will be:
You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed. At the time when this comment was updated there had been 6 new commits pushed to the target branch.
Please see this link for an up-to-date comparison between the source branch of this pull request and the target branch. As you do not have Committer status in this project, an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@tschatzl, @kstefanj), but any other Committer may sponsor as well. ➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment.
Looks good to me too.
/integrate
@cnqpzhang |
/sponsor |
@tschatzl @cnqpzhang Since your change was applied there have been 7 commits pushed to the target branch.
Your commit was automatically rebased without conflicts. Pushed as commit 67e1b63. 💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
This is actually a regression, an extreme slowdown in JVM startup time, initially found on an aarch64 platform (Ampere Altra cores).
The pretouch chunk size should be capped with the input page size (which is the large page size when UseLargePages is set); otherwise, processing chunks much smaller than the underlying large pages hurts performance.
This issue was introduced during a refactoring of the chunk calculations in JDK-8254972 (2c7fc85), but did not cause any problem immediately, since the default PreTouchParallelChunkSize on all platforms was 1GB, which covers all popular large page sizes used by most kernel variants. Later, JDK-8254699 (805d058) set a 4MB default for Linux, which helps speed up startup time on some platforms, for example most x64 systems, since the popular default large page size there (e.g. on CentOS) is 2MB. In contrast, the default large page size on most aarch64 platforms/kernels (e.g. CentOS) is 512MB, so using a 4MB chunk size to walk the pages inside a 512MB large page hurts startup time.
In addition, a similar problem appears if we set -XX:PreTouchParallelChunkSize=4k on an x64 Linux platform: the startup slowdown shows there as well.
Tests:
https://bugs.openjdk.java.net/secure/attachment/92623/pretouch_chunk_size_fix_testing.txt
The four before/after comparisons show JVM startup time going back to normal:
1). 33.381s to 0.870s
2). 20.333s to 2.740s
3). 15.090s to 6.268s
4). 38.983s to 6.709s
(The start time of pretouching the first Survivor space is used as a rough measurement; \time or GCTraceTime produces similar results.)
Progress
Issue
Reviewers
Download
$ git fetch https://git.openjdk.java.net/jdk16 pull/97/head:pull/97
$ git checkout pull/97