
8259380: Correct pretouch chunk size to cap with actual page size #97

Closed
wants to merge 4 commits

Conversation

cnqpzhang

@cnqpzhang cnqpzhang commented Jan 8, 2021

This is actually a regression, in the form of an extreme JVM startup slowdown, initially found on an aarch64 platform (Ampere Altra core).

The pretouch chunk size should be capped with the input page size, which stands for the large page size if UseLargePages is set; otherwise, processing chunks that are much smaller than the large pages they touch hurts performance.

This issue was introduced by the chunk-size calculation refactoring in JDK-8254972 (2c7fc85), but it did not cause any problem immediately, since the default PreTouchParallelChunkSize on all platforms was 1GB, which covers all popular sizes of large pages in use by most kernel variations. Later, JDK-8254699 (805d058) set a default of 4MB for Linux, which helps speed up startup time on some platforms, for example most x64 systems, since the popular default large page size there (e.g. on CentOS) is 2MB. In contrast, the default large page size on most aarch64 platforms/kernels (e.g. CentOS) is 512MB, so using a 4MB chunk size to walk the pages inside a 512MB large page hurts startup time.

In addition, a similar problem appears if we set -XX:PreTouchParallelChunkSize=4k on an x64 Linux platform; the startup slowdown shows up there as well.
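
For illustration, a minimal sketch (plain C++, not the actual HotSpot patch; the helper name is made up) of the capping rule described above, namely that the per-thread chunk size should never be smaller than the page size actually backing the region:

#include <algorithm>
#include <cstddef>

// Hypothetical helper, not HotSpot code: pick the per-thread pretouch chunk size.
// 'configured_chunk' stands in for -XX:PreTouchParallelChunkSize, and
// 'actual_page_size' for the page size really backing the heap
// (e.g. 512MB huge pages on some aarch64 kernels, 2MB on typical x64).
static std::size_t pretouch_chunk_size(std::size_t configured_chunk,
                                       std::size_t actual_page_size) {
  // Cap from below: a chunk covers at least one full page, so a single huge
  // page is not split across many tiny chunks on multiple threads.
  return std::max(configured_chunk, actual_page_size);
}

With 512MB pages and the 4MB Linux default this yields 512MB chunks, while with 2MB pages the 4MB default is kept unchanged.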

Tests:
https://bugs.openjdk.java.net/secure/attachment/92623/pretouch_chunk_size_fix_testing.txt
The 4 before-after comparisons show the JVM startup time going back to normal:
1). 33.381s to 0.870s
2). 20.333s to 2.740s
3). 15.090s to 6.268s
4). 38.983s to 6.709s
(The start time of pretouching the first Survivor space is used as a rough measurement; \time or GCTraceTime can generate similar results.)


Progress

  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue
  • Change must be properly reviewed

Issue

  • JDK-8259380: Correct pretouch chunk size to cap with actual page size

Reviewers

Download

$ git fetch https://git.openjdk.java.net/jdk16 pull/97/head:pull/97
$ git checkout pull/97

@bridgekeeper

bridgekeeper bot commented Jan 8, 2021

👋 Welcome back qpzhang! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.

@openjdk openjdk bot added the rfr Pull request is ready for review label Jan 8, 2021
@openjdk

openjdk bot commented Jan 8, 2021

@cnqpzhang The following label will be automatically applied to this pull request:

  • hotspot-gc

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing list. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the hotspot-gc hotspot-gc-dev@openjdk.java.net label Jan 8, 2021
@mlbridge

mlbridge bot commented Jan 8, 2021

Webrevs

@tschatzl

tschatzl commented Jan 8, 2021

Thanks for moving this issue to JDK16.

I looked a bit into what could cause this, and one thing that I particularly noticed is that the tests are enabling THP.

With THP, the (original) code updates the page size to os::vm_page_size():

#ifdef LINUX
  // When using THP we need to always pre-touch using small pages as the OS will
  // initially always use small pages.
  page_size = UseTransparentHugePages ? (size_t)os::vm_page_size() : page_size;
#endif
  size_t chunk_size = MAX2(PretouchTask::chunk_size(), page_size);

After having looked at the code, I am not completely sure whether the analysis of the issue is correct or what the change fixes. To me it looks like the default chunk size on aarch64 should be much higher than on x64.

Example:
page_size is the size of a page, that is 512M in your case; os::vm_page_size() is the small page size, 64k in that configuration.

chunk_size is then set to 4M (MAX(PreTouchParallelChunkSize, 64k)), because with THP, as the comment indicates, we do not know whether the reservation is backed by a large or a small page, so the code must use the small page size for the actual pretouch within a chunk.
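
A small worked example of the two cases (assumed values, for illustration only):

#include <algorithm>
#include <cstddef>
#include <cstdio>

int main() {
  // Assumed values from the configuration discussed above.
  const std::size_t chunk_opt  = 4ul * 1024 * 1024;    // PreTouchParallelChunkSize
  const std::size_t small_page = 64ul * 1024;          // os::vm_page_size()
  const std::size_t huge_page  = 512ul * 1024 * 1024;  // kernel huge page size

  // With THP the cap uses the small page size; without THP, the huge page size.
  const std::size_t chunk_thp    = std::max(chunk_opt, small_page);  // 4M
  const std::size_t chunk_no_thp = std::max(chunk_opt, huge_page);   // 512M

  std::printf("chunks per 512M region: THP=%zu, no THP=%zu\n",
              huge_page / chunk_thp, huge_page / chunk_no_thp);      // 128 vs. 1
  return 0;
}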

I am also not sure about the statement about the introduction of this issue in JDK-8254972: the only difference seems to be where the page size for the PretouchTask is initialized (in the PretouchTask constructor there), and that the calculation of the chunk size in the PretouchTask::work method is done by every thread separately.

The only thing I could see is that in case the OS already gave us large pages (i.e. 512M), iterating over the same page using multiple threads may cause performance issues, although for the startup case x64 does not seem to care (for me, for 20g heaps) and the default of 4M seems to be fastest, as shown in JDK-8254699 (https://bugs.openjdk.java.net/browse/JDK-8254699) (and afaik with THP you always get small pages at first).

I can't see how setting the chunk size to 4k shows "the same problem" on x64, as it does not show with the 4M (default) chunk size and 1g (huge) pages. E.g. chunk size = 4M:

$ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=4m Hello
[0.001s][warning][gc] LargePageSizeInBytes=1073741824 large_page_size 1073741824
[0.053s][warning][gc] pretouch 21474836480 chunk 4194304 page 4096
[0.406s][warning][gc] pretouch 335544320 chunk 4194304 page 4096
[0.413s][warning][gc] pretouch 335544320 chunk 4194304 page 4096
[0.421s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
[0.423s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
[0.432s][warning][gc] pretouch 41943040 chunk 4194304 page 4096
Hello World!

real 0m0.708s
user 0m0.367s
sys 0m9.983s

and chunk size = 1g:

$ time java -Xmx20g -Xms20g -XX:+UseLargePages -XX:LargePageSizeInBytes=1g -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:PreTouchParallelChunkSize=1g Hello
[0.001s][warning][gc] LargePageSizeInBytes=1073741824 large_page_size 1073741824
[0.054s][warning][gc] pretouch 21474836480 chunk 1073741824 page 4096
[1.141s][warning][gc] pretouch 335544320 chunk 1073741824 page 4096
[1.216s][warning][gc] pretouch 335544320 chunk 1073741824 page 4096
[1.289s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
[1.299s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
[1.320s][warning][gc] pretouch 41943040 chunk 1073741824 page 4096
Hello World!

real 0m1.613s
user 0m0.420s
sys 0m16.666s

Even without THP, using 4M chunks (and still using 1g pages for the Java heap) still seems to be consistently faster.

I would suggest that in this case the correct fix would be to do the same testing as done in JDK-8254699 and add an aarch64 specific default for -XX:PreTouchParallelChunkSize.

The suggested change (to increase the chunk size based on the page size, particularly with THP enabled) seems to not fix the issue (a suboptimal default chunk size) and also regresses performance on x64, which I would prefer to avoid.

(There is still the issue whether it makes sense to have a smaller chunk size than page size without THP, but that is not the issue here afaict)

@tschatzl

tschatzl commented Jan 8, 2021

Another option is to just set the default chunk size for aarch64 to e.g. 512M and defer the search for the "best" value until later.

@cnqpzhang
Author

cnqpzhang commented Jan 9, 2021

Thanks for the comments.

First of all, I am not objecting to 805d058, which does help in most cases. If we have an aarch64 system with 2MB large pages configured in the kernel, we certainly share the benefit as well.

I am also not sure about the statement about the introduction of this issue in JDK-8254972: the only difference seems to be where the page size for the PretouchTask is initialized (in the PretouchTask constructor there), and that the calculation of the chunk size in the PretouchTask::work method is done by every thread separately.

Before 2c7fc85, the PretouchTask instance was initialized first, and then the cap with the page size was applied when calculating num_chunks. In contrast, after 2c7fc85, the PretouchTask instance is initialized after the calculation of chunk_size. That is the difference.
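
To make the described difference concrete, a simplified sketch (stand-ins only, not the actual HotSpot code before/after 2c7fc85):

#include <algorithm>
#include <cstddef>

// "Before": the cap against the page size is applied where the chunks are
// sized, so a chunk can never end up smaller than the page backing it.
static std::size_t chunk_size_before(std::size_t chunk_opt, std::size_t page_size) {
  return std::max(chunk_opt, page_size);
}

// "After" (Linux with THP): page_size may already have been replaced by the
// small page size before the chunk size is computed, so the cap against the
// actual (huge) page size is effectively lost.
static std::size_t chunk_size_after(std::size_t chunk_opt, std::size_t small_page) {
  return std::max(chunk_opt, small_page);  // e.g. a 512M huge page is no longer considered
}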

The only thing I could see is that in case the OS already gave us large pages (i.e. 512M), iterating over the same page using multiple threads may cause performance issues, although for the startup case x64 does not seem to care (for me, for 20g heaps) and the default of 4M seems to be fastest, as shown in JDK-8254699 (https://bugs.openjdk.java.net/browse/JDK-8254699) (and afaik with THP you always get small pages at first).

Please see https://github.com/torvalds/linux/blob/a09b1d78505eb9fe27597a5174c61a7c66253fe8/Documentation/admin-guide/mm/hugetlbpage.rst.
We cannot make assumptions about the size of large pages; this is not specific to any architecture, x64, aarch64, or anything else. Users can configure any size they want in the kernel, if it is architecturally supported. So x64 can face a 512MB large page, while aarch64 can work with 2MB large pages too.

I can't see how setting the chunk size to 4k shows "the same problem" on x64, as it does not show with the 4M (default) chunk size and 1g (huge) pages. E.g. chunk size = 4M

Please see the testing results I attached: https://bugs.openjdk.java.net/secure/attachment/92623/pretouch_chunk_size_fix_testing.txt
Tests 2), 3), and 4) were done on x64 servers with various -XX:PreTouchParallelChunkSize=xxk values.

Even without THP, using 4M chunks (and still using 1g pages for the Java heap) still seems to be consistently faster.

Again, I agree it is faster under some conditions, but not all.

I would suggest that in this case the correct fix would be to do the same testing as done in JDK-8254699 and add an aarch64 specific default for -XX:PreTouchParallelChunkSize.

I do not agree; it hurts startup time on most systems configured by default, e.g., CentOS 8 Stream on aarch64.

The suggested change (to increase the chunk size based on the page size, particularly with THP enabled) seems to not fix the issue (a suboptimal default chunk size) and also regresses performance on x64, which I would prefer to avoid.

No, it does not hurt x64 systems in their default configuration, since the large page size there is 2M, which means 4M chunks still work very well.

(There is still the issue whether it makes sense to have a smaller chunk size than page size without THP, but that is not the issue here afaict)

I assume this change does not change anything when not on Linux or when THP is not enabled. Please double check.

@cnqpzhang
Author

Another option is to just set the default chunk size for aarch64 to e.g. 512M and defer the search for the "best" value until later.

This cannot solve the problem completely. For example, for HugeTLB pages, "x86 CPUs normally support 4K and 2M (1G if architecturally supported)". Should an x64 system be configured with 1GB large pages, the regression slowdown would show up with the current 4MB chunk size as well, I believe.
This was probably the reason why -XX:PreTouchParallelChunkSize originally defaulted to 1GB, which could cover all kinds of large pages in modern kernels/architectures.

@tschatzl

tschatzl commented Jan 11, 2021

As for the expected regression with 1g pages and a 4m chunk size vs. a 1g chunk size: interestingly, on Linux without THP, the 4m chunk size is faster for a simple "Hello World" app as measured by time. I noticed that already yesterday, and re-verified it on different machines and heap sizes up to 2TB today.

However, this seems to be an artifact of the test: when comparing log message times (the "Running G1 PreTouch with X workers for ..." ones shown with gc+heap=debug), they are the same.

@openjdk openjdk bot removed the rfr Pull request is ready for review label Jan 11, 2021
@openjdk openjdk bot added the rfr Pull request is ready for review label Jan 11, 2021

@tschatzl tschatzl left a comment


Lgtm, thanks.

@tschatzl

Please wait for a second reviewer to approve before integrating.

@openjdk

openjdk bot commented Jan 12, 2021

@cnqpzhang This change now passes all automated pre-integration checks.

ℹ️ This project also has non-automated pre-integration requirements. Please see the file CONTRIBUTING.md for details.

After integration, the commit message for the final commit will be:

8259380: Correct pretouch chunk size to cap with actual page size

Reviewed-by: tschatzl, sjohanss

You can use pull request commands such as /summary, /contributor and /issue to adjust it as needed.

At the time when this comment was updated there had been 6 new commits pushed to the master branch:

  • a7e5da2: 8258384: AArch64: SVE verify_ptrue fails on some tests
  • 2cb271e: 8253996: Javac error on jdk16 build 18: invalid flag: -Xdoclint:-missing
  • d60a937: 8259028: ClassCastException when using custom filesystem with wrapper FileChannel impl
  • e05f36f: 8259043: More Zero architectures need linkage with libatomic
  • 020ec84: 8259429: Update reference to README.md
  • fb68395: 8259014: (so) ServerSocketChannel.bind(UnixDomainSocketAddress)/SocketChannel.bind(UnixDomainSocketAddress) will have unknown user and group owner (win)

Please see this link for an up-to-date comparison between the source branch of this pull request and the master branch.
As there are no conflicts, your changes will automatically be rebased on top of these commits when integrating. If you prefer to avoid this automatic rebasing, please check the documentation for the /integrate command for further details.

As you do not have Committer status in this project an existing Committer must agree to sponsor your change. Possible candidates are the reviewers of this PR (@tschatzl, @kstefanj) but any other Committer may sponsor as well.

➡️ To flag this PR as ready for integration with the above commit message, type /integrate in a new comment. (Afterwards, your sponsor types /sponsor in a new comment to perform the integration).

@openjdk openjdk bot added the ready Pull request is ready to be integrated label Jan 12, 2021
Contributor

@kstefanj kstefanj left a comment


Looks good to me too.

@cnqpzhang
Author

/integrate

@openjdk openjdk bot added the sponsor Pull request is ready to be sponsored label Jan 12, 2021
@openjdk

openjdk bot commented Jan 12, 2021

@cnqpzhang
Your change (at version efb5873) is now ready to be sponsored by a Committer.

@tschatzl

/sponsor

@openjdk openjdk bot closed this Jan 12, 2021
@openjdk openjdk bot added the integrated Pull request has been integrated label Jan 12, 2021
@openjdk openjdk bot removed sponsor Pull request is ready to be sponsored ready Pull request is ready to be integrated rfr Pull request is ready for review labels Jan 12, 2021
@openjdk

openjdk bot commented Jan 12, 2021

@tschatzl @cnqpzhang Since your change was applied there have been 7 commits pushed to the master branch:

  • 28ff2de: 8259237: Demo selection changes with left/right arrow key. No need to press space for selection.
  • a7e5da2: 8258384: AArch64: SVE verify_ptrue fails on some tests
  • 2cb271e: 8253996: Javac error on jdk16 build 18: invalid flag: -Xdoclint:-missing
  • d60a937: 8259028: ClassCastException when using custom filesystem with wrapper FileChannel impl
  • e05f36f: 8259043: More Zero architectures need linkage with libatomic
  • 020ec84: 8259429: Update reference to README.md
  • fb68395: 8259014: (so) ServerSocketChannel.bind(UnixDomainSocketAddress)/SocketChannel.bind(UnixDomainSocketAddress) will have unknown user and group owner (win)

Your commit was automatically rebased without conflicts.

Pushed as commit 67e1b63.

💡 You may see a message that your pull request was closed with unmerged commits. This can be safely ignored.
