Optimize for better memory footprint management. #1262

Merged (1 commit) Mar 3, 2020

Conversation

rgrunber (Member) commented Feb 5, 2020
The options can be found in the JVM configuration for the java-maven devfile. Some additional articles explain the motivation and benchmark these options:

https://developers.redhat.com/blog/2014/07/15/dude-wheres-my-paas-memory-tuning-javas-footprint-in-openshift-part-1/
https://developers.redhat.com/blog/2014/07/22/dude-wheres-my-paas-memory-tuning-javas-footprint-in-openshift-part-2/

Use a garbage collector (Parallel) that supports memory footprint management:

  • MinHeapFreeRatio: the heap is expanded when the free heap space falls below this percentage
  • MaxHeapFreeRatio: the heap is shrunk when the free heap space exceeds this percentage
  • GCTimeRatio: throughput goal; at most 1/(1 + GCTimeRatio) of total time is spent performing garbage collection
  • AdaptiveSizePolicyWeight: the extent to which current GC times are weighted against previous times when maintaining the timing goal
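As a rough intuition for how the two free-ratio flags drive resizing, here is a simplified model in Python. This is an illustration only; `resize_heap` is a hypothetical helper, not HotSpot's actual adaptive-sizing code:

```python
def resize_heap(heap_mb, used_mb, min_free_ratio=5, max_free_ratio=10):
    """Illustrative model of -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio.

    After a collection, if the free fraction of the heap is below
    min_free_ratio (%), the heap grows; if it is above max_free_ratio (%),
    the heap shrinks. Here we resize so the free fraction lands in the
    middle of the allowed band. Real HotSpot ergonomics are more involved.
    """
    free_pct = 100.0 * (heap_mb - used_mb) / heap_mb
    if free_pct < min_free_ratio or free_pct > max_free_ratio:
        target_free = (min_free_ratio + max_free_ratio) / 2.0
        return used_mb / (1.0 - target_free / 100.0)
    return heap_mb

# A 100 MB heap with 96 MB live has only 4% free, so with a (5, 10) band
# it expands slightly (to roughly 104 MB). With the Parallel GC defaults
# (0, 100), the same heap is left untouched.
```

Note how narrow bands like (5, 10) keep the heap tight but force many small resize steps, which is exactly the trade-off debated later in this thread.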

Signed-off-by: Roland Grunberg rgrunber@redhat.com

package.json (outdated)

```diff
@@ -73,8 +73,8 @@
       "string",
       "null"
     ],
-    "default": "-Xmx1G -XX:+UseG1GC -XX:+UseStringDeduplication",
     "description": "Specifies extra VM arguments used to launch the Java Language Server. Eg. use `-Xmx1G -XX:+UseG1GC -XX:+UseStringDeduplication` to increase the heap size to 1GB and enable String deduplication with the G1 Garbage collector",
+    "default": "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true -Xmx1G -Xms100m",
```
Collaborator:

Are those flags available in all JDK distros? Have you tested running with J9, for instance?

Member Author (rgrunber):

The options are specific to the Parallel GC, so they'd be supported by any HotSpot JVM but not J9 or GraalVM. The Xms/Xmx options are supported on J9 and, from what I can tell, unrecognized -XX: options are ignored by default on J9. I can test on a local J9 instance to confirm that's the case for the new options as well.

For Che, vscode-java seems to run under che-sidecar-java, and that is using OpenJDK.

Collaborator:

@akaroml I know you guys investigated JVM args and GC customization options. Any comment on @rgrunber's change?

Member Author (rgrunber):

Tested with OpenJ9, the vscode-java plugin (master + this change), and a local Theia instance. JDT-LS was started using the OpenJ9 JRE and, while the options probably weren't supported, functionality didn't seem affected.

```
$ /usr/lib/jvm/jdk-11.0.6+10/bin/java -version
openjdk version "11.0.6" 2020-01-14
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.6+10)
Eclipse OpenJ9 VM AdoptOpenJDK (build openj9-0.18.1, JRE 11 Linux amd64-64-Bit Compressed References 20200122_441 (JIT enabled, AOT enabled)
OpenJ9   - 51a5857d2
OMR      - 7a1b0239a
JCL      - da35e0c380 based on jdk-11.0.6+10)
```

Contributor:

@fbricon I ran an experiment with the previous default vmargs: opening the eclipse.jdt.ls project in vscode and browsing the code. At a glance at the GC log, the heap size stayed at ~620M, and there were a large number of premature promotions (up to 50%). That might be one thing to optimize for.
BTW, I got a suggestion from an expert: "ParallelGC will be better, as you don't benefit from G1GC with such a small heap size". I believe so, although I haven't done experiments to validate it. Just FYI.

Another question: can we remove the default arg -Xmx1G? On the one hand, it blocks users from opening some large projects (e.g. elastic). On the other hand, from the original commit, it appears -Xmx1G was added as a sample showing users how to increase the heap size at that time. By default the max heap is 1/4 of physical memory, so for machines with more than 4G of RAM (I'd guess >99% of desktop machines nowadays) this VM arg limits performance. So I think it can be removed now.
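To put the 1/4-of-RAM ergonomic default in numbers, here is a back-of-the-envelope sketch; `default_max_heap_gb` is a hypothetical helper for illustration, not a JVM API:

```python
def default_max_heap_gb(physical_ram_gb):
    """Approximate HotSpot ergonomic default: the max heap is roughly 1/4
    of physical RAM (exact behavior varies by JVM version and platform)."""
    return physical_ram_gb / 4.0

# On a 16 GB machine the ergonomic default would be ~4 GB, so an explicit
# -Xmx1G is the binding limit; on a 4 GB machine the two coincide at 1 GB.
```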

Contributor:

Here to also share some thoughts:

  1. 👍 Switching from G1 to Parallel. Our investigation also suggests that G1 is probably overkill for the given memory budget (1G).
  2. ❓ -Xmx1G: by default the JVM uses 1/4 of physical memory as the max heap size, so JVM capacity grows with the hardware configuration. The concern is that this value puts a hard limit in place, and users won't benefit from having more memory installed.
  3. ❓ -Xms100m: according to our analysis, the heap size stabilizes at around 600 MB when loading eclipse.jdt.ls. GC, in general, tries to shrink the heap toward the minimum setting. The concern is that GC will be too aggressive in trying to reach 100M.
  4. 🏃 I'll take a further look at all the ratios. Our investigation suggests that the GC struggles at the beginning to figure out the right sizes for the different generations; we observed that the young generation is always too small, and premature promotions to the tenured generation happen too often.

Collaborator:

I frequently have multiple vscode-java instances open, so n × 1/4 of physical memory won't scale well.

Contributor:

@fbricon OK, that makes sense, and I'm fine with limiting Xmx to 1G.

As we're switching to ParallelGC, I did some investigation of the args in this PR. See below.

This time I simply opened a single hello-world .java file in vscode, without opening any folder; I believe that represents the minimum overhead. I collected GC logs until the language server was fully up (it published diagnostics and could respond to all requests, taking ~8s). I tested combinations of different Max/MinHeapFreeRatio and GCTimeRatio values.

| Case | HeapFreeRatio (min, max) | GCTimeRatio | Total pause | Full GC pause | Total GC invocations (full GC) | Peak heap size |
| --- | --- | --- | --- | --- | --- | --- |
| a) default | (0, 100) | 99 | 1.704s | 1.201s | 19 (4) | ~520M |
| b) | (0, 100) | 4 | 1.363s | 0.843s | 89 (5) | ~220M |
| c) this PR | (5, 10) | 4 | 3.472s | 2.609s | 91 (13) | ~220M |
  • Changing GCTimeRatio from 99 (the ParallelGC default) to 4 looks promising: it allows up to 20% of time (previously 1%) to be spent performing GC. Per the data above, it decreases the total GC pause and prevents unnecessary memory consumption.

  • My concern is HeapFreeRatio. The ParallelGC default is 0%–100%, and 5%–10% seems too aggressive for the free ratio. As I understand it, given the 100M initial heap, the heap doesn't grow until >95M is used, and each time it grows only slightly, keeping at most 10% free (e.g. adjusting to ~110M). So it keeps growing the heap slightly and frequently, which is why we observed 13 full GC invocations in just a few seconds in case c).
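The GCTimeRatio arithmetic in the first bullet can be made explicit. HotSpot's throughput goal permits at most 1/(1 + GCTimeRatio) of total time in GC; `max_gc_time_fraction` below is an illustrative helper, not a JVM API:

```python
def max_gc_time_fraction(gc_time_ratio):
    """Maximum fraction of total time the throughput goal allows to be
    spent in GC for -XX:GCTimeRatio=N, i.e. 1 / (1 + N)."""
    return 1.0 / (1.0 + gc_time_ratio)

# Parallel GC default (99) -> 1% GC time; this PR's value (4) -> 20%.
```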

Contributor (@Eskibear, Mar 3, 2020):

So my suggestion would be a compromise: adopt case b) here, leaving HeapFreeRatio at its default, i.e.
`-XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true -Xmx1G -Xms100m`

Member Author (rgrunber):

Sounds good. If GCTimeRatio + AdaptiveSizePolicyWeight is enough to trigger GC and limit the heap size, then that's fine. FWIW, my testing was done on projects like https://github.com/eclipse/deeplearning4j-examples, which required a lot of initial setup time and likely generated a large JDT index.

@fbricon fbricon requested a review from snjeza February 11, 2020 15:06
Collaborator:

Yeah, sounds good. @rgrunber, can you please update your PR with the above suggestion?

Use a garbage collector (Parallel) that supports memory footprint management:

- GCTimeRatio: throughput goal; at most 1/(1 + GCTimeRatio) of total time is spent performing garbage collection
- AdaptiveSizePolicyWeight: the extent to which current GC times are weighted against previous times when maintaining the timing goal

Signed-off-by: Roland Grunberg <rgrunber@redhat.com>
@fbricon fbricon merged commit ab68e0e into redhat-developer:master Mar 3, 2020
fbricon (Collaborator) commented Mar 3, 2020:

Thanks @rgrunber!

5 participants