Do Graal native images run faster than the Java bytecode ordinarily produced #1069
Comments
It depends. The best option is to use profile-guided optimization (PGO).
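For reference, here is a minimal sketch of what that PGO workflow looks like with native-image, assuming the `--pgo-instrument`/`--pgo` options described in the (enterprise-only) docs; the class and the workload are hypothetical placeholders:

```java
// Hypothetical PGO workflow (flags as documented for GraalVM EE native-image):
//
//   javac Fib.java
//   native-image --pgo-instrument Fib     # build an instrumented image
//   ./fib 35                              # run a representative workload;
//                                         #   the profile is written to default.iprof
//   native-image --pgo=default.iprof Fib  # rebuild, optimizing with the collected profile
//   ./fib 35                              # profile-guided, optimized image
public class Fib {
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int n = args.length > 0 ? Integer.parseInt(args[0]) : 35;
        System.out.println(fib(n));
    }
}
```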
AFAIK, executables produced by native-image currently have lower peak performance than the original Java code running under the GraalVM JIT. OTOH, native-image starts faster and can have much lower memory overhead for simple applications. Here is one comparison: https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/java-substratevm.html and here is another: http://www.scala-native.org/en/v0.3.8/blog/interflow.html (includes native-image PGO). As always, take benchmark results with a grain of salt.

PS: native-image consumes the bytecode produced by the standard Java compiler. The main difference is that native-image is an AOT compiler (i.e. it compiles the bytecode to native code ahead of time), while standard HotSpot or the GraalVM JIT are JIT compilers (i.e. they compile bytecode to native code "just in time", at runtime).
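To make that PS concrete, here is a minimal sketch, assuming a GraalVM installation with the `native-image` tool on the PATH (the class name is just a placeholder):

```java
// Build and run steps (same bytecode, two compilation modes):
//
//   javac Hello.java
//   java Hello            # bytecode is JIT-compiled at runtime (HotSpot or Graal JIT)
//   native-image Hello    # bytecode is AOT-compiled into an executable named "hello"
//   ./hello               # no JVM startup, no JIT warm-up
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, GraalVM");
    }
}
```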
So it's better (more performant) to just use Java, since Graal native images are less efficient at the moment?
@theonlygusti it depends on your algorithms and workload... and on the versions of the JDK and GraalVM! Here is a comparison of benchmark results (JSON parsing and serialization for different payloads with different libraries) on JDK 11 + Graal JIT vs. GraalVM CE 1.0.0 RC13. But those which use a lot of … For the full picture (except results from native images), please see here.
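A comparison like that is typically driven by a JMH harness, with the same benchmark jar run once per JVM under test. Below is a minimal sketch of one such benchmark; JMH on the classpath, the Jackson library, and the tiny inline payload are assumptions standing in for whichever libraries and payloads are actually measured:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class JsonParseBench {
    private final ObjectMapper mapper = new ObjectMapper();
    private String payload;

    @Setup
    public void setup() {
        // Placeholder payload; a real comparison varies payload size and shape.
        payload = "{\"id\":1,\"name\":\"example\",\"values\":[1,2,3,4,5]}";
    }

    @Benchmark
    public JsonNode parse() throws Exception {
        return mapper.readTree(payload);
    }
}
```

The benchmark jar built from this is then executed with each JVM in turn (e.g. JDK 11 + Graal JIT, GraalVM CE) and the throughput scores are compared.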
Peak performance of Graal AOT (native-image) is lower than that of Graal JIT. You need to benchmark Graal in exactly the compilation mode you're interested in. My impression is that the following properties hold:

Warm-up time (from shortest to longest):

1. GraalVM AOT (native-image)
2. HotSpot
3. GraalVM JIT

Peak performance (from lowest to highest):

1. GraalVM AOT (native-image)
2. HotSpot
3. GraalVM JIT
GraalVM JIT warm-up time (and other kinds of overhead) should improve over time, eventually making it a drop-in replacement for HotSpot. GraalVM AOT peak performance should also improve over time, but I'm rather skeptical that it will match the JIT mode of GraalVM. OTOH, GraalVM AOT could match HotSpot's performance some day, since HotSpot gains few new automatic optimization techniques these days. All of this is just rough estimation and prediction; you need to benchmark your program on a regular basis to see which VM and which compilation mode is best for you.
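One rough way to see both axes (warm-up and peak) for your own workload is to time the same loop iteration by iteration and run the resulting class under each mode (HotSpot, GraalVM JIT, and as a native image). The sketch below uses an arbitrary placeholder workload and is only an eyeball probe, not a substitute for a proper harness such as JMH:

```java
public class WarmupProbe {
    // Arbitrary CPU-bound placeholder workload so the compiler has something to optimize.
    static long work(int n) {
        long acc = 0;
        for (int i = 1; i <= n; i++) {
            acc += (long) i * i % 7919;
        }
        return acc;
    }

    public static void main(String[] args) {
        long sink = 0;
        for (int iter = 1; iter <= 30; iter++) {
            long start = System.nanoTime();
            sink += work(5_000_000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Early iterations show warm-up cost; later ones approximate peak throughput.
            System.out.printf("iteration %2d: %d ms%n", iter, elapsedMs);
        }
        System.out.println("(checksum " + sink + ")"); // keep the result live
    }
}
```

Under a JIT the first iterations are typically much slower than the later ones; in a native image the per-iteration time tends to be flat from the start but may settle at a higher value.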
Yes, these are pretty accurate estimates of the status quo. We are working on getting the warm-up of GraalVM CE and GraalVM EE equal to HotSpot's by linking a native image of the Graal compiler with HotSpot. There will be a prototype of this available in the next release candidate. And we are working on getting GraalVM AOT with PGO to the same level as the GraalVM EE JIT. This is a longer-term effort and will take quite some time. Assuming reasonably accurate profiling feedback is collected, there is no theoretical reason why we shouldn't reach the same performance.
I wish Graal native image were like a C program.
In some benchmarks I ran, GraalVM tends to be faster. Compiling a pure-Java database shows faster boot, and it runs with native-level performance (comparable to the HotSpot JIT) from the beginning. The overall performance improvement was 10% to 20%, depending on the operation. (OpenJDK GraalVM CE 1.0.0-rc16)
Is there any plan to bring PGO to the community edition? According to the docs, it requires an enterprise license, which is prohibitively expensive for many.
No, there is no such plan at the moment. The price may vary depending on the size of your installation and the benefit achieved.
If I compile a Java project I've been working on into a native image using Graal, will its performance increase?