
Do Graal native images run faster than the Java bytecode ordinarily produced #1069

Closed
nodingneu opened this issue Mar 15, 2019 · 10 comments

nodingneu commented Mar 15, 2019

If I compile a Java project I've been working on into a native image using Graal, will its performance increase?

nodingneu changed the title from "Do native images run faster than the Java bytecode ordinarily produced" to "Do Graal native images run faster than the Java bytecode ordinarily produced" on Mar 15, 2019
plokhotnyuk commented Mar 15, 2019

It depends.

The best option is to use profile-guided optimization (PGO).
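
A minimal sketch of that workflow, assuming GraalVM EE (the Fib class is just a made-up stand-in workload, and the --pgo-instrument / --pgo flag names may differ between releases):

```java
// Fib.java - made-up stand-in workload, only here to show the PGO steps
public class Fib {
    static long fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static void main(String[] args) {
        System.out.println(fib(35));
    }
}

// Rough workflow (commands as comments):
//   javac Fib.java
//   native-image --pgo-instrument Fib      # build an instrumented image
//   ./fib                                  # run a representative workload; writes default.iprof
//   native-image --pgo=default.iprof Fib   # rebuild using the collected profile
```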

tarsa commented Mar 16, 2019

AFAIK currently executables produced by native-image have lower peak performance than the original Java code running under GraalVM JIT. OTOH native-image starts faster and can have much lower memory overhead for simple applications.

Here is a comparison: https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/java-substratevm.html

Here's another: http://www.scala-native.org/en/v0.3.8/blog/interflow.html (includes native-image PGO)

As always, take benchmark results with a grain of salt.

PS: native-image consumes the bytecode produced by the standard Java compiler. The main difference is that native-image is an AOT compiler (i.e. it compiles the bytecode to native code ahead of time), while standard HotSpot or GraalVM JIT are JIT compilers (i.e. they compile bytecode to native code "just in time", at runtime).
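
To make the difference concrete, a minimal sketch (the Hello class is made up; the same bytecode goes down both paths):

```java
// Hello.java - one .class file, either JIT-compiled at runtime or AOT-compiled by native-image
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from " + System.getProperty("java.vm.name"));
    }
}

// JIT path:  javac Hello.java && java Hello
//            (HotSpot / Graal JIT compiles hot methods while the program runs)
// AOT path:  javac Hello.java && native-image Hello && ./hello
//            (all compilation happens at image build time, before execution)
```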

@nodingneu
Author

So is it better (more performant) to just use Java, since Graal native images are less efficient at the moment?

plokhotnyuk commented Mar 17, 2019

@theonlygusti it depends on your algorithms and workload... and on the versions of JDK and GraalVM!

Here is a comparison of benchmark results (JSON parsing and serialization for different payloads with different libraries) on JDK 11 + Graal JIT vs. GraalVM CE 1.0.0 RC13. Benchmarks which use a lot of / or % operations with a constant as the second argument are slower (up to 4x) on the outdated Graal JIT compiler than on the latest GraalVM CE.
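
To illustrate the kind of code I mean (a made-up snippet, not one of the linked benchmarks): / and % with a constant second argument can be strength-reduced to multiply/shift sequences, and how well each compiler version does that is exactly what differs here.

```java
// Made-up example of / and % with constant second arguments in a hot loop.
public class DigitSum {
    static int digitSum(int n) {
        int sum = 0;
        while (n != 0) {
            sum += n % 10;  // remainder by a constant
            n /= 10;        // division by a constant
        }
        return sum;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000_000; i++) total += digitSum(i);
        System.out.println(total);
    }
}
```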

For the full picture (excluding results from native images), please see here.

tarsa commented Mar 17, 2019

Peak performance of Graal AOT (native-image) is lower than that of Graal JIT. You need to benchmark Graal in exactly the compilation mode you're interested in.

My impression is that the following properties hold:

Warm-up time (from shortest to longest):

  • GraalVM AOT = native-image (compilation is fully done before execution, so there's no JIT warm-up)
  • HotSpot
  • GraalVM CE JIT
  • GraalVM EE JIT

Peak performance (from lowest to highest):

  • GraalVM AOT = native image, without PGO
  • GraalVM AOT = native image, with PGO
  • HotSpot / GraalVM CE JIT - HotSpot should be better at vectorization, GraalVM CE JIT should be better at optimizing object-oriented code (smarter devirtualization, inlining, escape analysis, etc.; see the sketch after this list)
  • GraalVM EE JIT - has both strong vectorization and strong OO code optimizations
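
A made-up sketch of the object-oriented pattern I have in mind: with a single receiver type the JIT can devirtualize and inline area(), and escape analysis can then remove the short-lived Circle allocation entirely.

```java
// Made-up sketch of code that benefits from devirtualization, inlining and escape analysis.
interface Shape { double area(); }

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Areas {
    // With a single receiver type the JIT can devirtualize s.area(), inline it,
    // and (via escape analysis) avoid allocating the Circle at all.
    static double circleArea(double r) {
        Shape s = new Circle(r);
        return s.area();
    }

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += circleArea(i * 1e-3);
        System.out.println(sum);
    }
}
```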

GraalVM JIT warm-up time (and other types of overhead) should improve over time, eventually making it a drop-in replacement for HotSpot. GraalVM AOT peak performance should also improve over time, but I'm rather skeptical that it will match the JIT mode of GraalVM. OTOH, GraalVM AOT could match HotSpot's performance some day, as HotSpot gains few new automatic optimization techniques these days.

All of this is just rough estimation and prediction. You need to benchmark your program on a regular basis to see which VM and which compilation mode is best for you.
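
For the JIT configurations, a small JMH harness is the usual way to do that (the class and workload below are placeholders): run the same benchmark jar under HotSpot, GraalVM CE and GraalVM EE and compare the reported averages; for native-image, time the executable as a whole instead.

```java
// Placeholder JMH benchmark; the measured method is just a stand-in workload.
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MyWorkloadBench {
    int input = 123_456_789;

    @Benchmark
    public int digitSum() {
        int n = input, sum = 0;
        while (n != 0) { sum += n % 10; n /= 10; }
        return sum;
    }
}
```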

@thomaswue
Member

Yes, these are pretty accurate estimates for the status quo.

We are working on getting warmup of GraalVM CE and GraalVM EE equal to HotSpot by linking a native-image of the Graal compiler with HotSpot. There will be a prototype of this available in the next release candidate.

And we are working on getting GraalVM AOT with PGO to the same level as GraalVM EE JIT. This is a longer term effort and will take quite some time. Assuming reasonably accurate collected profiling feedback, there are no theoretical reasons why we shouldn't reach the same performance.

@nodingneu
Author

I wish Graal native image were like a C program.

gmpassos commented May 9, 2019

In my benchmarks, GraalVM tends to be faster. Compiling a pure-Java DB to a native image shows a faster boot, running with native performance (like warmed-up HotSpot JIT) from the beginning. The overall performance improvement was 10% to 20%, depending on the operation.

(OpenJDK GraalVM CE 1.0.0-rc16)

avindra commented Jun 22, 2020

Is there any plan to bring PGO to the community edition? According to the docs, it requires an enterprise license, which is prohibitively expensive for many.

@thomaswue
Member

No, there is no such plan at the moment. The price may vary depending on the size of your installation and the achieved benefit.
