You have probably heard about HotSpot, but what about Azul Zing? These are just some of the more popular JVM implementations. They all differ in numerous aspects; today, however, I am especially interested in their performance.
Which one is the best? Let’s write some benchmarks and see!
This blog post is the second iteration of something I did back at the beginning of 2019. Back then I was tinkering with concepts of dynamic programming and also wanted to test different implementations of the Java Virtual Machine. It evolved into a simple project; if you are interested, the old code and benchmarks are available on GitHub (fineconstant/dynamic-programming-jmh-jvm).
This time I am doing it the right way: in order to get reliable and reproducible results, I focus on a single aspect to benchmark and use a cloud environment.
In a nutshell, a Java compiler takes source code (Java, Kotlin, Scala, Clojure, etc.) and produces bytecode, which serves as an intermediary, platform-independent language. This means that bytecode is portable across any Java Virtual Machine (JVM), operating system, or underlying hardware. The JVM is responsible for running the code: it takes bytecode and puts it through various steps, and these steps together describe the whole JVM.
I decided to benchmark four of the (subjectively) most popular JVM implementations. Without a doubt, Long-Term Support (LTS) versions of Java are the most commonly used of all, which is why I decided to limit my tests to the two most recent LTS releases – 8 and 11. Java 8 serves as a good reference, but it is legacy and you should not be running it unless you have a good reason to; Java 11 is the current LTS.
When running the code I did not apply any tuning or Java-specific configuration; all JVMs run with their default settings. The following sections contain a detailed description of the JVM versions I checked.
This is the most popular and most widespread variant of the JVM. It is implemented in C++ and was originally maintained by Oracle Corporation. Currently, this responsibility has been taken over by OpenJDK, where HotSpot is developed by the community and other organizations. If you want to get this JVM, AdoptOpenJDK is the best place to go.
HotSpot Java 8
openjdk version "1.8.0_275" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_275-b01) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.275-b01, mixed mode)
HotSpot Java 11
openjdk 11.0.9.1 2020-11-04 OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.9.1+1) OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.9.1+1, mixed mode)
Developed by IBM and previously known as IBM J9, it is the runtime engine for many of IBM’s enterprise products. In 2017, IBM J9 became an Eclipse Foundation project and changed its name to Eclipse OpenJ9.
Compared to HotSpot, OpenJ9 promises quicker start-up times and lower memory consumption at similar overall throughput – I will test that last claim later. If you want to download and check OpenJ9, head to AdoptOpenJDK, where binaries and archives are available.
OpenJ9 Java 8
openjdk version "1.8.0_275" OpenJDK Runtime Environment (build 1.8.0_275-b01) Eclipse OpenJ9 VM (build openj9-0.23.0, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20201110_845 (JIT enabled, AOT enabled) OpenJ9 - 0394ef754 OMR - 582366ae5 JCL - b52d2ff7ee based on jdk8u275-b01)
OpenJ9 Java 11
openjdk 11.0.9 2020-10-20 OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.9+11) Eclipse OpenJ9 VM AdoptOpenJDK (build openj9-0.23.0, JRE 11 Linux amd64-64-Bit Compressed References 20201022_810 (JIT enabled, AOT enabled) OpenJ9 - 0394ef754 OMR - 582366ae5 JCL - 3b09cfd7e9 based on jdk-11.0.9+11)
GraalVM is Oracle’s newest JVM implementation and contains some very distinctive features:
- GraalVM Compiler – a completely new JIT compiler written in Java
- Native Image – allows compiling applications into small, self-contained native OS binaries
- Performance – high application throughput and reduced latency
GraalVM comes in two variants: Community Edition (CE) and Enterprise Edition (EE). CE is free and open, whereas EE is paid but contains some additional performance, scalability and security tweaks.
I personally feel very excited about this one as it may bring some freshness and competition as well as change the way we create our JVM applications – just take a look at Spring Boot Native.
GraalVM CE Java 8
openjdk version "1.8.0_272" OpenJDK Runtime Environment (build 1.8.0_272-b10) OpenJDK 64-Bit Server VM GraalVM CE 20.3.0 (build 25.272-b10-jvmci-20.3-b06, mixed mode)
GraalVM CE Java 11
openjdk 11.0.9 2020-10-20 OpenJDK Runtime Environment GraalVM CE 20.3.0 (build 11.0.9+10-jvmci-20.3-b06) OpenJDK 64-Bit Server VM GraalVM CE 20.3.0 (build 11.0.9+10-jvmci-20.3-b06, mixed mode, sharing)
GraalVM EE Java 8
java version "1.8.0_271" Java(TM) SE Runtime Environment (build 1.8.0_271-b09) Java HotSpot(TM) 64-Bit Server VM GraalVM EE 20.3.0 (build 25.271-b09-jvmci-20.3-b06, mixed mode)
GraalVM EE Java 11
java 11.0.9 2020-10-20 LTS Java(TM) SE Runtime Environment GraalVM EE 20.3.0 (build 11.0.9+7-LTS-jvmci-20.3-b06) Java HotSpot(TM) 64-Bit Server VM GraalVM EE 20.3.0 (build 11.0.9+7-LTS-jvmci-20.3-b06, mixed mode, sharing)
Made by Azul, Zing features enhancements to garbage collection, JIT compilation, and warmup behavior. Azul aims to improve overall application execution metrics and performance indicators, especially at high and very high scale:
- C4 – a disruption-free garbage collector
- Falcon – an LLVM-based JIT compiler
- ReadyNow – reduces application startup time
Zing is a paid JVM available from Azul’s website.
Azul Zing Java 8
java version "1.8.0_271" Java Runtime Environment (Zing 20.10.0.0-b4-CA-linux64) (build 1.8.0_271-b4) Zing 64-Bit Tiered VM (Zing 20.10.0.0-b4-CA-linux64) (build 1.8.0_271-zing_20.10.0.0-b4-product-linux-X86_64, mixed mode)
Azul Zing Java 11
java 11.0.9.0.101 2020-10-27 LTS Java Runtime Environment Zing 20.10.0.0+4-CA (build 11.0.9.0.101+5-LTS) Zing 64-Bit Tiered VM Zing 20.10.0.0+4-CA (build 11.0.9.0.101-zing_20.10.0.0-b4-product-linux-X86_64, mixed mode)
The last time I did this comparison I used my own PC; this time I want the results to be more reliable and not affected by other processes running on the machine. That is why I decided to use a brand-new virtual machine provisioned in the cloud. These days most production environments are located in the cloud, so this configuration comes naturally.
For the testing platform I chose an e2-standard-2 (2 vCPUs, 8 GB memory) VM running on Google Cloud Platform Compute Engine with an Intel Skylake CPU.
The operating system is the latest available CentOS Linux, which is CentOS Linux release 8.2.2004 (Core); uname -mrs reports Linux 4.18.0-193.28.1.el8_2.x86_64 x86_64.
JMH lets you build and run macro-, milli-, micro-, and nano-benchmarks using any language targeting the JVM. It is the proper, conscious way of benchmarking JVM code.
Using JMH is very simple, as it takes advantage of Java annotations to generate synthetic benchmark code. It allows configuring various aspects of performance tests, such as benchmark mode, time units, warmup and measurement iterations, or even the level of parallelism – all through annotations.
Java Microbenchmark Harness takes care of two of the most important, and often overlooked, matters when measuring the performance of JVM code:
- before each test it warms up the Java Virtual Machine to ensure that the code is fully compiled – not interpreted;
- it provides a side effect that prevents the JIT compiler from eliminating dependent computations.
In my benchmarks I used the Gradle JMH Plugin, so running the tests is as simple as calling ./gradlew jmh.
Below is an efficient, tail-recursive implementation of the Fibonacci sequence written in Kotlin 1.4.20 – just something to keep the CPU occupied. The function returns the n-th number of the Fibonacci sequence.
Full source code for this function and the benchmark is available on GitHub (fineconstant/jvm-performance-comparison).
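The original listing was lost in conversion, so here is a sketch reconstructed from the description below; the exact body may differ slightly from the repository version:

```kotlin
import java.math.BigInteger

object Fibonacci {
    // Entry point ("line 2" in the text): delegates to the tail-recursive helper
    // with the starting parameters fib(0) = 0 and fib(1) = 1.
    fun apply(n: Int): BigInteger = apply(n, BigInteger.ZERO, BigInteger.ONE)

    // "Line 4" in the text: `tailrec` makes the compiler reject any
    // implementation that is not truly tail-recursive.
    private tailrec fun apply(n: Int, nMinusTwo: BigInteger, nMinusOne: BigInteger): BigInteger =
        if (n == 0) nMinusTwo
        else apply(n - 1, nMinusOne, nMinusTwo + nMinusOne)
}
```

Because the recursion is in tail position, the compiler turns it into a loop, so even large values of n run in constant stack space.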
First (line 2) there is an initial call to the apply(n: Int, nMinusTwo: BigInteger, nMinusOne: BigInteger): BigInteger function with some starting parameters – that is where all the hard work happens. The tailrec keyword (line 4) makes the compiler accept only code that really is tail-recursive and thus prevents a StackOverflowError from happening.
The JmhBenchmark.kt file, located in the src/jmh/kotlin folder, is the definition of my JMH benchmark.
In this file there is configuration describing the benchmark as well as a direct call to code under the test.
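The benchmark class itself was also lost in conversion; reconstructed from the configuration described below, it looks roughly like this (the value of fibonacciN is my assumption – the original may use a different input):

```kotlin
import org.openjdk.jmh.annotations.*
import org.openjdk.jmh.infra.Blackhole
import java.util.concurrent.TimeUnit

@BenchmarkMode(Mode.Throughput)          // line 5: measure operations per time unit
@OutputTimeUnit(TimeUnit.SECONDS)
@State(Scope.Benchmark)
@Fork(10)                                // lines 8-10: 10 forks,
@Warmup(iterations = 10, time = 1)       // 10 warmup iterations of 1 s each,
@Measurement(iterations = 20, time = 1)  // 20 measurement iterations of 1 s each
@Threads(1)
open class JmhBenchmark {
    private val fibonacciN = 500         // assumed input size

    @Benchmark                           // lines 15-18: the code under test
    fun fibonacciTailrec(bh: Blackhole) {
        bh.consume(Fibonacci.apply(fibonacciN))
    }
}
```

Note that the class and method are deliberately non-final (open class) so that JMH's generated harness code can work with them.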
At the beginning there are all the required imports from the JMH library.
Starting from line 5, I configure JMH to measure overall throughput; alternatively, you could measure the time of a single execution, the average time, or all of these combined. Then I define the time unit for output and the scope for which JMH should keep state – if there is any.
Lines 8–10:
There are 10 forks; each consists of 10 warmup iterations lasting 1 second each, followed by 20 measurement iterations, also lasting 1 second each. Tests run on a single thread.
Lines 15–18:
This is where I define what I want to benchmark. If you simply called the Fibonacci.apply(fibonacciN) function, the JVM would notice that its result is not consumed by anything and simply eliminate the call – you would measure nothing. To avoid that, JMH provides a Blackhole object that is injected into the function – fun fibonacciTailrec(bh: Blackhole) – and bh.consume(...) wraps the call to my Fibonacci function.
The following chart shows the scores for all the tested JVMs relative to the best one, which turned out to be GraalVM EE Java 8.
If you are interested in more detailed results, they are listed in the next section.
GraalVM has a rather significant advantage over all other Java Virtual Machines.
What is interesting is that the Java 8 variant is about 7 percentage points better than Java 11, for both the Community and Enterprise Editions equally.
Additionally, the Enterprise Edition contains some extra performance tweaks, so unsurprisingly it is better than its free counterpart – though only by about 4 percentage points.
It is not that huge of a difference, so if you are using the Community Edition you are not losing much performance.
Then there is Azul's Zing, where – considering measurement error – Java 8 and Java 11 perform virtually the same. Zing's throughput is roughly 12 percentage points worse than GraalVM's.
Next in the ranking is HotSpot Java 11; it displays a considerable improvement over Java 8 – 21 percentage points. Kudos to all the developers who contributed to this upgrade over the years; that surely was not an easy feat.
HotSpot Java 8 and OpenJ9 (Java 8 and 11) close the list; their scores are roughly the same. It is worth noting that they show less than half the throughput of the best GraalVM variant.
OpenJ9's engineers took a different path than HotSpot's and decided to invest their efforts into reducing application size and memory consumption.
Below are detailed numbers as well as a table containing measurement errors. Throughput is measured with operations (Fibonacci function calls) per second.
| Implementation | Java version | Score [ops/sec] | Error [± ops/sec] |
|----------------|--------------|-----------------|-------------------|
| GraalVM EE     | Java 8       | 3 806 237       | 11 048            |
| GraalVM CE     | Java 8       | 3 720 403       | 8 470             |
| GraalVM EE     | Java 11      | 3 572 857       | 19 923            |
| GraalVM CE     | Java 11      | 3 461 576       | 12 722            |
| Zing           | Java 8       | 2 988 555       | 19 464            |
| Zing           | Java 11      | 2 985 787       | 23 498            |
| HotSpot        | Java 11      | 2 609 728       | 9 275             |
| HotSpot        | Java 8       | 1 786 321       | 10 416            |
| OpenJ9         | Java 11      | 1 752 337       | 12 601            |
| OpenJ9         | Java 8       | 1 729 035       | 17 056            |
To conclude: you learned about the basic differences between some of the available JVM implementations, you saw how to use JMH to benchmark a piece of code, and finally, the tests showed that GraalVM is clearly the winner when it comes to throughput.
If you have ever run JMH yourself, you must have noticed that after every finished test it displays the following warning:
REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial experiments, perform baseline and negative tests that provide experimental control, make sure the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts. Do not assume the numbers tell you what you want them to tell.
I did not do this because in this particular benchmark I was not interested in measuring various implementations of some function (to check which one is better), but rather in comparing JVM platforms as a whole. In general, remember to always follow these instructions, as doing so provides valuable insight into your results and helps you understand them.
As I mentioned in the beginning, you should always prefer the latest LTS versions of the JVM. I am aware that performance on its own is not a sufficient factor when choosing a JVM; you should also consider aspects like:
- enterprise standards and conventions;
- vendor support;
- engineers’ skills and habits;
- and others…
The complete source code from this article is available on GitHub (fineconstant/jvm-performance-comparison).
This is my first blog post, so I am looking forward to your opinions and suggestions – what do you think? Please feel free to contact me directly or just leave a comment here 😃
Based on the feedback I might improve and refine this post. I am also thinking about working on other articles regarding memory consumption or concurrency on JVM.