A lot is happening in the world of JVMs lately. Oracle changed its support roadmap for the Oracle JDK. GraalVM has been open sourced. AdoptOpenJDK provides binaries and is supported by (among others) Azul Systems, IBM and Microsoft. Large software vendors such as Amazon (Corretto), Red Hat and SAP provide their own supported OpenJDK distributions. Next to OpenJDK there are also different JVM implementations such as Eclipse OpenJ9, Azul Systems Zing and GraalVM (which allows the creation of native images). Other variables include the JDK version used and whether you run the JDK directly on the OS or inside a container. In addition, JVMs support different garbage collection algorithms, which influence your application's behavior. There are many options for running your Java application, and choosing the right ones matters! Performance is often an important factor when choosing a JVM. How do the different JVMs compare with respect to performance when running different microservice implementations? Does a specific framework perform best on a specific JVM implementation? I have performed elaborate measurements of (among other things) start-up times, response times, CPU usage, memory usage and garbage collection behavior for these different JVMs with several frameworks, such as reactive Spring Boot (WebFlux), regular Spring Boot, MicroProfile, Quarkus, Vert.x and Akka. During this presentation I will describe the test setup used and show you some remarkable differences between the different JVM implementations and microservice frameworks. Differences between running a JAR and a native image are also shown, as are the effects of running inside a container. This will help you choose the JVM with the right characteristics for your specific use case!
4. Who am I?
Who is Maarten?
• Principal Consultant
Software Architect at AMIS / Conclusion
• Several certifications
SOA, BPM, MCS, Java, SQL, PL/SQL,
Mule, AWS, etc
• Enthusiastic presenter and blogger
http://javaoraclesoa.blogspot.com
@MaartenSmeetsNL
https://nl.linkedin.com/in/smeetsm
Maarten.Smeets@amis.nl
5. What is the CJIB?
• The Central Judicial Collection Agency
part of the Ministry of Justice and Security in the Netherlands
• The CJIB is responsible for collecting a range of different fines, such as traffic
fines and punitive orders.
• Works together with EU Member States when it comes to collecting fines.
• Plays a key enforcement role in decisions relating to criminal matters, such as
• court rulings
• decisions made by one of the Public Prosecution Service’s public
prosecutors
• Located in Leeuwarden, Friesland
Where do I work?
6. CJIB: Key figures 2017
• Traffic fines: 9,223,477
• Fees cost awards: 3,846
• Fines imposed by the court: 57,900
• Compensation measures: 13,563
• Aid with problematic debts
• Transaction proposals: 4,575
• Principal custodial sentences: 13,485
• Administrative premium: 2,081,270
• Public Prosecutor Service settlements: 284,642
• Coordination of community service: 36,630
• Incoming European fines: 1,038
• Outgoing European fines: 49,766
• Confiscation measures: 1,690
• Conditional release: 1,043
• Administrative fines: 40,608
• Supervision: 15,021
• Converted community service sentences: 7,657
• Youth authority: 5,258
7. The CJIB and Java
• 1400 people. ICT department of around 325 people. 100 Java developers
• 30 teams using Scrum and SAFe. Tight integration between business and IT
• Solid CI/CD pipelines and release train
• Automated testing using Cucumber, Gherkin
• Code quality checks using SonarQube
• Bamboo, Puppet, Git, Maven, Vault
• Running on Red Hat 7 and OpenJDK 8, with Spring Boot microservices on Jetty
• Innovation lab
• Blockchain
• Machine Learning
CJIB ICT
8. Disclaimer
The performance tests mentioned in this presentation were conducted with the intention of obtaining information on
what performance differences can be expected from running various frameworks in various JVMs using various
garbage collection algorithms. A best effort has been made to conduct an unbiased test. Still, performance
depends on many parameters such as hardware, specifics of the framework implementation, the usage of
different back-ends, concurrency, versions of libraries, OS, virtualization and various other factors. I cannot
provide any guarantees that the same values will be achieved with other than tested configurations. The use of
these results is at your own risk. I shall in no case accept liability for any loss resulting from the use of the results
or decisions based on them.
Let yourself be inspired and perform your own tests!
10. Code tested
Which framework parts are tested
• HTTP server
• Routing. Mapping of HTTP request to Java method
• Parsing of the URL for a GET request
• Creating a response object based on data from the request
• Serialize the response object to JSON
(Diagram: HTTP server → routing → request parsing → response generation → response serialization)
Microservice frameworks
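The tested call path can be sketched as a chain of small functions. This is an illustrative Python sketch only; the actual implementations under test are Java, and the `/hello?name=...` endpoint is a hypothetical example.

```python
import json
from urllib.parse import urlparse, parse_qs

def parse_request(url: str) -> dict:
    """Request parsing: extract query parameters from a GET request URL."""
    return {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}

def build_response(params: dict) -> dict:
    """Response generation: create a response object from request data."""
    return {"greeting": f"Hello, {params.get('name', 'world')}!"}

def serialize(response: dict) -> str:
    """Response serialization: turn the response object into JSON."""
    return json.dumps(response)

# One pass through the measured pipeline:
body = serialize(build_response(parse_request("/hello?name=JVM")))
# body == '{"greeting": "Hello, JVM!"}'
```

Each framework in the test implements these same stages; the benchmark measures how efficiently it does so, not what the handler computes.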
11. No framework
• Minimal implementation using Java SE code only
• No reflective code and no frameworks
• Easy native compilation
• Of course the fastest option, and the one that uses the fewest resources
12. Microservice frameworks
Spring Boot
• Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you
can "just run".
• An opinionated view of the Spring platform and third-party libraries so you can get started with
minimum fuss. Most Spring Boot applications need very little Spring configuration.
• Supported by Pivotal / VMware
13. Microservice frameworks
Spring Boot Reactive / WebFlux
• Spring WebFlux is
– fully non-blocking
– supports Reactive Streams back pressure
– and runs on such servers as Netty, Undertow, and Servlet 3.1+ containers
• Supported by Pivotal / VMware
14. Microservice frameworks
Spring Fu
• Spring Fu is
– an incubator for Kofu (Ko for Kotlin, fu for functional)
– provides a Kotlin API to configure Spring Boot applications programmatically
– allows for native compilation on GraalVM while ‘regular’ Spring does not
15. Microservice frameworks
Quarkus
• A Kubernetes Native Java stack tailored for GraalVM & OpenJDK HotSpot
crafted from the best of breed Java libraries and standards
• Extensions configure, boot and integrate a framework or technology into your Quarkus application.
These extensions also provide the right information to GraalVM for your application to compile
natively
• Supported by Red Hat / IBM
16. Microservice frameworks
Eclipse Vert.x
• Eclipse Vert.x is event driven and non blocking.
• You can use Vert.x with multiple languages including
Java, JavaScript, Groovy, Ruby, Ceylon, Scala and Kotlin.
• Vert.x is flexible and unopinionated
• Quarkus provides a Vert.x implementation
• Supported by the Eclipse foundation
17. Microservice frameworks
Akka
• Akka is a toolkit for building highly concurrent, distributed, and resilient message-driven applications
for Java and Scala
• Actors and Streams let you build systems that scale up, using the resources of a server more
efficiently, and out, using multiple servers
• Supported by Lightbend
18. Microservice frameworks
Micronaut
• A modern, JVM-based, full-stack framework for building
modular, easily testable microservice and serverless applications.
• Micronaut uses no reflection.
This makes it easier for Micronaut applications to run on GraalVM.
• You can run Spring applications as Micronaut applications
making native compilation possible (Micronaut for Spring)
• Supported by Object Computing
19. Microservice frameworks
Helidon
• Helidon SE
– Microframework
– Functional style
– Reactive
– Can be natively compiled
• Helidon MP
– MicroProfile
– Declarative style
– CDI, JAX-RS, JSON-P
– Cannot be natively compiled (CDI)
• Supported by Oracle
20. Microservice frameworks
Open Liberty, a MicroProfile implementation
• MicroProfile is an open forum to optimize Enterprise Java for a microservices architecture by innovating across multiple implementations and collaborating on common areas of interest, with a goal of standardization.
• It specifies the minimal set of Java Enterprise APIs
required to build a microservice.
21. JVMs differ
• Licensing / support
• Memory usage / NUMA support
• Garbage collection algorithms
• Start-up caching
• Other features
JVMs
22. Test setup
Framework versions used Hardware used
• Intel Core i7-8700
• hexa-core
• 12 threads
• Max 4.6GHz
• 32Gb DDR4L (2400MHz)
OS used
• Running Linux Mint 19.1 (Tessa)
based on Ubuntu 18.04LTS (Bionic Beaver)
• Docker server 18.06.1-ce client 18.09.2
Framework     | Version   | HTTP server
Spring Boot   | 2.1.4     | Tomcat
Spring Fu     | 0.0.5     | Reactor Netty
WebFlux       | 2.1.4     | Reactor Netty
Akka 2.12     | 10.1.8    | Akka HTTP
Open Liberty  | 19.0.0.4+ | Open Liberty
Vert.x        | 3.7.0     | Vert.x
Quarkus       | 0.15.0    | Reactor Netty
Helidon       | 1.1.1     | Helidon webserver
My laptop
23. What did I do?
• Create minimal but comparable implementations for every framework
Java, Kotlin
• Create a script to loop over scenarios
JVMs, Microservice implementations, GC-algorithms, etc
Bash, Python
• Create multithreaded load generators and compare results
Python, Node/JavaScript
• Containerize the implementations; makes testing JVMs and resource isolation easy
Docker
• Summarize results (determine average, standard deviation, Prometheus results)
Bash (awk, sed, curl), Python, Apache Bench
• Run the setup under various conditions
First priming followed by 15 minutes per framework per JVM per variable (weeks of data)
• Visualize results
Python / Jupyter: pandas, numpy, pyplot
Test setup
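The load-generation and summarization steps above can be sketched as follows. This is a minimal sketch, not the actual test scripts; the URL and request counts are placeholder assumptions.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_request(url: str) -> float:
    """Fire one GET request and return the response time in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

def summarize(times_ms: list) -> dict:
    """Determine the average and standard deviation of measured times."""
    return {"avg_ms": statistics.mean(times_ms),
            "stdev_ms": statistics.pstdev(times_ms)}

def run_load(url: str, requests: int, threads: int) -> dict:
    """Fire `requests` GETs from `threads` worker threads and summarize."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        times = list(pool.map(lambda _: timed_request(url), range(requests)))
    return summarize(times)

if __name__ == "__main__":
    # Placeholder endpoint; in the tests, one container per JVM/framework
    # combination was started and hit for 15 minutes.
    print(run_load("http://localhost:8080/hello", requests=1000, threads=8))
```

The real setup looped this over every JVM/framework combination and wrote the summaries to an output file for later visualization.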
24. Test setup response times
15m per test
• Build and run container: Docker base image with JVM + microservice framework fat JAR
• Load generation: load generator (Python)
• runtest.sh: loops over JVMs and frameworks; starts processes, cleans up and summarizes results into outputfile.txt
• groupedbarplot.py: data validation, data description and data visualization from results.txt (measures per JVM per framework)
25. Test setup throughput + feature importance
20s per test (many variables)
• No more containers
Effects of containers are slower startup and Docker NAT overhead
No more need to create and clean up containers -> simpler code
• Apache Bench for testing
No more need to split individual measures and summary data
Proven simple to use tool (also used to test Apache HTTPD)
• Python for running the tests / manage the processes
Easier than Bash for complex testing
• Jupyter for visualization instead of plain Python
Displaying graphs and developing the script in small bits is easier
• Looked at many variables
Average response time and throughput
– Concurrent requests
1,4,8
– CPU cores assigned to the JVM
1,4,8
– Memory
512 MB, 128 MB, 50 MB
– Frameworks
10x
– GC algorithms
all available for every JVM
– JVMs
OpenJDK, Oracle JDK, OpenJ9, Zing
– JVM version
8,11,12,13
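Looping over these variables with Apache Bench can be sketched like this. The variable lists mirror the slide; the `ab` flags (`-t` time limit, `-c` concurrency) are standard, but the URL and process orchestration are simplified assumptions.

```python
import itertools
import subprocess

# Variable lists taken from the slide; GC algorithms, CPU cores and
# memory limits were iterated in the same way.
JVMS = ["openjdk", "oraclejdk", "openj9", "zing"]
JVM_VERSIONS = ["8", "11", "12", "13"]
CONCURRENCY = [1, 4, 8]

def ab_command(url: str, concurrency: int, seconds: int = 20) -> list:
    """Build an Apache Bench invocation: time-limited run at one
    concurrency level."""
    return ["ab", "-t", str(seconds), "-c", str(concurrency), url]

def all_scenarios():
    """Yield every (jvm, version, concurrency) combination to test."""
    yield from itertools.product(JVMS, JVM_VERSIONS, CONCURRENCY)

if __name__ == "__main__":
    for jvm, version, conc in all_scenarios():
        # Here the real script would start the JVM + framework process,
        # wait for it to come up, fire the load, and clean up afterwards.
        subprocess.run(ab_command("http://localhost:8080/hello", conc))
```

With 20 seconds per test, the full cross-product of variables is what makes the run take so long, not any individual measurement.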
26. Test setup throughput + feature importance
20s per test (many variables)
• Build and run: JVM + microservice framework fat JAR (no container)
• Load generation: Apache Bench
• run_test.py: loops over all variables; starts processes and cleans up, writing results.txt
• feature importance.ipynb: data validation, data description and data visualization
28. • Images available on Docker Hub
– OpenJDK
– AdoptOpenJDK
– Oracle GraalVM
– Amazon Corretto
– Eclipse OpenJ9
– Azul Zulu
• Images not available on Docker Hub
(due to license restrictions)
– Oracle JDK
– Azul Zing
Running in a container
29. Containers
Results
• Hosting on Docker gives worse response times than hosting locally
• Docker NAT is expensive!
• Application startup in Docker is worse
• Shared libraries need to be loaded
• JVM caching features are not available or difficult to implement
• Docker to Docker is not faster than local to Docker
• Everything outside a container is fastest
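One way to reproduce the container comparison is to pin resources with standard Docker flags; `--network host` bypasses the NAT layer responsible for the overhead above. A sketch (image name hypothetical):

```python
def docker_run_command(image: str, cpus: int, memory_mb: int,
                       host_network: bool = False) -> list:
    """Build a `docker run` command with CPU/memory limits.

    --network host skips Docker NAT (the measured overhead);
    otherwise port 8080 is published through the NAT as usual.
    """
    cmd = ["docker", "run", "--rm",
           "--cpus", str(cpus),
           "--memory", f"{memory_mb}m"]
    if host_network:
        cmd += ["--network", "host"]
    else:
        cmd += ["-p", "8080:8080"]
    return cmd + [image]

# Example: 4 CPUs, 512 MB, no NAT overhead
# docker_run_command("my-framework-image", 4, 512, host_network=True)
```

The resource flags are also what made containerizing attractive for the tests in the first place: CPU and memory isolation per scenario without touching the host.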
32. Microservice frameworks
Which framework gives best response times
• Akka gives the worst performance, Vert.x the best.
• Reactive frameworks (Akka, Vert.x, WebFlux)
do not outperform non-reactive frameworks
(Microprofile, Quarkus, Spring Boot, Spring Fu)
33. Java versions
What happens when migrating from Java 8 to Java 11?
• Java 8 and 11 behave very similarly for every framework
JDK 11 is slightly slower than 8
• OpenJ9 benefits most from going to JDK 11
Especially for Spring Boot and Akka
(Chart: response time [ms] by Java version)
34. JVMs
Which JVM gives best response times?
• OpenJDK and Oracle JDK perform similarly for every framework
(no consistent winner)
• Substrate VM (native compilation) does worse than Oracle JDK and
OpenJDK
• OpenJ9 does worst for every framework followed by Zing
36. Application startup
Native compilation and startup
• Native compilation (Substrate VM) greatly reduces start-up time
• OpenJ9 is fastest to start outside a container
Shared classes cache (SCC) and dynamic ahead-of-time (AOT) compilation (?)
• Zing and OpenJ9 are slow to start in a container
Have not looked at Zing Compile Stashing and ReadyNow! features
• Quarkus, Helidon SE, Vert.x start approximately 5x faster than
Spring Boot and approximately 10x faster than Open Liberty!
37. • OpenJ9 Balanced
Divide memory in individually managed blocks. Good for NUMA
• OpenJ9 Metronome
Garbage collection occurs in small interruptible steps
• OpenJ9 OptAvgPause
Uses concurrent mark and sweep phases. Reduces pause times
• OpenJ9 OptThruPut
Optimizes on throughput but long pause times (app freezes)
• OpenJDK Shenandoah GC (Java 12)
A low pause time algorithm which does several tasks concurrently
No increased pause times with a larger heap
• OpenJDK / Oracle JDK ZGC (Java 12)
Scalable (concurrent) low latency garbage collector
• Zing C4 GC
Continuously Concurrent Compacting Collector
Pauseless garbage collection
• OpenJDK / Oracle JDK G1GC (default Java 9+)
Compact free memory space without lengthy pause times
• OpenJDK / Oracle JDK Parallel (default Java 8)
High-throughput GC which does not allow memory shrinking
• OpenJDK / Oracle JDK ConcMarkSweep (EOL?)
Designed for lower latency / pause times than other parallel collectors
• OpenJDK / Oracle JDK Serial GC
Single threaded GC which freezes the application during collection
• OpenJ9 Generational Concurrent policy
Minimize GC pause times without compromising throughput
GC algorithms
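The collectors above are selected with JVM command-line flags: HotSpot-based JDKs use `-XX:+Use...GC` (with an extra unlock flag for the then-experimental ZGC and Shenandoah), while OpenJ9 uses `-Xgcpolicy:...`. A sketch of assembling the java invocation for one GC scenario (the flags are real; the JAR name and heap size are placeholders):

```python
HOTSPOT_GC_FLAGS = {
    "serial": ["-XX:+UseSerialGC"],
    "parallel": ["-XX:+UseParallelGC"],
    "cms": ["-XX:+UseConcMarkSweepGC"],
    "g1": ["-XX:+UseG1GC"],
    "zgc": ["-XX:+UnlockExperimentalVMOptions", "-XX:+UseZGC"],
    "shenandoah": ["-XX:+UnlockExperimentalVMOptions",
                   "-XX:+UseShenandoahGC"],
}

OPENJ9_GC_FLAGS = {
    "gencon": ["-Xgcpolicy:gencon"],          # generational concurrent
    "balanced": ["-Xgcpolicy:balanced"],
    "metronome": ["-Xgcpolicy:metronome"],
    "optavgpause": ["-Xgcpolicy:optavgpause"],
    "optthruput": ["-Xgcpolicy:optthruput"],
}

def java_command(jar: str, gc_flags: list, heap_mb: int = 2048) -> list:
    """Assemble the java invocation used for one GC scenario."""
    return ["java", f"-Xmx{heap_mb}m", *gc_flags, "-jar", jar]
```

Zing's C4 collector needs no flag; it is the only collector Zing ships.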
38. GC algorithms
How do GC algorithms influence response times (2Gb heap)
• OpenJ9 did worst (with and without sharedclasses)
Metronome does best for OpenJ9
• Every JVM (OpenJ9, Zing, OpenJDK)
achieves similar performance
for every available GC algorithm (at 2Gb heap!)
• OpenJDK Serial GC did best
(Charts: response times per GC algorithm for OpenJDK, OpenJ9 and Zing)
39. GC algorithms, 20 MB heap
(Charts: AdoptOpenJDK vs. Eclipse OpenJ9)
How do GC algorithms influence response times (20Mb heap)
• GC algorithms influence the minimal required memory
The OpenJ9 Metronome GC ran out of memory
• When memory is limited
Do not use Shenandoah (30 ms) or the parallel GC
OpenJ9 does better than AdoptOpenJDK
• Azul Zing cheated!
‘Warning Maximum heap size rounded to 1024 MB’
• OpenJ9 with the OptThruPut GC is the winner!
It produces the best performance on limited memory
• For AdoptOpenJDK, the Serial GC does best
40. • Which feature is most important in
determining response times and throughput?
– Garbage collection algorithms
– Concurrency
– JVM supplier
– JVM version
– Microservice framework
– CPUs
– Memory
• Determine feature importance
using Random Forest Regression
Feature importance
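Ranking the variables by importance with Random Forest Regression can be sketched with scikit-learn as below. This is a sketch on synthetic data: the real analysis ran on the collected results, and the encoding of categorical variables (framework, JVM, GC algorithm) into numbers is assumed and not shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Feature columns mirror the slide; each row would be one test scenario,
# the target the measured response time or throughput.
FEATURES = ["gc_algorithm", "concurrency", "jvm", "jvm_version",
            "framework", "cpus", "memory"]

def rank_features(X: np.ndarray, y: np.ndarray) -> list:
    """Fit a random forest and return (feature, importance) pairs,
    sorted from most to least important."""
    model = RandomForestRegressor(n_estimators=50, random_state=42)
    model.fit(X, y)
    return sorted(zip(FEATURES, model.feature_importances_),
                  key=lambda pair: pair[1], reverse=True)
```

`feature_importances_` sums to 1, so the ~70% framework share quoted later can be read directly off the top entry of such a ranking.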
42. • Framework choice is very important when considering performance (~70%)
Quarkus, Vert.x, Helidon SE
– are fast to start
– can be natively compiled
– give best throughput and response times
• Low on memory: consider OpenJ9 and think about garbage collection
• Is performance important? Don’t run in a container!
– Docker NAT is overhead (response time, throughput)
– JVM caching features (Zing, OpenJ9) are hard to use (start-up)
Recommendations
43. • JDK 11 has slower start-up times and slightly worse performance than JDK 8
• OpenJDK variants (GraalVM, Corretto, AdoptOpenJDK, Zulu) and Oracle JDK
perform pretty similarly. Does not matter much which one you use when looking
at performance
• Native images have far faster startup times but worse response times.
Many frameworks and libraries do not support this yet
Cloud / Serverless environments are good use cases to consider native images
Recommendations
44. Choices the CJIB made
Framework: Spring Boot
Quick for development and can run standalone
Jetty servlet engine
Efficient in memory usage and performance
OpenJDK 8 on Red Hat 7 (moving to Kubernetes)
CJIB already has Red Hat licenses for support
Red Hat is steward for OpenJDK 8 and 11
Spring Boot runs well on OpenJDK
Garbage Collection
Default as long as no performance issues
45. Suggestions
• Sources:
https://github.com/MaartenSmeets/jvmperformance
• Get help from the JVM and framework suppliers
Several got in touch and provided valuable feedback
• Do your own tests.
Your environment and application is unique. Results might differ
Questions?
@MaartenSmeetsNL
https://nl.linkedin.com/in/smeetsm
Maarten.Smeets@amis.nl
Theme: The CJIB – Key figures. Collecting traffic fines is still one of the CJIB's largest tasks. In 2017 the CJIB collected over 9.2 million traffic fines. Besides traffic fines, the CJIB also collects, for example, criminal-law fines imposed by the courts; administrative-law premiums, such as the health insurance premium; and administrative fines, for instance a fine a company receives from the labour inspectorate for not meeting safety standards. The CJIB now has a wide range of tasks in the areas of Collection & Recovery and Coordination & Information. (Source of figures: CJIB, Management Information, 23 February 2018.)
• Apache Bench is great for simple tests
• Effects of running in containers
• Python beats Bash for complex scripts
• Python in Jupyter is handy for visualization