The document discusses tuning garbage collection in the Java Virtual Machine. It recommends sizing the generations based on an application's object longevity and object sizes to reduce premature promotions, a major cause of garbage collection pauses. Keeping allocation and promotion rates low also reduces garbage collection frequency, and plotting metrics such as allocation rate, promotion rate, and heap occupancy over time helps in analyzing garbage collection performance.
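A rough way to observe the allocation rate discussed above is to sample heap occupancy around a burst of allocations. The sketch below uses the standard `MemoryMXBean` API; the class name, allocation sizes, and loop are illustrative choices, not from the document, and the estimate is only meaningful if no GC runs between the two samples:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class AllocationRateSample {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        // Sample heap occupancy before a burst of allocations.
        long before = memory.getHeapMemoryUsage().getUsed();
        long start = System.nanoTime();

        // Allocate roughly 1 MB in small chunks; keeping the array
        // reachable stops the JIT from eliminating the allocations.
        byte[][] sink = new byte[1000][];
        for (int i = 0; i < sink.length; i++) {
            sink[i] = new byte[1024];
        }

        long elapsedNanos = System.nanoTime() - start;
        long after = memory.getHeapMemoryUsage().getUsed();

        // Best-effort estimate: a GC between the samples would skew
        // (or even negate) the delta, so treat this as indicative only.
        double seconds = Math.max(elapsedNanos, 1) / 1e9;
        double mbPerSec = (after - before) / 1e6 / seconds;
        System.out.printf("approx. allocation rate: %.1f MB/s%n", mbPerSec);
        System.out.println("allocated " + sink.length + " chunks");
    }
}
```

In practice the GC log itself (young-generation occupancy before each collection divided by the interval between collections) gives a far more reliable allocation rate than in-process sampling like this.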
The Performance Engineer's Guide To HotSpot Just-in-Time Compilation – Monica Beckwith
Adaptive compilation and the runtime in the OpenJDK HotSpot VM offer significant performance enhancements for our tools and applications in Java and other JVM languages. Understanding how they work gives developers critical insight into HotSpot JIT compilation and runtime techniques such as vectorization and compressed OOPs, and helps in understanding performance for both client and server applications. We will focus on the internals of OpenJDK 8, the reference implementation for Java SE 8.
The Performance Engineer's Guide to Java (HotSpot) Virtual Machine – Monica Beckwith
Monica Beckwith has worked with the Java Virtual Machine for more than a decade, not just optimizing JVM heuristics but also improving Just-in-Time (JIT) code quality for various processor architectures and working on the garbage collectors to improve garbage collection for server systems.
During this talk, Monica will cover a few JIT and runtime optimizations, then dive into HotSpot garbage collection and provide an overview of the various garbage collectors available in HotSpot.
GC Tuning Confessions Of A Performance Engineer – Monica Beckwith
This document provides an overview of garbage collection (GC) tuning and various GC algorithms used in OpenJDK HotSpot. It discusses key concepts like throughput collectors, latency-sensitive collectors, and the tradeoffs between throughput, latency and footprint. Specifically, it summarizes the Parallel Old, CMS and G1 garbage collectors - their goals, techniques used and failure scenarios. It also covers GC tuning concepts like heap configuration, GC logs and metrics.
Way Improved :) GC Tuning Confessions - presented at JavaOne 2015 – Monica Beckwith
This document provides a summary of a presentation on GC tuning confessions of a performance engineer. It begins with introductions of the speaker and an overview of topics to be covered, including performance engineering, insight into garbage collectors, OpenJDK HotSpot GCs, GC algorithms, key topics, and GC tunables. The document then goes into more detail on various GC algorithms like Parallel GC, CMS GC, and G1 GC. It discusses concepts like promotion failures, concurrent mode failures, incremental compaction, and fragmentation. It also provides examples and explanations of various GC-related log entries.
The Performance Engineer's Guide To (OpenJDK) HotSpot Garbage Collection - Th... – Monica Beckwith
This document provides an overview of garbage collection in the Java Virtual Machine. It discusses key concepts like generational collection, parallel and concurrent marking, and tuning garbage collection for throughput versus latency. Specific collectors like Parallel GC, CMS GC, and G1 GC are explained in terms of their marking and compaction algorithms. Memory tuning recommendations and analyzing garbage collection logs and heap dumps are also covered. The document concludes with a high-level explanation of the Garbage First garbage collector and how it uses region-based heap management.
Garbage First Garbage Collector: Where the Rubber Meets the Road! – Monica Beckwith
This document provides an overview of the Garbage First (G1) garbage collector in Java. It discusses key aspects of G1 including how it divides the heap into regions, handles humongous objects, performs garbage collection phases like marking and compaction, and considerations for tuning G1 performance like managing mixed garbage collections and addressing fragmentation. The document contains detailed explanations, examples, and graphics to illustrate how G1 works.
Garbage First Garbage Collector (G1 GC): Current and Future Adaptability and ... – Monica Beckwith
G1 GC Presentation @ JavaOne 2013
Sneak a peek under the hood of the latest and coolest garbage collector, Garbage-First!
Dive deep into G1's adaptability and ergonomics
Discuss the future of G1's adaptability
GC Tuning Confessions Of A Performance Engineer - Improved :) – Monica Beckwith
The document provides an overview of performance engineering and garbage collection (GC) tuning. It discusses key GC concepts like throughput, latency and footprint tradeoffs. It describes the main GC algorithms in OpenJDK HotSpot - Parallel, CMS and G1 collectors. It highlights promotion and evacuation failures that can occur in these collectors when memory fragmentation is high. It also discusses monitoring and analyzing GC logs to understand GC behavior and tune performance.
In Java 9, the Garbage First garbage collector (G1 GC) will be the default GC. This presentation aims to help HotSpot VM users understand how G1 GC works and offers some tuning advice.
At this year's JavaOne keynote, Mark Reinhold talked about how Java 9 is much bigger than Jigsaw. To put that in numbers: 80+ JEPs bigger! Yes, we see more presentations on Jigsaw, since it brings modularity to the once-monolithic JDK. But what about those other JEPs? One of those "other" JEPs is JEP 143, 'Improve Contended Locking'. Monica will apply her performance engineering approach to JEP 143, using Oracle's Studio Performance Analyzer tool. The crux of the presentation is a comparison of contended-lock performance in JDK 9 versus JDK 8.
Enabling Java: Windows on Arm64 - A Success Story! – Monica Beckwith
The document provides an overview of Microsoft's efforts to port OpenJDK to Windows on Arm64. It discusses the background of OpenJDK and Arm64, details of the port including Windows and Arm64 specific nuances, and the timeline of the port from initial work to current testing and benchmarking efforts. It also introduces the team behind the port and highlights some of their key learnings and accomplishments.
Java Garbage Collectors – Moving to Java7 Garbage First (G1) Collector – Gurpreet Sachdeva
One of the key strengths of the JVM is automatic memory management (garbage collection), and understanding it can help in writing better applications. This becomes all the more important as enterprise server applications carry large amounts of live heap data and many parallel threads. Until recently, the main collectors were the parallel collector and the concurrent mark-sweep (CMS) collector. This presentation introduces the various garbage collectors and compares the CMS collector against its replacement in Java 7, G1, which is characterized by a single contiguous heap split into same-sized regions. In fact, if your application is still running on a 1.5 or 1.6 JVM, a compelling argument for upgrading to Java 7 is to leverage G1.
This document discusses tuning the Java HotSpot G1 garbage collector (GC) for improved performance in big-data applications, using HBase as a case study. It provides background on the G1 GC and describes how tuning the -XX:G1HeapWastePercent flag from the default 5% down to 2% resulted in a 29.3% reduction in total GC pause time, an 18.6% improvement in throughput, and a 15.7% reduction in latency for an HBase workload. The document concludes with contact information for those interested in learning more about G1 GC tuning and contributing to OpenJDK.
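The tuning change described in that case study amounts to a single flag; a sketch of the invocation is below, where the heap sizes and workload jar are placeholders, not details from the deck:

```shell
# Default G1HeapWastePercent is 5; the case study lowered it to 2 so that
# mixed collections keep reclaiming old regions instead of leaving up to
# 5% of the heap as tolerated waste.
java -XX:+UseG1GC \
     -XX:G1HeapWastePercent=2 \
     -Xms8g -Xmx8g \
     -jar hbase-workload.jar
```

Lowering the waste threshold trades more mixed-collection work for less fragmentation, so it is worth verifying the pause-time impact on your own workload before adopting it.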
The document discusses Cassandra metrics and how they have evolved over time. It describes how metrics were initially implemented by adding MBeans to code classes. It then explains how Cassandra adopted the Dropwizard metrics library in version 1.1, which introduced the use of reservoirs that randomly sample data. The document notes limitations of reservoirs and how Cassandra addressed this. It also discusses how metrics data can be stored, including storing all points or pre-computing aggregations in a round-robin database with constant footprint.
Introduction of Java GC Tuning and Java Mission Control – Leon Chen
This document provides an introduction and overview of Java garbage collection (GC) tuning and the Java Mission Control tool. It begins with information about the speaker, Leon Chen, including his background and patents. It then outlines the Java and JVM roadmap and upcoming features. The bulk of the document discusses GC tuning concepts like heap sizing, generation sizing, footprint vs throughput vs latency. It provides examples and recommendations for GC logging, analysis tools like GCViewer and JWorks GC Web. The document is intended to outline Oracle's product direction and future plans for Java GC tuning and tools.
GC Tuning in the HotSpot Java VM - a FISL 10 Presentation – Ludovic Poitou
This document provides a summary of a presentation on garbage collection tuning in the Java HotSpot Virtual Machine. It introduces the presenters and their backgrounds in GC and Java performance. The main points covered are that GC tuning is an art that requires experience, and tuning advice is provided for the young generation, Parallel GC, and Concurrent Mark Sweep GC. Monitoring GC performance and avoiding fragmentation are also discussed.
The document discusses tuning Java for large data workloads. It covers symptoms of memory issues like jobs getting stuck or failing. It then discusses various Java and Hadoop configuration settings to optimize memory usage like mapreduce.child.java.opts and mapreduce.map.memory.mb. Finally, it provides an overview of different garbage collectors in Java and factors like generation sizes and concurrent marking that impact performance.
This document discusses JVM tuning and garbage collection. It provides an overview of key JVM flags and garbage collection strategies. It also describes the IDI test case which includes a swing desktop client, wicket web application, mobile web app in development, multiple application servers and frameworks. Hardware choices like CPU architecture and memory are important considerations for performance. The ideal configuration depends on the application's needs in terms of computational intensity and concurrency.
Kris Mok presented on customizing the JVM at Taobao to improve performance and diagnostics. Some key customizations discussed include:
1) Creating a GC Invisible Heap (GCIH) to improve performance by removing large static data from GC management.
2) Optimizing JNI wrappers to reduce overhead of JNI calls by handling common cases more efficiently.
3) Adding support for new instructions like CRC32c to take advantage of hardware acceleration.
4) Adding flags like -XX:+PrintGCReason to provide more information on the direct causes of GC cycles.
5) Tracking huge object and array allocations to help diagnose out of memory errors.
An introduction to the Garbage First (G1) garbage collector for the JVM. The session covers general GC concepts, the fundamentals of G1, and how to set up and tune the JVM for G1.
It is our experience that 60-70% of all the applications we analyze incur a performance penalty due to how they consume and/or retain memory. Yet if you ask people what their biggest performance headache is, it's unlikely that they'd recognize memory as being #1. I have been involved in analyzing numerous applications for just these kinds of issues, and in this session I will cover the telltale signs and symptoms that your application is one of those suffering from some sort of memory issue. I'll discuss what steps you can take to cure the problem, and will also cover how the JVM can help reduce the memory pressure of your application. What should you do?
The document provides information on tuning the G1 garbage collector in Java. It explains how to enable G1GC, set the heap size and pause time goal. It then describes key aspects of how G1GC works including allocating regions from the free list, performing young garbage collections, tenuring objects, maintaining the remembered set and performing concurrent and full garbage collections.
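The enabling steps that summary describes (turning on G1, fixing the heap size, and setting a pause-time goal) come down to a handful of JVM flags; a sketch, with placeholder heap sizes and application name:

```shell
# Enable G1, pin the heap size, and set a soft pause-time goal.
# MaxGCPauseMillis is a target G1 works toward, not a guarantee.
java -XX:+UseG1GC \
     -Xms4g -Xmx4g \
     -XX:MaxGCPauseMillis=100 \
     -jar app.jar
```

Setting -Xms equal to -Xmx avoids heap resizing during the run, which makes GC behavior easier to reason about when tuning.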
At the beginning of this year, this session’s speakers had an interesting problem: how to take the core engine of their desktop GC analysis tooling and move it to a distributed server environment where they wanted to analyze live traffic from hundreds of JVMs. The framework they finally settled on was Vert.x, an event-driven, nonblocking framework that enabled them to neatly fracture their core engine and distribute it in a matter of a couple of weeks of effort. In this session, they share with you what it took to integrate Vert.x into Censum.
The document discusses the new unified logging system in Java 9 for garbage collection (GC) logs. It introduces the -Xlog:gc option that provides a common interface for GC logging and deprecates old GC logging flags. Examples are provided showing GC logs from both the old and new logging approaches. The key points covered are the unified logging framework in Java 9, how it applies specifically to GC logging, and migrating existing GC logging configurations to use the new -Xlog:gc option.
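A migration of the kind that document describes might look like the following; the flag names are the standard pre- and post-Java 9 ones, while the log file name and decorators are illustrative choices:

```shell
# Pre-Java 9 GC logging flags (deprecated once unified logging arrived):
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -jar app.jar

# Java 9+ equivalent using unified logging: "gc*" selects all gc-tagged
# messages, and the trailing decorators add timestamps, uptime, level, and tags.
java -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar app.jar
```

Running `java -Xlog:help` prints the full unified-logging syntax, which is useful when translating more elaborate legacy logging configurations.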
This document describes designing a real-time heat map service using Apache Storm. It involves collecting check-in data from various locations, geocoding the addresses, building heat maps for time intervals, and persisting the results. The key components are a check-ins spout to generate sample data, geocode lookup bolt to geocode addresses, heat map builder bolt to accumulate locations into intervals and emit maps, and persistor bolt to store results. Stream groupings and parallelism across workers allow the topology to horizontally scale for high throughput processing of location data.
Title: Sista: Improving Cog’s JIT performance
Speaker: Clément Béra
Thu, August 21, 9:45am – 10:30am
Video Part1
https://www.youtube.com/watch?v=X4E_FoLysJg
Video Part2
https://www.youtube.com/watch?v=gZOk3qojoVE
Description
Abstract: Although recent improvements to the Cog VM's performance have made it one of the fastest available Smalltalk virtual machines, the overhead compared to optimized C code remains significant. Efficient industrial object-oriented virtual machines, such as Google Chrome's JavaScript V8 engine and Oracle's Java HotSpot, can reach the performance of optimized C code on many benchmarks thanks to adaptive optimizations performed by their JIT compilers. The VM becomes cleverer: after executing the same portion of code numerous times, it stops execution, looks at what it is doing, and recompiles the critical portions into faster code based on the current environment and previous executions.
Bio: Clément Béra and Eliot Miranda have been working together on Cog's JIT performance for the last year. Clément Béra is a young engineer who has been working on the Pharo team for the past two years. Eliot Miranda is a Smalltalk VM expert who, among other things, implemented Cog's JIT and the Spur memory manager for Cog.
Code examples are available here: https://github.com/ivailo-pashov/jvmmagic
How well do you know what is going inside the JVM? How about its secret backdoors and nasty hacks? Initially they appear as magic tricks but being aware what is going on behind the scenes will save your time when real issues arise.
Production debugging is hard, and it’s getting harder. With architectures becoming more distributed and code more asynchronous and reactive, pinpointing and resolving errors that happen in production is no easy task. This session covers some essential tools and more advanced techniques Scala developers can use to debug live applications and resolve errors quickly. It explores crucial techniques for distributed debugging - and some of the pitfalls that make resolution much harder and can lead to downtime. The talk also touches on some little-known JVM tools and capabilities that give you super-deep visibility at high scale without requiring a restart or an attached debugger.
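The JDK itself ships inspection tools of the "no restart, no debugger" kind that session alludes to; as one hedged illustration (the talk does not name its specific tooling, and `<pid>` is a placeholder for a real process id):

```shell
# List running JVMs and their main classes.
jcmd -l

# Dump all thread stacks from a live process - no restart, no debugger.
jcmd <pid> Thread.print

# Print a summary of current heap usage from the same live process.
jcmd <pid> GC.heap_info
```

Because jcmd attaches over a local mechanism rather than a debug port, it works on processes that were started without any special flags.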
Bytecode manipulation and instrumentation in Java – Alex Moskvin
This document discusses bytecode manipulation in Java. It begins with an overview of bytecode and how it works, then covers how to manipulate bytecode using tools like Javassist. Key reasons for manipulating bytecode are discussed, such as program analysis, class generation, security, and transforming classes without source code. The document proposes building a Java agent that tracks MongoDB operations and exposes metrics via JMX.
Lifecycle of a JIT compiled code by Ivan Krylov
The just-in-time compilers in the Java Virtual Machines make sure that the Java code you wrote runs as fast as possible and that the implicit and explicit checks do not compromise the performance. What are the mechanics of decision making within JITs and what happens when those turn to be wrong? The talk will cover the profile collection path, code transformations and the various interfaces to influence the JIT including the ReadyNow technology from Azul Systems.
The document discusses techniques for enhancing Java code at build time or runtime, including patching third party libraries. It presents AspectJ as a way to patch byte code by defining pointcuts and advice that modify the behavior of methods. The document includes an example of using AspectJ to patch methods in a class with known bugs, replacing the implementation to fix the bugs while ensuring the patches worked correctly through test cases.
This document discusses how the V8 Javascript engine works. It begins by explaining what V8 is and how it can be embedded. It then covers key aspects of V8 including just-in-time compilation, garbage collection, and optimizations. It describes the different memory spaces in V8 like new space and old space. The document concludes by discussing how the presenter has used V8 for virtual reality applications.
How the HotSpot and Graal JVMs execute Java Code by Jim Gough
When Java was released in 1995 it was slow, a reputation it has carried for many years… Today Java can give performance that is comparable to C++ and can emit instructions that are more optimal than code which is statically compiled. But how?
This talk will explore practical examples and the subsystems that are involved in interpreting, compiling and executing a simple Hello World Application. We will dive into JIT compilation and the arrival of the JVM Compiler Interface (JVMCI) to explore how optimisations are applied to boost the performance of our application. We will discuss HotSpot, explore Graal and the JVM ecosystem to discover performance benefits of a platform 25 years in the making.
Code lifecycle in the JVM (TopConf Linz) by Ivan Krylov
This document summarizes the lifecycle of JIT-compiled code in the JVM. It discusses how bytecode is initially interpreted and then compiled by the JIT compiler into machine code. It describes the different compilation tiers (interpreter, C1, C2) and how profiling guides which tier is used. The document outlines what profiles are and how they are used to optimize compilation. It also discusses the deoptimization that occurs if optimization assumptions are invalidated at runtime. Finally, it reviews several ways to control JIT compiler behavior, such as compiler commands, the CompilerControl API, and ReadyNow ahead-of-time compilation.
This document provides an overview of JVM JIT compilers, specifically focusing on the HotSpot JVM compiler. It discusses the differences between static and dynamic compilation, how just-in-time compilation works in the JVM, profiling and optimizations performed by JIT compilers like inlining and devirtualization, and how to monitor the JIT compiler through options like -XX:+PrintCompilation and -XX:+PrintInlining.
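To watch the tiered compilation described above in action, a program only needs a method hot enough to trip the invocation counters. The class name below is invented for illustration; the `-XX:+PrintCompilation` flag is a standard HotSpot diagnostic flag mentioned in the summary.

```java
// Run with: java -XX:+PrintCompilation HotLoop
// After enough iterations the interpreter's invocation and back-edge
// counters trip the compilation thresholds, and log lines naming
// HotLoop::square appear, first for C1 and later for C2.
public class HotLoop {
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        // A million calls is comfortably past the default thresholds.
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```

Without the flag the program runs silently; with it, each compilation event is logged, and adding `-XX:+PrintInlining` (with `-XX:+UnlockDiagnosticVMOptions`) additionally shows inlining decisions.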
Bytecode Manipulation with a Java Agent and Byte Buddy by Koichi Sakata
Oracle Code One 2018, Birds of a Feather (BOF) Session
[BOF5314]
Tuesday, Oct 23, 07:30 PM - 08:15 PM | Moscone West - Room 2005
Have you ever manipulated Java bytecode? There are several bytecode engineering libraries, and Byte Buddy is one of the easiest to use. You can also use Java agents, which build on the Instrumentation class in the java.lang.instrument API. Instrumentation is the addition of bytecode to methods. Because the changes are purely additive, a Java agent does not modify application state or behavior. With Byte Buddy and a Java agent, we can add behaviors to existing classes. This session explains what Java agents and the instrumentation API are, introduces Byte Buddy, and presents sample code that uses a Java agent and Byte Buddy to modify behavior. The presentation will be useful for those who want to start manipulating Java bytecode with Byte Buddy.
The document discusses garbage collection tuning for Apache Cassandra. It provides an overview of the G1 garbage collector, how it works, when to use it, and how to monitor and tune it. Specific topics covered include young generation and old generation garbage collection, monitoring using JMX and tools like Swiss Java Knife, and interpreting garbage collection logs.
SPARKNaCl: A verified, fast cryptographic library by AdaCore
SPARKNaCl https://github.com/rod-chapman/SPARKNaCl is a new, freely-available, verified and fast reference implementation of the NaCl cryptographic API, based on the TweetNaCl distribution. It has a fully automated, complete and sound proof of type-safety and several key correctness properties. In addition, the code is surprisingly fast - out-performing TweetNaCl's C implementation on an Ed25519 Sign operation by a factor of 3 at all optimisation levels on a 32-bit RISC-V bare-metal machine. This talk will concentrate on how "Proof Driven Optimisation" can result in code that is both correct and fast.
Nowadays people usually talk about big data, the Internet of Things, and other buzzwords at conferences. However, developers sometimes fail to pay enough attention to core topics such as garbage collection. After short discussions with many reasonably experienced Java developers, I came to the conclusion that most of them do not know how many garbage collectors there are in the latest JVM, or under what circumstances each of them should be enabled. This presentation aims to improve or refresh people’s knowledge of this core topic, and shares a real case in which it helped us resolve a production issue.
Triton and Symbolic Execution on GDB (DEF CON China) by Wei-Bo Chen
Triton and Symbolic execution on GDB is a talk about symbolic execution. It introduces Triton, a symbolic execution framework, and SymGDB, which combines Triton with GDB to enable symbolic execution directly within the GDB debugger. Key points include how Triton works, its components like the symbolic execution engine and AST representations, and how SymGDB utilizes the GDB Python API to synchronize Triton's state with the debugged program and allow symbolic analysis of programs within GDB. Examples demonstrating SymGDB on crackme programs that take symbolic user input are also presented.
Performance of Java 8 and beyond by Jeroen Borgers (NLJUG)
We all know that the biggest improvement Java 8 brings is support for lambda expressions, which introduces functional programming to Java. The addition of the Stream API makes this improvement even bigger: iteration can now be handled internally by a library, so you can apply the principle "Tell, don't ask" to collections. You can simply say that a function should be executed on your collection, or that this should happen in parallel, across multiple cores. But what does this mean for the performance of our Java applications? Can we now immediately use all our CPU cores to get better response times? How exactly do filter/map/reduce and parallel streams work internally? How is the Fork/Join framework used in this? Are lambdas faster than inner classes? All these questions are answered in this session. In addition, Java 8 introduces more performance improvements: tiered compilation, PermGen removal, java.time, Accumulators, Adders, and Map improvements. Finally, we will also take a look behind the scenes at the performance improvements planned for Java 9: use of GPUs, Value Types, and Arrays 2.0.
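The "tell, don't ask" stream style described above can be sketched with a small filter-free map/reduce pipeline; switching the reduction onto the Fork/Join common pool takes a single `parallel()` call. The class and method names here are illustrative only.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamDemo {
    // "Tell" the library what to compute: squares of 1..n, collected to a list.
    static List<Integer> squares(int n) {
        return IntStream.rangeClosed(1, n)
                .map(i -> i * i)
                .boxed()
                .collect(Collectors.toList());
    }

    // The same pipeline shape, but the reduction is fanned out across
    // cores by the Fork/Join common pool.
    static int parallelSum(int n) {
        return IntStream.rangeClosed(1, n)
                .parallel()
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(squares(5));       // [1, 4, 9, 16, 25]
        System.out.println(parallelSum(100)); // 5050
    }
}
```

Note that `parallel()` is only a hint that splitting is allowed; whether it actually helps depends on the size of the data and the cost per element.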
The document summarizes upcoming features and enhancements in Java 8, including project Jigsaw for modules, the Nashorn JavaScript engine, JVM convergence between HotSpot and JRockit, lambda expressions, and functional updates to core Java collections. It also discusses design decisions around lambda translation using invokedynamic and the benefits this approach provides.
ChainerCV is a computer vision library built on top of the Chainer deep learning framework. It provides implementations of popular models for feature extraction, object detection, semantic segmentation, and instance segmentation. ChainerCV aims to reproduce the performance of models from original papers and standardizes APIs across models. It also includes useful functions, example codes, documentation, and tests to make it easy to install, use, and maintain.
This 4-day training course covers JVM and Java performance tuning. The course will cover topics such as garbage collection, JVM architecture, monitoring and profiling tools, class loading, and Java NIO. It is intended for Java developers and consultants interested in performance testing and optimization. Participants should have experience with Java and distributed technologies. The training will include lectures, illustrations of concepts, and a case study to integrate the lessons.
This document provides an overview of key concepts in the Java Virtual Machine (JVM) including memory management, garbage collection techniques, class loading, execution engines, multi-threading, and the Java Native Interface (JNI). It discusses topics such as heap vs non-heap memory, minor vs major garbage collection, just-in-time compilation, safe points, and how to call native methods from Java using JNI.
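The heap versus non-heap split mentioned above is directly observable from a running JVM through the standard management beans. This is a minimal sketch using only `java.lang.management`; the class name is invented for illustration.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryInspect {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        // Heap: where ordinary object allocations live and where the
        // garbage collector operates.
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // Non-heap: metaspace, the JIT code cache, and other JVM-internal
        // storage that is managed outside the garbage-collected heap.
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        System.out.println("heap used bytes:     " + heap.getUsed());
        System.out.println("non-heap used bytes: " + nonHeap.getUsed());
    }
}
```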
Similar to Game of Performance: A Song of JIT and GC
Since JDK 9, G1 GC has been the default garbage collector (JEP 248). Up until 2017, Monica shared G1 GC details, performance tips, and optimizations that help G1's adaptiveness, and provided advice on manual tuning.
Since then, G1 has come a long way in improving its adaptiveness. In this session, Monica will cover the most important optimizations delivered from JDK 9 onward and how they can improve your application's responsiveness and throughput and help reduce its footprint. This is a saga that involves various regions and generations (all puns intended).
Are your application's tail-latencies holding it back from delivering its near-real time SLOs? Do your in-memory processing platform's long pauses only get worse with increasing heap sizes? How about those latency spikes causing variability in your end-to-end latency for your multi-tiered distributed systems?
If any of the above keep you up at night, then have no fear as Z Garbage Collector (GC) is here and is production ready in JDK 15.
In this talk, Monica Beckwith will cover the basics of Z GC and contrast it with G1 GC (the current default collector for OpenJDK JDK 11 LTS and tip).
This is part 1 in a series of talks covering Padawan Monica Beckwith’s hands-on practical experience over the last two decades. Monica, who has trained with many Knights and a few Masters, will cover what it means to be sympathetic to the underlying hardware in Scaling Up and Scaling Out scenarios. In addition, she will share examples to put cloud performance into perspective.
Applying Concurrency Cookbook Recipes to SPEC JBBMonica Beckwith
The document discusses memory models and performance analysis using litmus tests and Java Micro-Benchmark Harness (JMH). It explains that most hardware allows more reordering than sequential consistency for performance reasons. Litmus tests can check allowed reorderings on different hardware. Barriers like DMB can restore ordering but hurt performance. Alternatives like acquire-release pairs using LDAR/STLR can provide ordering with less overhead. JMH benchmarks show STLR+LDAR volatile accesses have lower overhead than DMB barriers.
The document discusses garbage collection techniques in Java. It begins with an overview of manual memory management and the problems it poses, such as memory leaks and dangling pointers. It then covers automatic memory management techniques used in Java, including garbage collection algorithms like reference counting and tracing. Specific algorithms used in OpenJDK's HotSpot virtual machine are examined, such as the G1, Shenandoah, and Z garbage collectors. Key aspects of garbage collection like heap layout, generations, and concurrent versus stop-the-world collection are summarized.
This document provides an overview and comparison of the algorithms, phases, and commonalities of modern concurrent garbage collectors in HotSpot, including G1, Shenandoah, and Z GC. It begins with laying the groundwork on stop-the-world vs concurrent collection and heap layout. It then introduces the key differences between the three collectors in their marking, barrier, and compaction approaches. The goal of the document is to provide a technical introduction and high-level differences between these concurrent garbage collectors.
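Which of these collectors a given JVM run is actually using can be checked from inside the process: each active collector registers a `GarbageCollectorMXBean` whose name identifies it (for example "G1 Young Generation" under G1). A short sketch, with an invented class name:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInspect {
    public static void main(String[] args) {
        // Each collector in the running JVM exposes a management bean with
        // its name, cumulative collection count, and accumulated pause time.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Running the same program under `-XX:+UseG1GC`, `-XX:+UseShenandoahGC`, or `-XX:+UseZGC` changes the bean names accordingly, which makes this a quick sanity check that the flag you set actually took effect.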
Managed runtime performance expert, Monica Beckwith will divulge her survival guide which is essential for any application performance engineer. Following simple rules and performance engineering patterns will make you and your stakeholders happy.
Garbage First Garbage Collector (G1 GC) - Migration to, Expectations and Adva... by Monica Beckwith
Learn what you need to know to experience nirvana when evaluating G1 GC, whether you are migrating from Parallel GC or from CMS GC.
You also get a walk-through of some case study data.
Game of Performance: A Song of JIT and GC
1. A Song of JIT and GC
QCon London 2016
Monica Beckwith
monica@codekaram.com; @mon_beck
https://www.linkedin.com/in/monicabeckwith
www.codekaram.com
Game of Performance