1. The document discusses using Apache Sling for event-driven architectures and job processing in AEM. It covers OSGi event administration, Sling job handling, discovery, and offloading jobs between instances.
2. Key components discussed include the Sling job manager, job consumers, queues, discovery for topology information, and the offloading framework for distributing jobs between clustered and non-clustered instances.
3. The offloading browser in AEM's UI is covered as a way to configure job topics, instances, and distribution for clustered and non-clustered installations.
Inter-Sling communication with message queue - Tomasz Rękawek
Sling instances tend to live in herds. Synchronizing content among them is quite easy, as the whole repository is available via the REST API. However, sending control messages ("create a user", "give me a list of modified pages") may be tricky. There is a better way than implementing dozens of servlets or event listeners: it is a message queue. During the presentation we will show how to integrate ActiveMQ with Sling and how to use it to exchange messages between instances. We will also present some real-world use cases.
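The publish/consume pattern described above can be sketched in a few lines. This is an illustrative stdlib-only sketch: a `BlockingQueue` stands in for the ActiveMQ broker, and the message text is made up; a real deployment would open a JMS session to a shared broker.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: an in-memory BlockingQueue stands in for an ActiveMQ
// queue. In a real Sling setup, each instance would hold a JMS session to a
// shared broker and exchange serialized control messages instead.
public class ControlMessageDemo {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // One instance publishes a control message...
    static void publish(String command) throws InterruptedException {
        queue.put(command);
    }

    // ...and another instance consumes and reacts to it.
    static String consume() throws InterruptedException {
        String command = queue.take();
        return "handled: " + command;
    }

    public static void main(String[] args) throws InterruptedException {
        publish("create a user");
        System.out.println(consume());
    }
}
```

The point of the queue is exactly what the abstract argues: one broker connection replaces dozens of point-to-point servlets or event listeners.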
Distributed Eventing in OSGi - Carsten Ziegeler
OSGi Community Event 2013 (http://www.osgi.org/CommunityEvent2013/Schedule)
ABSTRACT
One of the major topics the OSGi Alliance is working on is a proposal for distributed eventing, especially in the cloud. This session starts with an overview of the current state in the Alliance and then shows already available solutions from the Apache Sling open source project. This includes distributing events through Event Admin and controlled processing of events by exactly one processor in distributed installations. The current implementations will be set in context with the ongoing activities in the Alliance.
SPEAKER BIO
Carsten Ziegeler is a senior developer at Adobe Research Switzerland and spends most of his time on architectural and infrastructure topics. Having worked for over 25 years on open source projects, Carsten is a member of the Apache Software Foundation and heavily participates in several Apache communities, including Sling, Felix, and ACE. He is a frequent speaker at technology and open source conferences and participates in the OSGi Core Platform and Enterprise expert groups.
Getting Started with Apache Camel at DevNation 2014 - Claus Ibsen
Get off to a good start with Apache Camel. This session will give you an introduction to Apache Camel and teach you:
- How Camel is related to enterprise integration patterns (EIPs).
- How to use EIPs in Camel routes written in Java code or XML files.
- How to get started developing with Camel, including how to set up new projects from scratch using Maven and Eclipse.
- With a live demo, how to build Camel applications in Java, Spring, and OSGi Blueprint.
- How ready-to-use features make integration much easier.
- About the web console tools that give you insight into your running Apache Camel applications, including visual route diagrams with tracing, debugging, and profiling capabilities.
- Useful resources to learn more about Camel.
This session will be taught with a 50/50 mix of slides and live demos, and it will conclude with Q&A time.
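To make the EIP bullet concrete: the content-based router is one of the simplest patterns, which Camel's Java DSL expresses as `from(...).choice().when(...).to(...)`. Below is a plain-Java sketch of the same idea with no Camel dependency; the "type" header and the endpoint names are invented for illustration.

```java
import java.util.Map;
import java.util.function.Predicate;

// Plain-Java sketch of the content-based router EIP that Camel's
// choice()/when() DSL implements. Header names and destinations are made up.
public class ContentBasedRouter {
    static String route(Map<String, String> headers) {
        Predicate<Map<String, String>> isOrder = h -> "order".equals(h.get("type"));
        // Rough Camel equivalent:
        //   from("jms:in").choice()
        //     .when(header("type").isEqualTo("order")).to("jms:orders")
        //     .otherwise().to("jms:other");
        return isOrder.test(headers) ? "jms:orders" : "jms:other";
    }
}
```

In Camel the same decision lives declaratively in the route, so the routing logic stays separate from the business code.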
Integration using Apache Camel and Groovy - Claus Ibsen
Apache Camel is a versatile integration library that supports a huge number of components, enterprise integration patterns, and programming languages.
In this talk I first introduce you to Apache Camel and its concepts. Then we move on to see how you can use the Groovy programming language with Camel as a first-class Groovy DSL to build integration flows.
You will also learn how to build a new Camel and Groovy app from scratch in a live demo.
We also touch on how you can use Camel from Grails using the grails-camel plugin.
I will also show the web console tools that give you insight into your running Apache Camel applications, including visual route diagrams with tracing, debugging, and profiling capabilities.
This session will be taught with a 50/50 mix of slides and live demos, and it will conclude with Q&A time.
Getting Started with Apache Camel - Devconf Conference - February 2013 - Claus Ibsen
This session will teach you how to get a good start with Apache Camel.
We will introduce you to Apache Camel and how Camel is related to Enterprise Integration Patterns, and show how you go about using these patterns in Camel routes written in Java code or XML files.
We will then discuss how you can get started developing with Camel, and how to set up a new project from scratch using Maven and Eclipse tooling. This session includes live demos that show how to build Camel applications in Java, Spring, OSGi Blueprint, and alternative languages such as Scala and Groovy.
You will also hear what other features Camel provides out of the box, which can make integration much easier for you.
At the end we demonstrate how to build custom components, allowing you to build custom adapters if not already provided by Camel.
Before opening up for Q&A, we will share useful links where you can dive into learning more about Camel.
Developing Java based microservices ready for the world of containers - Claus Ibsen
The so-called experts are saying microservices and containers will change the way we build, maintain, operate, and integrate applications. This talk is intended for Java developers who want to hear and see how you can develop Java microservices that are ready to run in containers.
In this talk we will build a set of Java based Microservices that uses a mix of technologies with:
- Spring Boot with Apache Camel
- Apache Tomcat with Apache Camel
You will see how we can build small, discrete microservices with these Java technologies and deploy them on the Kubernetes/OpenShift 3 container platform.
We will discuss practices for building distributed and fault-tolerant microservices using technologies such as Kubernetes Services, Camel EIPs, Netflix Hystrix, and Ribbon.
We will use Zipkin service tracing across all four Java-based microservices to provide a visualization of timings and help highlight latency problems in our mesh of microservices.
The self-healing and fault-tolerance aspects of the Kubernetes/OpenShift 3 platform are also discussed and demoed as we let the chaos monkeys loose killing containers.
This talk is a 50/50 mix between slides and demo.
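The fault-tolerance patterns named above can be approximated in a few lines. Here is a toy circuit breaker, stdlib-only and illustrative: after a threshold of consecutive failures it short-circuits to a fallback. Hystrix adds thread isolation, timeouts, metrics, and timed half-open recovery on top of this core idea.

```java
import java.util.function.Supplier;

// Toy circuit breaker: after `threshold` consecutive failures the breaker
// opens and every call goes straight to the fallback. A success while the
// breaker is still closed resets the failure count.
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    public <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get();           // open: short-circuit the call
        }
        try {
            T result = action.get();
            consecutiveFailures = 0;         // success closes the breaker
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;           // failure counts toward opening
            return fallback.get();
        }
    }
}
```

The payoff in a microservice mesh is that a slow or dead downstream service fails fast instead of tying up caller threads.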
Getting started with Apache Camel - Javagruppen Copenhagen - April 2014 - Claus Ibsen
This session will teach you how to get a good start with Apache Camel.
We will introduce you to Apache Camel and how Camel is related to Enterprise Integration Patterns, and show how you go about using these patterns in Camel routes written in Java code or XML files.
We will then discuss how you can get started developing with Camel, and how to set up a new project from scratch using Maven and Eclipse tooling.
This session includes live demos that show how to build Camel applications in Java, Spring, OSGi Blueprint and alternative languages such as Scala and Groovy.
You will also hear what other features Camel provides out of the box, which can make integration much easier for you.
At the end we demonstrate how to build custom components, allowing you to build custom adapters if not already provided by Camel.
Before opening up for Q&A, we will share useful links where you can dive into learning more about Camel.
We start with an introduction to what Apache Camel is, and how you can use Camel to make integration much easier, allowing you to focus on your business logic rather than low-level messaging protocols and transports.
You will hear how Apache Camel is related to Enterprise Integration Patterns, which you can use in your architectural designs as well as in Java or XML code running on the JVM with Camel.
You will also hear what other features Camel provides out of the box, which can make integration much easier for you.
We also take a moment to look at web console tooling that gives you insight into your running Apache Camel applications, including visual route diagrams with tracing/debugging and profiling capabilities. In addition to the web tooling we will also show you other tools in the making.
This talk was presented at JDKIO on September 13th 2016.
Developing Java based microservices ready for the world of containers - Claus Ibsen
The so-called experts are saying microservices and containers will change the way we build, maintain, operate, and integrate applications. This talk is intended for Java developers who want to hear and see how you can develop Java microservices that are ready to run in containers.
In this talk we will build a set of Java-based microservices using a mix of technologies: Apache Camel, Spring Boot, and WildFly Swarm.
You will see how we can build small, discrete microservices with these Java technologies and deploy them on the Kubernetes container platform.
We will discuss practices for building distributed and fault-tolerant microservices using technologies such as Kubernetes Services, Camel EIPs, and Netflix Hystrix.
The self-healing and fault-tolerance aspects of the Kubernetes platform are also discussed and demoed as we let the chaos monkeys loose killing containers.
This talk is a 50/50 mix between slides and demo.
The talk was presented at JDKIO on September 13th 2016.
We start with an introduction to what Apache Camel is, and how you can use Camel to make integration much easier, allowing you to focus on your business logic rather than low-level messaging protocols and transports. You will also hear what other features Camel provides out of the box, which can make integration much easier for you.
We look into web console tooling that gives you insight into your running Apache Camel applications, including visual route diagrams with tracing/debugging and profiling capabilities. In addition to the web tooling we will also show you other tools in the making.
ApacheCon EU 2016 - Apache Camel the integration libraryClaus Ibsen
This presentation will demonstrate to developers involved with integration how the Apache Camel project can make your life much easier.
We start with an introduction to what Apache Camel is, and how you can use Camel to make integration much easier, allowing you to focus on your business logic rather than low-level messaging protocols and transports.
You will hear how Apache Camel is related to Enterprise Integration Patterns, which you can use in your architectural designs as well as in Java or XML code running on the JVM with Camel.
You will also hear what other features Camel provides out of the box, which can make integration much easier for you.
Apache Camel Introduction & What's in the box - Claus Ibsen
Slides from JavaBin talk in Grimstad Norway, presented by Claus Ibsen in February 2016.
This slide deck is fully up to date with the latest Apache Camel 2.16.2 release and includes additional slides presenting many of the features that Apache Camel provides out of the box.
A 2-hour session where I cover what Apache Camel is, the latest news on the upcoming Camel v3, and then the main topic of the talk: the new Camel K sub-project for running integrations natively in the cloud with Kubernetes. The last part of the talk is about running Camel with GraalVM / Quarkus to achieve natively compiled binaries with impressive startup time and footprint.
Stream processing from single node to a cluster - Gal Marder
Building data pipelines shouldn't be so hard; you just need to choose the right tools for the task.
We will review Akka and Spark Streaming: how they work, how to use them, and when.
Java 9 is just around the corner. In this session, we'll describe the new modularization support (Jigsaw), new JDK tools, enhanced APIs and many performance improvements that were added to the new version.
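The Jigsaw modularization mentioned above centres on a `module-info.java` descriptor at the module root. A minimal, hypothetical example (the module and package names are invented for illustration):

```java
// module-info.java: declares what the module requires and what it exports.
module com.example.orders {
    requires java.sql;              // depend on a platform module
    exports com.example.orders.api; // only this package is visible to consumers
}
```

Everything not exported is strongly encapsulated, which is the main behavioural change Jigsaw brings over the classpath.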
To build up any non-trivial business processing, you may have to connect systems that are exposed by web-services, fire off events over message queues, notify users via email or social networking, and much more.
Apache Camel is a lightweight integration framework that helps you connect systems in a consistent and reliable way. Focus on the business reasons behind what's being integrated, not the underlying details of how.
Apache development with GitHub and Travis CI - Jukka Zitting
Much of the recent innovation in development tooling has happened around Git-based cloud services like GitHub and Travis CI. While these services are not part of the official Apache infrastructure, it's still possible to use them to complement the tooling available to Apache projects. Based on experience from Apache Jackrabbit, this presentation shows how to leverage such external services while staying true to Apache principles and policies.
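For a Maven-built project like Jackrabbit, the Travis CI hook is a small `.travis.yml` at the repository root. A minimal, hypothetical configuration of the era:

```yaml
# .travis.yml: build every push and pull request with Maven on JDK 8.
language: java
jdk:
  - openjdk8
script: mvn -B verify
```

Because the file lives in the Git mirror, the external service complements rather than replaces the official Apache infrastructure.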
Camel Day Italy 2021 - What's new in Camel 3 - Claus Ibsen
Slides for the 50 min presentation at Camel Day Italy 2021, where Claus Ibsen and Andrea Cosentino had the opportunity to give a deeper-dive talk about the journey towards Camel 3, and what we have done to re-architect Camel core in v3 to make it awesome for microservices, cloud native, Kubernetes, Quarkus, GraalVM, Knative, and Apache Kafka.
Camel Day Italy 2021: https://www.meetup.com/it-IT/red-hat-developers-italy/events/275332376/
[Alibaba Cloud Singapore Community Meetup Webinar, 3 Sep 2020] Automate Your ... - Vinod Narayanankutty
In this webinar, I discuss the benefits of “Infrastructure-as-code” and how you can automate your cloud infrastructure deployments. We did a deep dive into Terraform, a leading solution and demonstrated how it enables the creation of reproducible infrastructure and accelerates productivity for infrastructure deployments on Alibaba Cloud. I also explored how to scale deployment for other use cases such as Disaster Recovery and Multi-cloud Deployment.
Lightbend Lagom: Microservices Just Right - mircodotta
Microservices architectures are becoming a de facto industry standard, but are you satisfied with the current state of the art? We are not, as we believe that building microservices today is more challenging than it should be. Lagom is here to take on this challenge. First, Lagom is opinionated and will take some of the hard decisions for you, guiding you to produce microservices that adhere to the Reactive tenets. Second, Lagom was built from the ground up around you, the developer, to push your productivity to the next level. If you are familiar with the Play Framework's development environment, imagine that but tuned for building microservices; we are sure you are going to love it! Third, Lagom comes with batteries included for deploying to production: going from development to production could not be easier. In this session, you will get an introduction to the Lightbend Lagom framework. There will be code and live demos to show you in practice how it works and what you can do with it, making you fully equipped to build your next microservices with Lightbend Lagom!
Using Groovy? Got lots of stuff to do at the same time? Then you need to take a look at GPars (“Jeepers!”), a library providing support for concurrency and parallelism in Groovy. GPars brings powerful concurrency models from other languages to Groovy and makes them easy to use with custom DSLs:
- Actors (Erlang and Scala)
- Dataflow (Io)
- Fork/join (Java)
- Agent (Clojure agents)
In addition to this support, GPars integrates with standard Groovy frameworks like Grails and Griffon.
Background, comparisons to other languages, and motivating examples will be given for the major GPars features.
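The actor model that GPars borrows from Erlang and Scala boils down to a mailbox plus a single-threaded message loop. A stdlib-only Java sketch of that core guarantee (GPars gives you this, and the other models above, as ready-made Groovy DSLs):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal actor: one mailbox, one thread draining it, so the message handler
// never runs concurrently with itself -- the core guarantee actors provide.
public class TinyActor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final StringBuilder log = new StringBuilder();
    private final Thread loop = new Thread(() -> {
        try {
            while (true) {
                String msg = mailbox.take();     // one message at a time, in order
                if (msg.equals("stop")) return;  // poison pill shuts the actor down
                log.append(msg).append(';');
            }
        } catch (InterruptedException ignored) { }
    });

    public TinyActor() { loop.start(); }

    // Sending is asynchronous: the caller never blocks on the handler.
    public void send(String msg) { mailbox.add(msg); }

    public String stopAndDump() throws InterruptedException {
        mailbox.add("stop");
        loop.join();                             // join gives safe visibility of log
        return log.toString();
    }
}
```

Because all state lives behind the mailbox, no locks are needed, which is exactly the appeal of the actor and agent models in GPars.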
CIRCUIT 2015 - 10 Things Apache Sling Can Do - ICF CIRCUIT
Presented by Carsten Ziegeler & David Bosschaert from Adobe
Apache Sling is the underlying web framework for Adobe AEM. While the main concept of resource handling is well known, the project contains some hidden gems. Learn some fun facts about the open source project together with very valuable insight into important bits and pieces making the life of an application developer easier. This is a developer focused journey into the "secrets" of Apache Sling.
Performance of Java 8 and beyond - Jeroen Borgers (NLJUG)
We all know that the biggest improvement Java 8 brings is support for lambda expressions, which introduces functional programming to Java. The addition of the Stream API makes this improvement even bigger: iteration can now be handled internally by a library, so you can apply the principle of "Tell, don't ask" to collections. You simply tell it to apply a function to your collection, or tell it to do so in parallel across multiple cores. But what does this mean for the performance of our Java applications? Can we now immediately use all our CPU cores to get better response times? How exactly do filter/map/reduce and parallel streams work internally? How is the Fork/Join framework used in this? Are lambdas faster than inner classes? All these questions are answered in this session. Java 8 also introduces further performance improvements: tiered compilation, PermGen removal, java.time, accumulators, adders, and Map improvements. Finally, we will take a peek at the planned performance improvements for Java 9: use of GPUs, value types, and Arrays 2.0.
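The filter/map/reduce pipeline and its parallel variant from the abstract, as a concrete snippet. Whether `parallelStream()` actually wins here depends on data size, per-element cost, and the shared fork/join pool, which is precisely the question the talk examines.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamDemo {
    public static void main(String[] args) {
        List<Integer> numbers =
                IntStream.rangeClosed(1, 100).boxed().collect(Collectors.toList());

        // Sequential: the library iterates internally ("tell, don't ask").
        int sumOfEvenSquares = numbers.stream()
                .filter(n -> n % 2 == 0)
                .mapToInt(n -> n * n)
                .sum();

        // Parallel: the same pipeline, split across cores via fork/join.
        int parallelSum = numbers.parallelStream()
                .filter(n -> n % 2 == 0)
                .mapToInt(n -> n * n)
                .sum();

        System.out.println(sumOfEvenSquares + " " + parallelSum); // equal results
    }
}
```

The two results are identical because `sum` is associative; only the scheduling differs.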
Exploring Java Heap Dumps (Oracle Code One 2018)Ryan Cuprak
Memory leaks are not always simple or easy to find. Heap dumps from production systems are often gigantic (4+ gigs) with millions of objects in memory. Simple spot checking with traditional tools is woefully inadequate in these situations, especially with real data. Leaks can be entire object graphs with enormous amounts of noise. This session will show you how to build custom tools using the Apache NetBeans Profiler/Heapwalker APIs. Using these APIs, you can read and analyze Java heaps programmatically to ask really hard questions. This gives you the power to analyze complex object graphs with tens of thousands of objects in seconds.
JIT vs. AOT: Unity And Conflict of Dynamic and Static Compilers Nikita Lipsky
Java had been constantly criticized for poor performance ever since its inception, but not so much in recent years. Thanks to optimizing dynamic native code compilers, Java performance today is very close to the performance of low level languages such as C/C++, and is even better on some classes of applications. Along with dynamic compilers, static compilers for Java have been evolving as well, so there is still no clear winner among these two approaches. It should then come as no surprise that an AOT compiler is finally going to appear even in the HotSpot JVM and OpenJDK via JEP-295, which is officially included in Java 9.
Her, I would like to dispel common myths around the old dispute on whether dynamic or static compilation is better, show that both approaches have their strengths and weaknesses, and explain why the future is the hybrid approach.
Apache Sqoop: Unlocking Hadoop for Your Relational Database huguk
Kathleen Ting, Technical Account Manager @ Cloudera and Sqoop Committer
Unlocking data stored in an organization's RDBMS and transferring it to Apache Hadoop is a major concern in the big data industry. Apache Sqoop enables users with information stored in existing SQL tables to use new analytic tools like Apache HBase and Apache Hive. This talk will go over how to deploy and apply Sqoop in your environment as well as transferring data from MySQL, Oracle, PostgreSQL, SQL Server, Netezza, Teradata, and other relational systems. In addition, we'll show you how to keep table data and Hadoop in sync by importing data incrementally as well as how to customize transferred data by calling various database functions.
Spring Day | Spring and Scala | Eberhard WolffJAX London
2011-10-31 | 09:45 AM - 10:30 AM
Spring is widely used in the Java world - but does it make any sense to combine it with Scala? This talk gives an answer and shows how and why Spring is useful in the Scala world. All areas of Spring such as Dependency Injection, Aspect-Oriented Programming and the Portable Service Abstraction as well as Spring MVC are covered.
Project "Orleans" is an Actor Model framework from Microsoft Research that is currently in public preview. It is designed to make it easy for .NET developers to develop and deploy an actor-based distributed system into Microsoft Azure.
Can puppet help you run docker on a T2.Micro?Neil Millard
A puppet beginners guide through a number of the key concepts of puppet; stages, Role and profile, hiera data and puppet forge, as well as a brief introduction to Docker.
We will use these to explain a solution of running a puppet manifest to configure Amazon's smallest server to run a docker containerised web service.
You will learn why puppet stages are required in this solution, how roles and profiles are defined and used, and finally use of the puppet Forge with Hiera data to install and run docker containers.
This talk will contain links to code that can be used afterwards and we'll touch on what docker is and how to configure the puppet module to automatically run containers.
Transactional writes to cloud storage with Eric LiangDatabricks
We will discuss the three dimensions to evaluate HDFS to S3: cost, SLAs (availability and durability), and performance. He then provided a deep dive on the challenges in writing to Cloud storage with Apache Spark and shared transactional commit benchmarks on Databricks I/O (DBIO) compared to Hadoop.
Empowering developers to deploy their own data storesTomas Doran
Empowering developers to deploy their own data stores using Terrafom, Puppet and rage. A talk about automating server building and configuration for Elasticsearch clusters, using Hashicorp and puppet labs tool. Presented at Config Management Camp 2016 in Ghent
Similar to EVOLVE'13 | Enhance | Eventing to job Processing | Carsten Zeigler (20)
Extending Adobe Experience Manager with custom solutions that meet your unique business needs has never been easier. Learn how Adobe I/O developer tools, including Adobe I/O Runtime and Adobe I/O Events can be leveraged to deliver timely, targeted, personalized and effective customer experiences.
Adobe Asset Link (AAL) is the new solution to seamless linking of AEM Assets with Creative Cloud products. This session talks about the common use cases where AAL would be the right choice and also provides details around some of the most common pitfalls to avoid when implementing AAL.
AEM is content-centric, so is the future of building commerce experiences. In this session, you will be shown how to build modern commerce experiences with AEM. The demo will explain how authors create/configure multiple (industry-independent) stores, configure the commerce environment for each store and manage all the commerce content and features, without writing a line of code. The second part will demonstrate how developers create templates, components, and functionality to build a compelling Web/User/Commerce Experience.
Rolling out AEM Site or Assets? Learn how to structure your deployment to maximize your return while reducing risk. See how to overdeliver while hitting aggressive timelines. Understand how to generate excitement that fuels user adoption and sets you up for success.
The roles of the Product Owner, Business Analyst and/or Subject Matter Expert are crucial to the success of an AEM project, especially at critical times. From the development team’s perspective leveraging these resources during kickoff can set the project up for success. Hear more about the right resourcing and preparation for kickoff can enable development teams to start a project off right and to avoid costly changes (scope increase or rework) later in the project.
In this session, attendees will learn about key take-aways from a recent interactive round table hosted by Translations.com and Adobe with their shared customers, Lavazza, Western Digital, Lufthansa, and Honeywell. As the $800M leader of their industry, Translations.com will also share trends in translations they are seeing across their 95+ Adobe Experience Manager customers. Bring your burning localization related questions to this interactive session.
When Furniture Row decided to leave their digital assets management provider to go to AEM, they began a multi-phased journey that has resulted in the transition of their eCommerce platform and content management system. They recently launch a newly redesigned DenverMattress.com site which introduces a headless implementation of AEM sites, a new authoring experience for their content team, and an upgrade from a freestanding instance of Scene7 Classic to Dynamic Media integrated with AEM Assets. Hear from the implementation team and learn more about Furniture Row’s digital evolution.
Today’s customers expect relevant and personalized engagement with brands – or they go elsewhere. In this session, Carl will lay out some of the hurdles involved in crafting a customer- and loyalty-forward data management and architectural strategy. Using examples from specific client engagements, he will outline approaches to building an actionable data and technology stack on which teams can build and extend personalized interactions.
Autodesk cut their teeth on AEM in 2013 with Autodesk.com. It's safe to say they've come a long way since then. Join Sharat Radhakrishnan and his gang as they bring us up to speed on their wild AEM journey.
Want to make sure your scope is accurate? How do you dissect requirements to meet your implementation needs? Learn the pitfalls, how to plan MVP projects and what it takes to dig deep and find success when you start your AEM projects.
Get a glimpse into the highly competitive AEM talent market, Dave's journey as an entrepreneur and a little known secret that can help managers better understand the phycological needs of their team members and drastically increase their retention.
Understand concepts around Deep Learning, Machine Learning, Pattern Recognition and more. See AEM scenarios powered with Adobe Sensei. Understand the latest roadmap on AEM and Sensei.
AEM is an investment in the future so it's no surprise that architecting flexible and forward thinking is a must. See how to take an enterprise approach to your AEM architecture that supports globalization, extreme personalization, and omnichannel delivery.
Adobe AEM Managed Services started deploying Production AEM workloads on Azure in Nov 2017. In this session, we will share our learnings and offer advice to those thinking about deploying their AEM workloads on Azure.
Learn how to create omnichannel experiences using Adobe Experience Manager where you manage the content once and deliver across channels like Web, SPA, Mobile, Chatbot, Voice and Email.
Everyone wants to see their project launch successfully. In this session learn about the roles, processes, and tools that are critical to every project.
More from Evolve The Adobe Digital Marketing Community (20)
Company Valuation webinar series - Tuesday, 4 June 2024FelixPerez547899
This session provided an update as to the latest valuation data in the UK and then delved into a discussion on the upcoming election and the impacts on valuation. We finished, as always with a Q&A
Kseniya Leshchenko: Shared development support service model as the way to ma...Lviv Startup Club
Kseniya Leshchenko: Shared development support service model as the way to make small projects with small budgets profitable for the company (UA)
Kyiv PMDay 2024 Summer
Website – www.pmday.org
Youtube – https://www.youtube.com/startuplviv
FB – https://www.facebook.com/pmdayconference
Digital Transformation and IT Strategy Toolkit and TemplatesAurelien Domont, MBA
This Digital Transformation and IT Strategy Toolkit was created by ex-McKinsey, Deloitte and BCG Management Consultants, after more than 5,000 hours of work. It is considered the world's best & most comprehensive Digital Transformation and IT Strategy Toolkit. It includes all the Frameworks, Best Practices & Templates required to successfully undertake the Digital Transformation of your organization and define a robust IT Strategy.
Editable Toolkit to help you reuse our content: 700 Powerpoint slides | 35 Excel sheets | 84 minutes of Video training
This PowerPoint presentation is only a small preview of our Toolkits. For more details, visit www.domontconsulting.com
[Note: This is a partial preview. To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations]
Sustainability has become an increasingly critical topic as the world recognizes the need to protect our planet and its resources for future generations. Sustainability means meeting our current needs without compromising the ability of future generations to meet theirs. It involves long-term planning and consideration of the consequences of our actions. The goal is to create strategies that ensure the long-term viability of People, Planet, and Profit.
Leading companies such as Nike, Toyota, and Siemens are prioritizing sustainable innovation in their business models, setting an example for others to follow. In this Sustainability training presentation, you will learn key concepts, principles, and practices of sustainability applicable across industries. This training aims to create awareness and educate employees, senior executives, consultants, and other key stakeholders, including investors, policymakers, and supply chain partners, on the importance and implementation of sustainability.
LEARNING OBJECTIVES
1. Develop a comprehensive understanding of the fundamental principles and concepts that form the foundation of sustainability within corporate environments.
2. Explore the sustainability implementation model, focusing on effective measures and reporting strategies to track and communicate sustainability efforts.
3. Identify and define best practices and critical success factors essential for achieving sustainability goals within organizations.
CONTENTS
1. Introduction and Key Concepts of Sustainability
2. Principles and Practices of Sustainability
3. Measures and Reporting in Sustainability
4. Sustainability Implementation & Best Practices
To download the complete presentation, visit: https://www.oeconsulting.com.sg/training-presentations
Top mailing list providers in the USA.pptxJeremyPeirce1
Discover the top mailing list providers in the USA, offering targeted lists, segmentation, and analytics to optimize your marketing campaigns and drive engagement.
3.0 Project 2_ Developing My Brand Identity Kit.pptxtanyjahb
A personal brand exploration presentation summarizes an individual's unique qualities and goals, covering strengths, values, passions, and target audience. It helps individuals understand what makes them stand out, their desired image, and how they aim to achieve it.
B2B payments are rapidly changing. Find out the 5 key questions you need to be asking yourself to be sure you are mastering B2B payments today. Learn more at www.BlueSnap.com.
Building Your Employer Brand with Social MediaLuanWise
Presented at The Global HR Summit, 6th June 2024
In this keynote, Luan Wise will provide invaluable insights to elevate your employer brand on social media platforms including LinkedIn, Facebook, Instagram, X (formerly Twitter) and TikTok. You'll learn how compelling content can authentically showcase your company culture, values, and employee experiences to support your talent acquisition and retention objectives. Additionally, you'll understand the power of employee advocacy to amplify reach and engagement – helping to position your organization as an employer of choice in today's competitive talent landscape.
1. FROM EVENTING TO JOB PROCESSING
Carsten Ziegeler | Adobe Research Switzerland
1
2. 2
• RnD Team at Adobe Research Switzerland
• OSGi Core Platform and Enterprise Expert Groups
• Member of the ASF
• Current PMC Chair of Apache Sling
• Apache Sling, Felix, ACE
• Conference Speaker
• Technical Reviewer
• Article/Book Author
ABOUT
3. 3
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
Felix OSGi Event Admin
4. 4
OSGI EVENT ADMIN
Publish / Subscribe Model
OSGi Event Admin
Component
A
publish
deliver
Component
X
Component
Y
5. 5
• OSGi event is a data object with
• Topic (hierarchical namespace)
• Properties (key-value-pairs)
• Page Event
• Topic: com/day/cq/wcm/core/page
• Properties: path, change type (add/remove/edit) etc.
OSGI EVENT ADMIN
Publish / Subscribe Model
6. 6
• Publisher creates event object
• Sends event through EventAdmin service
• Either sync or async delivery
• Subscriber is an OSGi service
• Service registration properties
• Interested topic(s)
• Additional filters (optional)
OSGI EVENT ADMIN
Publish / Subscribe Model
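The publish/subscribe flow above can be sketched as a plain-Java simulation (a hypothetical `MiniEventAdmin`, not the real OSGi `EventAdmin` API): subscribers register per topic, publishers post events, and delivery is immediate with no guarantees.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal plain-Java simulation of the Event Admin publish/subscribe model.
// Not the real OSGi API: topics map to lists of subscriber handlers.
class MiniEventAdmin {
    private final Map<String, List<Consumer<Map<String, Object>>>> subscribers = new HashMap<>();

    // Subscriber registers interest in a topic (like the "event.topics" service property)
    void subscribe(String topic, Consumer<Map<String, Object>> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Publisher sends an event: immediate delivery to currently available subscribers,
    // no persistence and no guarantee of delivery, mirroring the bullets above.
    void post(String topic, Map<String, Object> properties) {
        for (Consumer<Map<String, Object>> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(properties);
        }
    }
}
```

A subscriber that was never registered for a topic simply never sees its events; nothing is queued or retried.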
7. 7
• Immediate delivery to available subscribers
• No guarantee of delivery
• No distributed delivery
OSGI EVENT ADMIN
Publish / Subscribe Model
8. 8
• Immediate delivery to available subscribers
• No guarantee of delivery
• No distributed delivery
OSGI EVENT ADMIN
Publish / Subscribe Model
Discovery
Sling
Job Distribution
9. 9
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
10. 10
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
12. 12
Instance: Unique Id (Sling ID)
TOPOLOGIES I
Apache Sling Discovery
[Diagram: a single standalone instance and a clustered (CRX) installation of three instances, each instance with its own unique Sling ID]
13. 13
• Instance: Unique Id (Sling ID)
• Cluster: Unique Id and leader
TOPOLOGIES I
Apache Sling Discovery
[Diagram: the single instance forms one cluster and the three clustered instances another (Cluster 35 and Cluster 99), each cluster with an elected leader]
14. 14
TOPOLOGIES I
Apache Sling Discovery
• Instance: Unique Id (Sling ID)
• Cluster: Unique Id and leader
• Topology: Set of clusters
[Diagram: the two clusters combined into a single topology]
15. 15
TOPOLOGIES I
Apache Sling Discovery
• Instance: Unique Id (Sling ID)
• Cluster: Unique Id and leader
• Topology: Set of clusters
[Diagram: the same topology view, with Cluster 35 and Cluster 99 labeled]
16. 16
• Instance
• Sling ID
• Optional: Name and description
• Belongs to a cluster
• Might be the cluster leader
• Additional properties which are distributed
• Extensible through own services (PropertyProvider)
• E.g. data center, region or enabled job topics
• Cluster
• Elects (stable) leader
• Stable instance ordering
• TopologyEventListener: receives events on topology changes
TOPOLOGIES II
Apache Sling Discovery
[Diagram: Instance 3 (ID: 42) highlighted as leader within its cluster and the topology]
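The "stable leader" idea can be illustrated with a minimal sketch (a hypothetical `ClusterView` helper; the real discovery implementation uses repository-backed coordination, here we simply assume the lexicographically smallest Sling ID wins): because every instance computes the same deterministic ordering, all cluster members agree on the leader without extra negotiation.

```java
import java.util.*;

// Hypothetical sketch of stable instance ordering and leader election in a
// cluster: sort the Sling IDs deterministically and pick the first one.
// Every instance running this computes the same result, which is the point
// of the "stable leader" / "stable instance ordering" bullets above.
class ClusterView {
    static List<String> stableOrdering(Collection<String> slingIds) {
        List<String> ordered = new ArrayList<>(slingIds);
        Collections.sort(ordered); // deterministic on every instance
        return ordered;
    }

    static String electLeader(Collection<String> slingIds) {
        return stableOrdering(slingIds).get(0); // first in the stable order leads
    }
}
```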
17. 17
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
18. 18
• Job: Guaranteed processing, exactly once
• Exactly one job consumer
• Started by client code, e.g. for replication, workflow...
• Job topic
• Payload is a serializable map
• Sling Job Manager handles and distributes them
• Sends out payload events to a consumer…
• …and waits for response
• Retry and failover
• Notification listeners (fail, retry, success)
JOB HANDLING I
Apache Sling Job Handling
19. 19
STARTING / PROCESSING A JOB
Apache Sling Job Handling
public interface JobConsumer {

    String PROPERTY_TOPICS = "job.topics";

    enum JobResult {
        OK,
        FAILED,
        CANCEL,
        ASYNC
    }

    JobResult process(Job job);
}

public interface JobManager {

    Job addJob(String topic, String optionalName, Map<String, Object> properties);
    …
}
Starting a job
Processing a job
Note: Starting/processing of jobs through Event Admin is deprecated but still supported
New in 5.6.1
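To make the contract concrete, here is a toy in-memory illustration (a hypothetical `ToyJobs` class, not the Sling implementation): a job is handed to exactly one consumer registered for its topic, and the consumer reports the outcome via `JobResult`. Persistence, queues, and distribution of the real Job Manager are omitted.

```java
import java.util.*;

// Toy in-memory illustration of the JobConsumer/JobManager contract shown
// above. Names mirror the slide; everything else is simplified.
class ToyJobs {
    enum JobResult { OK, FAILED, CANCEL, ASYNC }

    interface JobConsumer {
        JobResult process(String topic, Map<String, Object> payload);
    }

    private final Map<String, JobConsumer> consumers = new HashMap<>();
    private final List<String> log = new ArrayList<>();

    void register(String topic, JobConsumer consumer) {
        consumers.put(topic, consumer);
    }

    // addJob: the job is processed by exactly one consumer for its topic
    JobResult addJob(String topic, Map<String, Object> payload) {
        JobConsumer c = consumers.get(topic);
        if (c == null) return JobResult.FAILED; // no consumer can process it
        JobResult r = c.process(topic, payload);
        log.add(topic + ":" + r);
        return r;
    }

    List<String> log() { return log; }
}
```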
20. 20
• New jobs are immediately persisted (resource tree / repository)
• Jobs are “pushed” to the processing instance
• Processing instances use different queues
• Associated with job topic(s)
• Main queue
• 0..n custom queues
• For example: replication agent queue or workflow queue
JOB HANDLING II
Apache Sling Job Handling
21. 21
• Queue is configurable
• Queue is started on demand in own thread
• And stopped if unused for some time
• Types
• Ordered queue (e.g. replication)
• Parallel queue: topic round robin (e.g. workflow)
• Limit for parallel threads per queue
• Number of retries (-1 = endless)
• Retry delay
• Thread priority
JOB QUEUE
Apache Sling Job Handling
22. 22
• Job Manager Configuration = Main Queue Configuration
• Maximum parallel jobs (15)
• Retries (10)
• Retry Delay
• Eventing Thread Pool Configuration
• Used by all queues
• Pool size (35) = Maximum parallel jobs for a single instance
ADDITIONAL CONFIGURATIONS
Apache Sling Job Handling
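The retry settings above can be sketched as follows (a hypothetical `RetryingQueue`; real queues also honor the retry delay and thread priority, which are omitted here). A retry limit of -1 means endless retries.

```java
import java.util.function.Supplier;

// Sketch of the retry behaviour described above: a queue retries a failing
// job up to a configured number of times (-1 = endless) before giving up.
// The real queue would also sleep for the configured retry delay between
// attempts; that is left out to keep the sketch testable.
class RetryingQueue {
    // returns true if the job succeeded within the allowed retries
    static boolean run(Supplier<Boolean> job, int maxRetries) {
        int attempt = 0;
        while (true) {
            if (job.get()) return true;                                 // success
            if (maxRetries >= 0 && attempt >= maxRetries) return false; // retries exhausted
            attempt++;                                                  // retry
        }
    }
}
```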
24. 24
• Each instance determines enabled job topics
• Derived from Job Consumers (new API required)
• Can be whitelisted/blacklisted (in Job Consumer Manager)
• Announced through Topology (used e.g. by Offloading)
• Job Distribution depends on enabled job topics and queue type
• Potential set of instances derived from topology (enabled job topics)
• Ordered: processing on leader only, one job after the other
• Parallel: round-robin distribution on all potential instances
• Local cluster instances have preference
• Failover
• Instance crash: leader redistributes jobs to available instances
• Leader change taken into account
• On enabled job topics changes: potential redistribution
JOB DISTRIBUTION
Apache Sling Job Handling
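The distribution rules above can be condensed into a small sketch (a hypothetical `Distributor` helper, a simplification of the real logic): ordered queues process on the leader only, while parallel queues round-robin over the instances that announced the topic as enabled.

```java
import java.util.*;

// Simplified illustration of the job distribution rules above. The real
// implementation also prefers local cluster instances and handles failover;
// this only shows the leader-only vs round-robin decision.
class Distributor {
    // candidates: instances whose enabled job topics include this job's topic
    static String target(List<String> candidates, boolean ordered, String leader, int jobIndex) {
        if (ordered) {
            return leader;                                   // ordered: leader only
        }
        return candidates.get(jobIndex % candidates.size()); // parallel: round robin
    }
}
```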
25. 25
• Scalability in AEM:
• DAM Ingestion
• Non-Clustered installation requirement
• The term Offloading:
• In AEM used for all things job distribution and topology in clustered and non-clustered installations, e.g. ‘Offloading Browser’
• More technically it’s ‘only’ a little add-on in Granite to Sling Job Distribution for handling non-clustered installations
WHY OFFLOADING
26. 26
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
27. 27
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
33. 33
• Detects offloading jobs
• Transports job and job payload between origin and target instance
• Uses replication for the transport
• No distribution of jobs
• No execution of jobs
OFFLOADING FRAMEWORK
Overview
[Diagram: two instances (Sling ID 1 and Sling ID 2), each with a Job Manager, an Offloading Job Manager, and a Job Consumer (topic A on instance 1, topic C on instance 2); a job on topic C created on instance 1 is transported via replication to instance 2 in seven numbered steps, with topology information driving the assignment]
35. 35
• Defines the content to be transported between the instances
• Properties on the job payload
• Takes comma separated list of paths
• Used to build the replication package
• The job itself is implicitly added by the offloading framework
• Offloading job input
• Property: OffloadingJobProperties.INPUT_PAYLOAD (offloading.input.payload)
• Offloading job output
• Property: OffloadingJobProperties.OUTPUT_PAYLOAD (offloading.output.payload)
OFFLOADING FRAMEWORK
Payload
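Building the comma-separated payload property can be sketched like this (a hypothetical `OffloadingPayload` helper; the real framework derives the replication package from these paths):

```java
import java.util.*;

// Sketch of the offloading payload property format described above: a
// comma-separated list of repository paths. The helper methods are
// illustrative, not part of the Granite API.
class OffloadingPayload {
    // build the property value from a list of paths
    static String join(List<String> paths) {
        return String.join(",", paths);
    }

    // parse the property value back into individual paths
    static List<String> split(String property) {
        return Arrays.asList(property.split(","));
    }
}
```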
36. 36
• Configures Job Consumers
• Configures the topic white/black listing properties of each instance
• What jobs to execute on what instance
• Configures the distribution
• Configuration applies to both clustered and non-clustered installations
OFFLOADING BROWSER
UI
40. 40
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
41. 41
• New JobConsumer
• Class: WorkflowOffloadingJobConsumer
• Topic: com/adobe/granite/workflow/offloading
• Can launch new workflows
• Expects the workflow model on the job payload
• Expects the workflow payload on the job payload
• For use with clustered and non-clustered installations
WORKFLOW DISTRIBUTION
42. 42
@Component
@Service
@Property(name = JobConsumer.PROPERTY_TOPICS, value = WorkflowOffloadingJobConsumer.TOPIC)
public class WorkflowOffloadingJobConsumer implements JobConsumer {

    public static final String TOPIC = "com/adobe/granite/workflow/offloading";
    public static final String WORKFLOW_OFFLOADING_MODEL = "offloading.workflow.model";
    public static final String WORKFLOW_OFFLOADING_PAYLOAD = "offloading.workflow.payload";

    public JobResult process(Job job) {
        // read workflow model and payload from job payload
        String modelPath = job.getProperty(WORKFLOW_OFFLOADING_MODEL, "");
        String payloadPath = job.getProperty(WORKFLOW_OFFLOADING_PAYLOAD, "");
        // get/create WorkflowSession, WorkflowModel and WorkflowData objects
        WorkflowSession wfSession = ..;
        WorkflowModel wfModel = ..;
        WorkflowData wfData = ..;
        // start the workflow
        wfSession.startWorkflow(wfModel, wfData, metaData);
        // all good
        return JobResult.OK;
    }
}
WORKFLOW DISTRIBUTION
Job Consumer (Simplified)
43. 43
WORKFLOW DISTRIBUTION
Job Consumer (Simplified)
• Create service component
• Must register with topic
• Implement new JobConsumer interface
44. 44
WORKFLOW DISTRIBUTION
Job Consumer (Simplified)
• Define the job topic
45. 45
WORKFLOW DISTRIBUTION
Job Consumer (Simplified)
• Access job properties (payload)
• Read workflow model and payload from job properties
46. 46
WORKFLOW DISTRIBUTION
Job Consumer (Simplified)
• Workflow specific
• Use workflow API to start workflow for the given model and payload
47. 47
WORKFLOW DISTRIBUTION
Job Consumer (Simplified)
• Use JobResult enumeration to report back the job status
48. 48
OVERVIEW
Where to fit in the stack
Discovery
Offloading
Workflow Distribution
DAM Ingestion
Sling
Granite
AEM
Job Distribution
49. 49
• Default ingestion workflow: “DAM Update Asset”
• Load is put on the instance where the workflow is started, usually the author
• New ingestion workflow: “DAM Update Asset Offloading”
• Needs to be manually enabled by changing the workflow launcher
• New workflow model with a single step: AssetOffloadingProcess
• Uses WorkflowExternalProcess API
• Creates a new job on topic: com/adobe/granite/workflow/offloading
• Allows distributing the default ingestion workflow
• Load is put on the instance where the job is distributed to
• Can be used to distribute in clustered and non-clustered installations
DAM INGESTION
52. 52
@Component
@Service
public class AssetOffloadingProcess implements WorkflowExternalProcess {

    @Reference
    private JobManager jobManager;

    private static final String TOPIC = "com/adobe/granite/workflow/offloading";

    public Serializable execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap metaDataMap) {
        Asset asset = ..;
        String workflowModel = "/etc/workflow/models/dam/update_asset/jcr:content/model";
        String workflowPayload = "/content/dam/geometrixx-outdoors/articles/downhill-ski-conditioning.jpg";
        ValueMap jobProperties = new ValueMapDecorator(new HashMap<String, Object>());
        jobProperties.put(WORKFLOW_OFFLOADING_MODEL, workflowModel);
        jobProperties.put(WORKFLOW_OFFLOADING_PAYLOAD, workflowPayload);
        String offloadingInput = "/etc/workflow/models/dam/update_asset/jcr:content/model,/content/dam/geometrixx-outdoors/articles/downhill-ski-conditioning.jpg";
        String offloadingOutput = "/etc/workflow/models/dam/update_asset/jcr:content/model,/content/dam/geometrixx-outdoors/articles/downhill-ski-conditioning.jpg";
        jobProperties.put(OffloadingJobProperties.INPUT_PAYLOAD.propertyName(), offloadingInput);
        jobProperties.put(OffloadingJobProperties.OUTPUT_PAYLOAD.propertyName(), offloadingOutput);
        Job offloadingJob = jobManager.addJob(TOPIC, null, jobProperties);
        return offloadingJob.getId();
    }
}
DAM INGESTION
Create Job (from workflow step)
• DAM and Workflow specific
• Resolve to Asset
• Read model from meta data
• Read workflow payload from Asset path
53. 53
DAM INGESTION
Create Job (from workflow step)
• ValueMap for job properties
• Put model and payload on job properties
• Used by the JobConsumer
54. 54
(same execute() code as on the previous slide)
DAM INGESTION
Create Job (from workflow step)
• Build offloading payload properties
• Comma-separated list of paths
• Put them on the job payload as well
• Only used for non-clustered distribution
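The comma-separated payload described above can be assembled with plain string handling. The sketch below is illustrative only: the `OffloadingPayload` class, its helper names, and the example paths are assumptions, not part of the AEM API.

```java
import java.util.Arrays;
import java.util.List;

public class OffloadingPayload {

    // Joins repository paths into the comma-separated form used
    // for the offloading input/output payload job properties.
    static String buildOffloadingPayload(List<String> paths) {
        return String.join(",", paths);
    }

    // Splits a comma-separated payload back into individual paths.
    static List<String> parseOffloadingPayload(String payload) {
        return Arrays.asList(payload.split(","));
    }

    public static void main(String[] args) {
        String payload = buildOffloadingPayload(Arrays.asList(
            "/etc/workflow/models/dam/update_asset/jcr:content/model",
            "/content/dam/geometrixx-outdoors/articles/downhill-ski-conditioning.jpg"));
        System.out.println(payload);
        System.out.println(parseOffloadingPayload(payload).size());
    }
}
```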
55. 55
(same execute() code as on slide 53)
DAM INGESTION
Create Job (from workflow step)
• Create job using the JobManager service
• Use topic from job consumer
• Put job payload properties
• Return the jobId as the workflow process id (workflow specific)
56. 56
@Component
@Service
public class AssetOffloadingProcess implements WorkflowExternalProcess {

    @Reference
    private JobManager jobManager;

    private static final String TOPIC = "com/adobe/granite/workflow/offloading";

    public Serializable execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap metaDataMap){
        ….
    }

    public boolean hasFinished(Serializable externalProcessId, ..){
        // getJobById returns null once the job has finished
        Job offloadingJob = jobManager.getJobById((String) externalProcessId);
        return offloadingJob == null;
    }
}
DAM INGESTION
Create Job (from workflow step)
• Workflow API specific callback
• Process id = jobId, from execute()
• Query job by jobId
• Workflow step finished when job is finished
57. 57
• Choose a job topic
• Create a JobConsumer component and register it with the chosen topic
• To create a new job, use the JobManager.addJob() API with the chosen topic and the job payload
• Add the offloading payload to the job payload
• Bundle and deploy the JobConsumer on the topology instances
• Enable/disable the new topic on the instances, using the Offloading Browser
DEVELOPMENT - RECIPE
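The recipe above can be sketched as a minimal JobConsumer registered against a custom topic. This is a sketch, not AEM product code: the topic string `com/example/myjob`, the class name, and the `payload` property are illustrative assumptions; only the Sling `JobConsumer` API and the Felix SCR annotations used elsewhere in this deck are real.

```java
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Property;
import org.apache.felix.scr.annotations.Service;
import org.apache.sling.event.jobs.Job;
import org.apache.sling.event.jobs.consumer.JobConsumer;

// Consumer for a custom job topic; deploy this bundle on every
// topology instance that should be allowed to execute the job.
@Component
@Service(value = JobConsumer.class)
@Property(name = JobConsumer.PROPERTY_TOPICS, value = "com/example/myjob")
public class MyJobConsumer implements JobConsumer {

    @Override
    public JobResult process(Job job) {
        // Read the payload that was put on the job when it was created.
        String payload = (String) job.getProperty("payload");
        // ... do the actual work here ...
        return JobResult.OK; // OK / FAILED / CANCELLED / ASYNC
    }
}
```

Creating the job then mirrors the workflow step shown earlier: `jobManager.addJob("com/example/myjob", null, jobProperties);`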
59. 59
TAKE AWAY
[Recurring stack diagram: Discovery, Job Distribution, Offloading, Workflow Distribution and DAM Ingestion across the Sling, Granite and AEM layers]
org.apache.sling.discovery.api
• Discovers topology
• Cross-cluster detection
• Foundation for job distribution
60. 60
TAKE AWAY
org.apache.sling.event
• Uses Sling Discovery
• New JobConsumer API and job topics
• New JobManager API for creating new distributed jobs
• Distributes jobs based on available job topics and job queue configuration
• Distributes in the whole topology, including clustered and non-clustered instances
• Can execute cluster-local jobs only
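Which queue a topic lands in is controlled by OSGi configuration. The sketch below assumes the `org.apache.sling.event.jobs.QueueConfiguration` factory PID; the queue name and topic are illustrative, and the property names should be verified against the Sling Eventing documentation for the deployed version.

```
# Factory PID: org.apache.sling.event.jobs.QueueConfiguration
queue.name="example-offloading-queue"
queue.topics=["com/example/myjob"]
queue.type="UNORDERED"
queue.maxparallel=2
queue.retries=3
```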
61. 61
TAKE AWAY
com.adobe.granite.offloading.core
• Builds on top of Sling distributed jobs
• Does not perform distribution
• Detects jobs distributed to non-clustered instances
• Transports the jobs and payload to non-clustered instances
• Uses replication for transport
• Does not execute jobs
62. 62
TAKE AWAY
com.adobe.granite.workflow.core
• Defines a job consumer to distribute the execution of whole workflows
• Defines the topic com/adobe/granite/workflow/offloading
• Implements the WorkflowOffloadingJobConsumer component
• Supports clustered and non-clustered installations
63. 63
TAKE AWAY
cq-dam-core
• Makes use of the com/adobe/granite/workflow/offloading topic from Workflow Distribution
• New workflow step (external step) that creates a new job on the topic com/adobe/granite/workflow/offloading
• New "DAM Update Asset Offloading" workflow
• Supports clustered and non-clustered configurations
65. 65
TAKE AWAY
Potential Future
Felix OSGi Event Admin
From Eventing to Job Processing
New OSGi Specification
• Distributed Eventing
• Cloud Computing
66. 66
TAKE AWAY
Potential Future
Job Distribution
• Improved load balancing
• Pull-based distribution
67. 67
TAKE AWAY
Potential Future
Offloading
• As part of job distribution
• Even simpler setup