The natural tendency for application developers is to construct their code in a procedural, monolithic pattern. Veteran developers know that this leads to error-prone, unscalable, slow software – yet it is alarmingly prevalent. Several architectural patterns have arisen over the years in attempts to mitigate this problem. We’ve heard of Service Oriented Architecture, Integration Patterns, and Event-Driven Systems, but the Reactive pattern has the best chance for success.
In this talk I will discuss the tenets of the Reactive pattern and the importance of moving away from monolithic architectures toward Reactive ones. We will examine Spring Integration and the Grails Async features (along with Netty and RabbitMQ) in order to show how they can quickly and effectively help your application become Reactive. Finally, I will argue that the JVM is currently the best foundation for this architecture – but that if we’re not careful, NodeJS may become the most popular.
Welcome to “Why Reactive Architecture will take over the world, and why we should be afraid it may be NodeJS”. An alternate title: “Why I hate Monolithic apps, just like the one I showed you”. I’m Steve. Let’s talk about application scalability and complexity.
-I won’t try to make anyone guess what this is
-I should explain it a bit, as I’ll be following a similar format for other diagrams
-The blue symbol represents some feature: browsing, managing, payments, cart
-The black box is a single web node or application
-The cloud is the internet
-A few questions, before we begin
-I’d like for you to keep the answers to these in your head as we go along
-or rather, is everything in trunk / master?
-Do you miss Test Driven Development?
whether it be a total overhaul or just a few features. How was it?
And if so, did it take you longer than coding that new feature? Did you have to include other folks to help you? Someone who’d been around for a while that knows the entire codebase? How often does that person get dragged in to help others?
These are all symptoms of a Monolithic application, so if you answered “YES” to any of these, then you probably have a Monolithic app. If you laughed, I’m going to assume that’s a ‘yes’. I feel your pain!
That being said, Monolithic Architecture does have one big draw: it makes it easy for new projects to reach the Minimum Viable Product rapidly.
The complexity will become enormous. Any gains you think you may have made at the beginning of the project will not continue, particularly once you start to attract users… and grow your team.
-A monolithic app will not scale!
You may say to yourself, “Wait, Steve!” what do you mean it won’t scale?
I can create multiple nodes of my app and then load balance them...
Add swimlanes / master-slave DBs in case that doesn’t work. The star is supposed to represent the ‘master’ DB.
… and suuurrre, you certainly could do those things. But that’s not what I mean when I say ‘scale’.
Can you really look at a class or schema diagram like this and think, “Yeah, that’s awesome. Nailed it.”?
-btw, this was the largest I could find on Google Images. I’m sure we’ve all seen the massive versions of this that cover multiple pages, taped to the wall.
-If you ever have to tape one to the wall, you’ve done something wrong
The ability to add new features, fix bugs, or resolve technical debt cannot scale with the size of the application. Throwing more developers at it will not help either; they’ll just get in each other’s way. There will be merge conflicts all over the place.
-One of the best passive-aggressive statements I’ve seen, scrawled on a wall: “You can’t make a baby in 1 month with 9 women.”
-And trying to refactor anything? Forget it.
-I’ve seen it happen plenty of times: your company will start out by creating a team to reimplement a feature or rebuild an application. Sounds good, right? Eventually, this team will start taking too long. Soon enough, the managers will be screaming for updates or new features on the old app. So you end up with *two* teams: one maintaining the old app, while the other continues the rebuild AANNND maintains feature parity with the old system.
As the size of your codebase increases, its complexity will increase disproportionately, with adverse effects on maintenance. This has been known for some time; papers have been written on the subject, although it’s difficult to measure accurately.
This is such a misguided architecture, it makes you wonder how exactly we collectively ended up here.
I lay the blame largely at the feet of the large, popular web frameworks, e.g. Rails, Django, Grails. Don’t get me wrong, they have their uses. I loooooove Grails.
But they’re touted as if they’re the magic cure for building your app or product; as if your company should be based entirely within a framework. You hear folks say, “Oh, what are you using in your startup?” “Why, we’re a Ruby on Rails shop, obviously!” I argue that your framework choice is largely irrelevant (unless it’s Grails ;) ). When people ask “what technology are you using?”, the answer should be something like “Ugh, well, that’s difficult”.
And so we reach the key point of this presentation: The choice you make when designing your architecture is much more valuable than the frameworks or languages you choose.
In fact, that’s so important I’m going to mention it again, but … bigger… this time. Along, of course, with the point that even when choosing our framework, we’d only consider something on the JVM, right?
There have been attempts to mitigate this scalability problem with different architectures. This is nothing new. One of the most widely used, I’d argue, is Service Oriented Architecture, or SOA for short. Is everyone aware of the concept?
-Let’s take our anecdotal design, an e-commerce store.
With SOA, one would break up each feature into an individual component or service, and set up communication between the services via direct service calls, e.g. SOAP or REST.
-Note that these calls are traditionally synchronous over HTTP, so it’s imperative that each node return as quickly as possible
This architecture keeps the individual features easier to manage and keep track of, which reduces overall complexity dramatically, as we can examine each service in isolation rather than one big code base
… as each feature set will be in its own simpler repository
… because with distributed applications like SOA, each node becomes its own concentrated ecosystem. New features can be added with relatively little pain.
And the code is easier to maintain. I highlight this point for a reason. Bugs are vastly easier to discover. Refactoring is highly concentrated and (shouldn’t) disrupt the work of the other services.
and other numbers that managers love
And it’s easier to ‘scale’ in the traditional sense.
… if Grails is best for one service, fantastic. If Spring is, great. Perhaps this service needs Redis as a database? Great! This other one, MySQL? Perfect!
-organizing the communication and seeing things flow and respond, it’s magical
However! (There’s always a downside, eh?)
If you embrace SOA you’ll need someone to oversee the team and catch the engineers who are straying from the distributed vision. In other words, you’ll need a service-focused architect, or a team of architects, to design the system
-find out how each will communicate
Communication between your services can be expensive, especially if it’s synchronous. HTTP calls are relatively slow, and these calls can block resources on both services that other commands could be using.
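To make the blocking cost concrete, here's a minimal sketch in plain Java. The two "service calls" are hypothetical stand-ins (simulated with `Thread.sleep`) for synchronous HTTP round trips; the point is that the caller's thread is tied up for each call in turn, so total latency is the *sum* of the calls.

```java
// Sketch: the cost of synchronous service-to-service calls.
// callInventoryService/callPricingService are hypothetical stand-ins
// for blocking HTTP calls, each taking ~100 ms.
public class SynchronousCalls {
    static String callInventoryService() throws InterruptedException {
        Thread.sleep(100); // simulate a 100 ms HTTP round trip
        return "in-stock";
    }

    static String callPricingService() throws InterruptedException {
        Thread.sleep(100); // another 100 ms round trip
        return "$9.99";
    }

    // Runs both calls back to back and returns the elapsed milliseconds.
    static long timedSequentialCalls() throws InterruptedException {
        long start = System.nanoTime();
        callInventoryService(); // the caller's thread is blocked here...
        callPricingService();   // ...and again here
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // Total is ~200 ms: the sum of the two calls, never less.
        System.out.println("two sequential calls took ~"
                + timedSequentialCalls() + " ms");
    }
}
```

With ten such hops in a request path, those sums add up fast, which is exactly why each SOA node must return as quickly as possible.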
-A web of interconnected services can also grow out of control. Having to configure service nodes so that each knows where the others are located on the network can be cumbersome to manage.
In other words, SOA (along with the other architectural patterns that have attempted to fix the monolith) does a decent job...
... we can do better
So what does this have to do with ‘Reactive’? That’s really why we’re here, right? There was a point to that lead in, I assure you.
I should be clear: The term ‘Reactive’ is a buzzword…
Popularized by a company called Typesafe. They are the maintainers of the Play and Akka Frameworks, and they believe that those two (along with Scala) are the embodiment of the Reactive buzzword.
To be Reactive, an application must exhibit 4 traits:
-Communication within the system, and actions, should be done via events, rather than long procedural code
-This naturally promotes highly decoupled code. Sender and recipient can be constructed without having to know or care about the implementation details of the other
-Furthermore, if you were at my last talk: by using events, we can monitor the *history* of our application
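The decoupling point can be sketched with a toy in-process event bus (a hypothetical illustration, not any particular library's API): the publisher and the subscriber share only an event name, never each other's classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-process event bus: senders and recipients are coupled only
// by the event name, not by each other's implementation details.
public class EventBus {
    private final Map<String, List<Consumer<Object>>> listeners = new HashMap<>();

    public void subscribe(String event, Consumer<Object> handler) {
        listeners.computeIfAbsent(event, k -> new ArrayList<>()).add(handler);
    }

    public void publish(String event, Object payload) {
        // Deliver to every handler registered for this event name.
        for (Consumer<Object> h : listeners.getOrDefault(event, List.of())) {
            h.accept(payload);
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // The cart feature reacts to "order.placed" without knowing who raised it.
        bus.subscribe("order.placed", payload ->
                System.out.println("cart cleared for " + payload));
        bus.publish("order.placed", "order-42"); // prints: cart cleared for order-42
    }
}
```

Because every action flows through `publish`, you also get the event *history* for free: add one more subscriber that appends each event to a log.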
-The system should be able to quickly stretch and grow based on the demand placed on it. It should be able to be replicated effectively across multiple nodes
-Dynamic scaling is an optimal use of resources: scale up quickly when needed, but scale down when not (save $)
-Also, like I mentioned earlier, it’s about more than just machine deployment; scalability is also about how easily your developers can maintain your app
-A Reactive application is resilient to failure. If one service node breaks down, the others should be able to take up the slack. If one feature goes down, the others should still operate
-Your system should be able to suffer damage and still operate
-For example, in the ecomm example: if the order placement feature goes down, then while your team is scrambling to fix it, the end user should still be able to browse the products and add items to the cart
Your application, in general, should respond to user interaction as quickly as possible. Each service should handle events rapidly. This leads to a pleasing experience for your end user. The faster your application responds to their input, the less time they sit staring at your app, and the happier they’ll be.
Even with all that, there’s no One Correct Reactive architecture.
Rather, Reactive is a mindset or a state of being. It’s a goal that you need to orient your thinking around when you are designing or architecting your individual service nodes and your distributed architecture
The people at Typesafe wrote up this thinking into something they call the ‘Reactive Manifesto’. After this talk, if you like what I’m saying, feel free to go and sign it.
Anyway, the Manifesto does say at one point that… <read> Building on that, I believe that the best template or pattern to follow in order to make a Reactive application is to:
-take a services-oriented approach
-have each node use an event-based model to communicate internally
-have nodes communicate with each other via events instead of direct HTTP
So, going back to the original e-commerce example… - how to go reactive?
First, of course, is to break up our application into individual components, but this alone is not good enough. It could still fall short of being resilient and responsive, even though it’s certainly more scalable than it was.
Each service should be as small as possible, and be event-driven. The JVM really shines at this.
Some examples in the Groovy world:
-Grails async
-Ratpack (which is built on Netty.io)
-Netflix’s Groovy adapter for RxJava
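You don't need any of those libraries to see the shape of the idea: the JDK's own CompletableFuture (Java 8+) sketches event-driven composition. The service names below are hypothetical; the point is that both lookups run off the caller's thread and the final callback fires only when both results arrive.

```java
import java.util.concurrent.CompletableFuture;

// Two hypothetical lookups composed asynchronously on the JVM:
// the caller never blocks waiting for either one individually.
public class AsyncComposition {
    static CompletableFuture<String> stockFor(String sku) {
        return CompletableFuture.supplyAsync(() -> sku + ": in stock");
    }

    static CompletableFuture<String> priceFor(String sku) {
        return CompletableFuture.supplyAsync(() -> "$9.99");
    }

    public static void main(String[] args) {
        stockFor("sku-1")
            // Combine the two results whenever both futures complete.
            .thenCombine(priceFor("sku-1"), (stock, price) -> stock + " at " + price)
            .thenAccept(System.out::println) // reacts when the combined result arrives
            .join(); // demo only: keep main alive until the pipeline finishes
    }
}
```

Grails async, Ratpack, and RxJava each wrap this style in richer abstractions (promises, observables), but the underlying model, react to completion events rather than block on calls, is the same.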
In other words, extend the idea of having an event driven node or service into the notion of having event-driven communication between services. -And how do we facilitate event based communication between our services?
One method: Message Brokers!
A basic overview of message brokers:
-Messages are routed through an EXCHANGE into one or many queues
-Messages are then plucked off the queue by an attached, waiting consumer (or the first available, if multiple are attached)
The key to using message brokers in our reactive apps: when certain events occur, encapsulate the data into individual messages and drop them on some queue or exchange
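Here's that pattern sketched with an in-memory queue standing in for a broker queue (a real deployment would use a RabbitMQ client over AMQP; the queue and message names here are illustrative). The producer drops a message and moves on; a consumer on another thread plucks it off whenever it's ready.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Broker-style decoupling with an in-memory queue as a stand-in for a
// RabbitMQ queue. Producer and consumer share only the queue itself.
public class QueueSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> orderQueue = new LinkedBlockingQueue<>();

        // Consumer: waits for the next message, like a broker consumer.
        Thread consumer = new Thread(() -> {
            try {
                String msg = orderQueue.take(); // blocks until a message arrives
                System.out.println("processing " + msg);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: drops the event on the queue, neither knowing nor
        // caring who consumes it, or how many consumers exist.
        orderQueue.put("order-42");
        consumer.join();
    }
}
```

Attach more consumer threads to the same queue and the messages are load-balanced across them, which is exactly the scaling knob the next slides turn.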
Can spin up or down nodes for targeted scaling of your services as demand dictates.
-Example from before, an e-commerce site
-we’ve broken the features into individual applications and spun up one node each
-connected the users’ cart to a queue, and our order processing to another queue
-suppose the cart and order processing start experiencing heavy load
-programmatically spin up new instances to handle it
-Going back to the refactoring discussion -> we can pit different implementations of a service against each other via A/B testing
LOCATION: Nodes do not care where the other nodes are, only worry about the broker
CARDINALITY: Each service instance does not care about the number of others out there
Why RabbitMQ?
-it uses the raw AMQP protocol (as do some of the others I mentioned)
-the actual broker itself is a lightweight Erlang application
-it offers two key reliability features: message persistence and error recovery
And, there’s a fantastic plugin written by Peter Ledbrook and Jeff Brown
Reactive architectures are in use by several companies, particularly large ones, although they may not refer to them as Reactive
-make heavy use of AWS
-replicate bundles of services as demand dictates
-Netflix is big on resiliency. In order to test their resiliency, they use a custom tool called ‘Chaos Monkey’
Every weekday between 9am and 5pm (a.k.a. “Work Hours”), Netflix unleashes this adorable little guy into their infrastructure, where he goes to work randomly destroying various services.
These services need to prove their strength and stand up to the monkey. Or rather, this allows Netflix to find weaknesses and bugs in their services
…Twitter is another large one, perhaps the biggest example
They had one of the largest, most monolithic Rails deployments in the world.
-Engineers needed to pull in global experts to get anything done on the codebase
-They spent more time on “archeological digs” or “whale hunting expeditions” (tracking down bugs or learning how things worked) than on developing new features
The turning point came during the 2010 World Cup. A flurry of tweets brought Twitter to its knees. The service was unavailable; engineers worked through the night.
-I debated putting a vuvuzela sound effect here.
They shattered their monolithic application into multiple individual applications. The size of the circles represents the number of deployment nodes of that particular application service.
-I know I just spoke about Rabbit, but Twitter developed their own tool, called Finagle, which handles asynchronous communication between nodes
Each application had their own small, determined, dedicated teams.
-they switched from the Ruby virtual machine to the JVM. Twitter was by no means a stranger to the JVM; the search feature was already written in Java
-Embraced event-based programming from the bottom up
-Went with Netty (as opposed to something like NodeJS), which is an NIO technology for the JVM
When Twitter was monolithic, they were able to achieve about 200 to 300 requests (tweets) per second per application node. After going Reactive, they’re up to 10 to 20 thousand. That’s roughly 2 orders of magnitude!
So, as of August 2013, Twitter handles an average of 5,700 Tweets per second, and under load can dynamically ramp that number up to 144k without any direct action by staff. I don’t believe they’ve actually reached the capacity of their new system.
With 5x - 12x fewer machines than before! Imagine the savings on server infrastructure!
That’s kind of mind-blowing, right? Not bad, eh?
-> Incidentally, JVM advocates and Rails detractors love to talk about how Twitter moved away from Rails to Java. And while it’s true that Java gives better performance, the architecture choice was vastly more important
So, What does this all have to do with NodeJS?
The event loop is quite good with a high number of low activity connections.
Node has done an excellent job at furthering event-based programming and exposing developers to the concept that might not have otherwise experienced it.
-garbage collection is horrendous
-there’s no proper package or module import system (this will be coming along in the next few years)
-the interpreter has some very strange quirks
-it’s not a great choice for numerical work or precision numbers. There’s only a ‘Number’ object, which does horrible things to floats
-0.1*0.4. The answer is not 0.04
-Not specifically JS, but the project structure conventions and build tools are still young and immature (compared to the JVM)
-Plus, if you’re like me, you start to miss things like static typing
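To be fair, that float quirk isn't unique to JavaScript: Number is an IEEE-754 double, the same representation as a Java double, so the JVM reproduces it exactly. JS's real limitation is that Number is the *only* numeric type, whereas the JVM also gives you int, long, and BigDecimal for precision work.

```java
// IEEE-754 double arithmetic: 0.1 and 0.4 have no exact binary
// representation, so their product is not exactly 0.04.
public class FloatQuirk {
    public static void main(String[] args) {
        System.out.println(0.1 * 0.4);         // prints 0.04000000000000001
        System.out.println(0.1 * 0.4 == 0.04); // prints false
    }
}
```

In Java you'd reach for `new BigDecimal("0.1").multiply(new BigDecimal("0.4"))` when exact decimals matter; in JS there is no built-in equivalent.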
Yet despite these things, NodeJS is becoming increasingly popular. This graph is from a web technology survey site called w3techs, and shows the percentage of all web apps that are powered by NodeJS (according to those that responded to the survey). In the past 6 months, it has gone from powering 0.02% of ALL web apps to powering 0.065% of ALL web apps.
Furthermore, it is extremely popular with high-traffic websites. There is power in the event-loop model. What NodeJS is showing us, though, is that the standard ‘thread pool with a single request per thread’ model is antiquated – which I think shows, again, that architecture choice is the most important thing.
I firmly believe that what this shows is that the world is seeing the power of Event-based and Reactive architectures. I feel that Groovylang and the JVM in general can compete with and outperform NodeJS. To do that, we as a community should spread awareness
Use tools like Grails async, Netflix’s RxJava Groovy adapter, and Ratpack (or even Netty) to build event-based Reactive apps. Contribute to the community.
Tweet, blog about the power of Groovylang and the JVM. Maybe even shout to people about it in the streets.
On this same topic, a quick anti-example of what can happen if we do not do this.
In 2013, PayPal rebuilt the web application that serves their home page, a user’s activity feed, and a user’s wallet. To mitigate the risk of the new system, they built two versions: one using a home-grown Java framework based on Spring, which they knew how to integrate and scale with their other systems; the other using NodeJS. They released a blog post which describes the process, details the results… and slams the JVM.
With the new Java version of the app, they were able to achieve approximately 1.8 page requests per second with a single user. At 10 concurrent users, this becomes about 11.5 requests per second (average response time of 1800 milliseconds).
With the NodeJS application, they were able to achieve 3.3 page requests per second with one user. At 20 concurrent users, it processed 24 requests per second, with an average response time of 1200 milliseconds.
Now, they made news because PayPal claimed they were serving these pages 2x as fast, just by switching to NodeJS! And yet these performance numbers are nothing to be truly excited about. Response times of over 1 second under extraordinarily light load? I wouldn’t brag about that. But I believe this is not the entire story.
-the backend services are the bottleneck or weak point
-the front-end app is Sonic, with a bunch of turtles for the backend services
-backend services are not responsive
-synchronous communication blocks
They may be excited about their technology switch, but the overall architecture of their company didn’t change and so performance did not greatly increase.
… They should have embraced the Reactive mindset, as Twitter and Netflix have done. And so, I leave you with this:
Why Reactive Architecture Will Take Over The World (and why we should be wary of NodeJS)