To register for this webinar replay, click here:
https://info.dynatrace.com/apm_wc_nodejs_na_registration.html
There is no doubt that Node.js is one of the fastest growing platforms today. It can be found at start-ups and enterprises throughout all industries from high-tech to healthcare.
A lot of people have written about the reasons for its popularity and why it has made sense in “digital transformation” efforts. But when you implement Node.js, do you have to replace your mainframes and legacy software with a shiny new Node.js-based microservice architecture?
This 30-minute webinar walks in the shoes of those who oversee the whole digital value chain: operations and performance teams. We will cover:
Node.js implementation requirements (Hint: you might not have to gut your whole system)
What challenges operations and performance teams face when they begin to implement Node.js
The big four gotchas that can make using Node.js difficult for an operations team
Gain the know-how to support your development and ops teams in implementing Node.js.
Confidential, Dynatrace, LLC
Participate in our Forum: community.dynatrace.com
Like us on Facebook: facebook.com/dynatrace
Follow us on LinkedIn: linkedin.com/company/dynatrace
Follow us on Twitter: twitter.com/dynatrace
Watch our Videos & Demos: youtube.com/dynatrace
Read our Blog: https://www.dynatrace.com/blog/
Connect with us!
Presenter's notes
Those are NOT startups. Those are big brands. This is unique.
Server-side JavaScript
Built on Google V8
Evented non-blocking IO
8,000 lines of C/C++ code, 2,000 lines of JavaScript
But I really like this definition.
What V8 does: It translates JavaScript into machine code and runs it.
And every running program is represented by
Work done on the CPU
Data stored in memory
And here we already have our two problem classes.
Let’s review the memory handling of Node.js
It's fairly conventional and quite similar to Java's.
RSS (resident set size): the code segment and the stack, where local variables are stored.
It also contains the heap, where long-lived resources such as objects and closures live.
And it’s actually very easy to query the memory usage of a given Node process.
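A minimal example of that query, using the `process.memoryUsage()` function from the Node.js core API:

```javascript
// Query the memory usage of the current Node.js process.
// All values returned by process.memoryUsage() are in bytes.
const { rss, heapTotal, heapUsed } = process.memoryUsage();

const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

console.log(`rss:       ${toMB(rss)} MB`);       // resident set size
console.log(`heapTotal: ${toMB(heapTotal)} MB`); // memory reserved for the heap
console.log(`heapUsed:  ${toMB(heapUsed)} MB`);  // heap actually in use
```

Sampling these values on an interval is all it takes to produce the graph shown next.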
And this is what you get if you create a graph of a running Node application.
As we can already see, the heapUsed graph looks odd.
It is very dynamic and seems to follow a sawtooth pattern.
Something seems to take care of freeing memory: the garbage collector. And I have a real-life example of how this works.
The console spits this out.
And now that we have found our problem, we can add this fix. And the weekend is saved.
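The slides presumably show the actual code; purely as an illustration, a typical leak of this kind and a bounded fix might look like the following (all names here are made up):

```javascript
// Hypothetical example of a classic Node.js memory leak: a module-level
// array that grows on every request and is never cleared, so every
// payload stays reachable and the heap grows without bound.
const leaky = [];

function handleRequest(payload) {
  leaky.push(payload); // retains every payload forever
}

// A possible fix: bound the retained data, e.g. keep only the last N entries.
const MAX_ENTRIES = 1000;

function handleRequestFixed(payload, store) {
  store.push(payload);
  if (store.length > MAX_ENTRIES) store.shift(); // drop the oldest entry
  return store;
}
```

With the bounded version, heapUsed flattens out instead of climbing until the process dies.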
My script will watch the memory usage, and if it grows continuously, it will create a heap snapshot.
Event loop latency
The latency of a timer task. The timer should fire at a set interval (e.g. every 200ms). If it instead fires after 250ms, the latency is 50ms.
Work processed latency
The time that passes between adding an asynchronous work item to the threadpool's work queue and a worker thread picking it up.
Event Loop Tick Frequency
The number of event loop cycles during the observation interval.
Event loop tick duration
The time it takes the event loop to complete a loop cycle.
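The first of these metrics can be approximated in plain Node.js; a sketch of a timer-latency monitor (the 200ms interval matches the example above, everything else is an assumption):

```javascript
// Measure event loop (timer) latency: how much later does a timer
// actually fire than the time it was scheduled for?
const INTERVAL = 200; // ms; the timer should tick every 200ms

let expected = Date.now() + INTERVAL;

const timer = setInterval(() => {
  const now = Date.now();
  // A busy event loop delays the callback beyond the scheduled time.
  const latency = Math.max(0, now - expected);
  console.log(`event loop latency: ${latency}ms`);
  expected = now + INTERVAL;
}, INTERVAL);

timer.unref(); // do not keep the process alive just for the monitor
```

If the loop is healthy the reported latency stays near 0ms; slow callbacks push it up, as the next notes explain.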
The event loop is busy processing JS callbacks (hence the increase in timer latency, the decrease in event loop tick frequency, and the increase in tick duration).
User-land code is executed on the event loop thread. That means slow callbacks affect the event loop's tick duration and frequency, which is why a timer task shows latency when the event loop cannot manage to fire the next timer on time.
Again the V8 profiler; again, functionality that is already there in V8.
This will give you the following.
Now to the Netflix latency problem.
The event loop is struggling with async work (e.g. fs.read), which basically means the threadpool is exhausted by long-running or numerous async work items (hence the increase in work item latency).
Automatic Browser Injection for UEM
Eliminate guesswork across the lifecycle
No averages: 100% of all transactions
Low maintenance
Private and Public cloud environments