For DevOps efforts to be a true success, it’s crucial to have real-time performance monitoring and metrics that can be reviewed and shared across teams. By accessing real-time and historical metrics, from architecture and application performance to customer experience and business metrics, you can ensure you are meeting all of your technology and business objectives. Electric Cloud and Dynatrace have partnered to bring tight integrations to your CI/CD and release pipelines with ElectricFlow and Dynatrace AppMon. Dynatrace AppMon’s test automation integration stops bad code in its tracks by inspecting key performance and architectural metrics. And if a deployment makes it into production but shows problems, the ElectricFlow integration allows AppMon to trigger an automatic rollback or other actions so that the system can self-heal. Teammates can collaborate more easily and quickly with shared insights, and leaders benefit from knowing that everyone is aligned and moving toward the same goals. In this webinar, we will give a live demonstration of release automation by pushing code changes through the ElectricFlow pipeline.
+8,000 customers
#1 in Gartner market share
19 of the top twenty retailers, 386 of the Fortune 500, 9 of the ten largest banks
Industries: Consumer Goods, Government, Telco & Media, Travel, Finance, Retail
Confidential, Dynatrace, LLC
Monitoring redefined
Every user, every app, everywhere. AI powered, full stack, automated.
Full lifecycle - development, test, and production
Andi: Why do we need to invest in Shift-Left and Self-Healing?
Andi: talks to the Gartner portion
Anand: talks to the DORA portion
Because DevOps has been a driving factor in transforming IT organizations in recent years
Lead: Anand
Andi talks about employee retention
The driving factors are all clear – but too often it is just “Speed to Market” that is driving the change
Anand leads
Based on the State of DevOps Report, we know that companies are indeed speeding up, but they are falling behind on quality
Andi leads
The biggest challenges make it clear: technology complexity and business-driven change lead to poor quality and an overwhelmed IT organization that is drowning in alerts and struggling with the new complexity
Andi leads
The right way is to MATURE towards “Faster Quality to Market” – and here are 3 Mandatory Steps for you
Anand Starts and Andi brings it home
Continuous Delivery is not only about delivery but also about feedback loops: not only the big feedback loop from Ops/Biz back to Dev, but also the smaller feedback loops within each phase of the pipeline
Promotional Flow
Dev: Execute Performance Unit Tests, do performance and resource profiling, optimize your code before committing it into the Source Repo
CI: Continuous Integration is the vehicle to promote code from Dev to QA. It executes our core tests and typically comes with automated deployments into different test environments
Perf/Test: Continuous Performance Testing allows you to identify performance and scalability regressions for new code commits. It works very well if you can easily spin up load environments and compare build-to-build performance metrics -> we call this the “Performance Signature” of a service or app
CD: Continuous Delivery brings in more automation and orchestration so that we can deploy the tested and verified code changes into Operations
Ops: Monitoring the impact of a new Deployment. All Lights Green? Still meeting SLAs? Deployment Successful?
Biz: Monitoring the impact of new Features/Capabilities: Are they accessible for our end users? Are they being used? What is the impact on User Experience, User Behavior, Conversion Rate, …
Feedback Flow
Biz: Through techniques such as Feature Flagging, A/B Testing, Blue/Green Deployments, … - Biz can independently decide which features to promote or remove depending on the feedback from monitoring. This is the fastest way to react to problems and boost business
Ops: Through monitoring and orchestration, Ops can triage situations that arise in production. Ops might be able to mitigate performance or resource issues by adding more virtual resources, diverting traffic, changing configuration, … - without having to request a new build from Dev. This requires a good architecture and automation strategy. It also requires good and granular monitoring to understand the dependencies between services, applications, processes, and the underlying infrastructure
Perf/Test: Looking at Production Monitoring Data allows Perf/Test to constantly update their load scenarios to better reflect real user behavior in the next set of continuous performance tests
Dev: Take the feedback from all Biz, Ops and Test and have it influence the next product iteration by innovating in areas that end users like, optimize where optimization is essential
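The build-to-build comparison behind these feedback loops can be sketched in a few lines of Python. This is a minimal illustration with hypothetical metric names and thresholds, not a Dynatrace API: a “performance signature” is a set of key metrics, and a build fails the check if any metric degrades beyond a tolerance against the previous build.

```python
# Illustrative "performance signature" check (metric names and the 10%
# tolerance are assumptions for this sketch, not Dynatrace defaults).

BASELINE = {"response_time_ms": 120.0, "failure_rate": 0.01, "cpu_pct": 35.0}
TOLERANCE = 0.10  # allow up to 10% degradation per metric

def violates_signature(new_metrics, baseline=BASELINE, tolerance=TOLERANCE):
    """Return the list of metrics that degraded beyond the tolerance."""
    regressions = []
    for metric, old in baseline.items():
        new = new_metrics[metric]
        if new > old * (1 + tolerance):  # higher is worse for all three metrics
            regressions.append(metric)
    return regressions

# A healthy build vs. a degraded build:
print(violates_signature({"response_time_ms": 118.0, "failure_rate": 0.01, "cpu_pct": 36.0}))  # []
print(violates_signature({"response_time_ms": 190.0, "failure_rate": 0.04, "cpu_pct": 36.0}))  # ['response_time_ms', 'failure_rate']
```

A pipeline stage would run this after each load test and refuse to promote any build for which the returned list is non-empty.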
We want to speed up and get more features out to make our users happy, but we also need to keep costs down. If you just push more code out, and it runs on dynamically provisioned infrastructure from Amazon, Microsoft, or Google, it can get very costly if you don’t look at quality from a performance and resource-consumption perspective.
Ops can feed back: “Hey, we saw something in production: our system is now responding slower, or faster, or all of it is unavailable because we made a mistake.” The business can see the feedback flow saying, “Hey, we just deployed this new feature, but guess what? Nobody’s using it. And we know this thanks to Dynatrace monitoring.”
We believe there are a lot of chances in the promotion flow to empower all these individual roles and people to make better decisions on whether to push a build forward. Developers can run unit performance tests, and within Dynatrace we have features to show them if there are any performance regressions; if Dynatrace says “not good,” then perhaps don’t push it out.
So our point is: there are many options for smaller feedback loops within individual stages, and Dynatrace gives you the data so that you can make the right decisions.
Anand starts the slides and Andi finishes
Implementing a continuous delivery pipeline with feedback loops currently happens for individual projects. The challenge comes when enterprises adopt these new concepts at large scale: many projects, building new microservice architectures, pushing code changes on separate streams through multiple pipelines into production.
Challenge:
Individual microservice (or feature/app) teams push their code changes through the pipeline. Their focus is on their own service. In earlier stages they mock away dependent services so that they can move faster. The production environment (or a production-like environment) is the first environment where all moving pieces are deployed the way they are supposed to work together. Responsible for that environment are the Cloud Ops teams that provide the underlying cloud or PaaS infrastructure for the applications and services that get deployed. The Cloud Ops teams need to make sure that constant new updates from many service teams do not impact overall service quality and do not drive up costs when bad code changes get deployed
Cloud Ops:
They need Monitoring for multi-platform/technology environments: Enterprise Stack, Cloud Stack, Virtualization, Containers, …
Provide “Monitoring as a Service” for each individual service team’s components as a feedback loop for their next development cycle
Understand dependencies between different service teams to optimize deployment, e.g. co-deployment of tightly coupled services, …
Understand resource requirements for each service and how scaling impacts resource consumption of all dependent services -> this is great for capacity planning
Locate and remediate bad deployments to save costs and lower risks
Service Team
Through monitoring from Operations, better understand the real behavior, issues, and resource consumption of their service in combination with all other dependent services
Shift-Left quality and performance by integrating “Monitoring as a Pipeline Feature”. Do not promote builds that show any type of performance regression when testing your services in isolation or in a combined testing environment
Shift-Right metrics by demanding more end-user and operational feedback from Production, e.g. user behavior, failures, resource consumption, … -> let this influence your next iteration and help Operations make better decisions while running your current build in production, e.g. include feature flags to turn features on/off depending on whether they behave well or badly
Business
Understand end-user behavior after deploying new features -> which features are well received, and which updates are negatively impacting user behavior?
Understand the actual running costs of the new features as provided by the Cloud Ops teams
Start experimenting and innovating with A/B Testing, Blue/Green Deployments, … -> service teams can provide the switches to turn certain features on/off, and Cloud Ops provides the granular monitoring required to decide whether a feature is good to promote or better to take out
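The feature switches mentioned above can be as simple as a flag lookup. Here is a minimal in-memory sketch (toy names, not a real flag service) of how Biz or Ops could turn a misbehaving feature off at runtime without requesting a new build from Dev:

```python
# Toy feature-flag store; a real deployment would back this with a
# config service or database so flags can be flipped without a redeploy.
FLAGS = {"new_checkout": True, "beta_search": False}

def is_enabled(feature):
    """Unknown features default to off."""
    return FLAGS.get(feature, False)

def render_checkout():
    # Application code branches on the flag instead of hard-coding the path.
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

# Monitoring shows a conversion-rate drop -> flip the flag off at runtime:
FLAGS["new_checkout"] = False
print(render_checkout())  # legacy checkout flow
```

The same switch is what enables A/B testing: serve the new path to a fraction of users and compare the monitored business metrics for each variant.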
Andi will talk to this slide
Who we are!
Our own DevOps Transformation Story
We understood that monitoring had to change
And that’s really what’s possible with Dynatrace. With Dynatrace you can monitor every user, every app, everywhere; the platform is powered by artificial intelligence, which is leveraged to identify issues and suggest remedies. Dynatrace monitors the full stack, meaning from the network through database and application tiers, into code-level analysis, all the way out to end-user devices and third-party add-ins. The same holds true for in-house-hosted applications as well as private, public, and hybrid clouds, where containerization, hyper-scale, and elastic computing dominate.
And all of this is automated – not only initial set up and instrumentation but problem identification, dashboards, ongoing adaptation to environmental and application changes as well as upgrading the Dynatrace platform itself. All automated.
We’re here to tell you today about Dynatrace, the unified, modern digital performance management platform. It’s a single platform that provides everything you need for all types of applications. Over the next few slides we’ll talk about why we built it.
Andi to talk this slide
Anand
Anand
ElectricFlow Pipeline is initiated with a bad build
Andi
Continuous Performance Testing, or Continuous Performance Validation, is a good pipeline phase to have before deploying into a production environment. It is an environment running under continuous load. New builds of individual services or complete applications get deployed on a regular basis. The question is whether a new version of a service, application, or component shows any degradation in performance, scalability, or resource consumption. If so, it should not be promoted to the next phase before closer examination
Dynatrace automatically understands applications and, more importantly, services. Dynatrace also integrates with testing tools so that traffic on certain services can be associated with the specific test scenarios you run in your continuous performance environment. Based on this information it is possible to see any regressions between builds or different loads. In the example above it is easy to spot that the build from Nov 17 shows a significant performance regression. Instead of allowing this build into production, it is better to look into the differences between the Nov 16 and Nov 17 builds
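The association between load-test traffic and test scenarios typically works by tagging each synthetic request with an identifying HTTP header. The sketch below builds such a header; the `X-dynaTrace-test` name and the TSN/LSN/VU field names follow the AppMon test-tagging convention as I understand it, so verify them against your AppMon version before relying on this.

```python
def build_test_header(test_step, script_name, virtual_user):
    """Assemble the semicolon-separated key=value pairs for one request."""
    fields = {
        "TSN": test_step,         # test step name
        "LSN": script_name,       # load script name
        "VU": str(virtual_user),  # virtual-user id
    }
    return ";".join(f"{key}={value}" for key, value in fields.items())

# A load generator would attach this header to every synthetic request
# so server-side traces can be grouped by scenario and build:
headers = {"X-dynaTrace-test": build_test_header("checkout", "peak-load", 42)}
print(headers["X-dynaTrace-test"])  # TSN=checkout;LSN=peak-load;VU=42
```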
Anand
ElectricFlow’s Gates stop Pipeline from moving forward
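The gate’s decision logic can be sketched as a small function (the helper names are hypothetical stand-ins, not the actual ElectricFlow or AppMon API): the pipeline fetches the monitoring verdict for a build and either promotes it, or stops it and triggers a rollback action so the system can self-heal.

```python
def run_gate(build_id, fetch_verdict, promote, rollback):
    """Promote a build only if monitoring reports it healthy.

    fetch_verdict/promote/rollback are injected callables standing in
    for the real monitoring query and pipeline actions.
    """
    if fetch_verdict(build_id) == "passed":
        promote(build_id)
        return "promoted"
    rollback(build_id)  # self-heal: revert to the last known-good build
    return "stopped"

# Simulate a build that the monitoring side flagged as a regression:
actions = []
result = run_gate(
    "build-117",
    fetch_verdict=lambda b: "failed",
    promote=lambda b: actions.append(("promote", b)),
    rollback=lambda b: actions.append(("rollback", b)),
)
print(result, actions)  # stopped [('rollback', 'build-117')]
```

Keeping the verdict source and the actions injectable is deliberate: the same gate logic can be exercised in tests with stubs and wired to real pipeline steps in production.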