Telefónica Digital 
Barcelona, March 26th, 2014
Architecture Design 
Web Performance Optimization
No instruments 
User Reviews 
Late or no Performance Testing 
No Real User Monitoring 
Reactive Performance Tuning
No tools, no performance dashboard; performance is for sysadmins and operators
Releases are costly and may take several months of work
Manual testing of each release after code freeze
Non-functional requirements are most likely ignored
In production there is no monitoring of the traffic and how it affects the business
User feedback is usually negative and there is no interaction with developers and designers
Application performance directly affects the market's performance
• Continuous Integration 
– Functional testing 
– Automation 
– Monitoring
Continuous Integration for functional testing is already working in nightly builds
Automation reduces time to market for the applications
Monitoring the real user behaviour, not just server healthchecks
Error and risk management
Tuning and bugfixing
Listening to user feedback
The Future 
• Continuous Performance Integration 
– Performance tests integrated in Jenkins 
– Automation of the trend reports 
– Real User Monitoring → real-time feedback
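As a rough illustration of what "performance tests integrated in Jenkins" can look like, here is a minimal Python sketch of a build-gate step: it runs a JMeter plan in non-GUI mode, parses the JTL results file, and fails the build when the 95th percentile or the error rate exceeds a threshold. The plan name, thresholds, and budget figures are assumptions for illustration, not part of the original deck.

```python
# Hypothetical CI gate: run a JMeter plan and fail the build on bad numbers.
# Assumes JMeter is on PATH and writes a CSV-format JTL with the default
# 'elapsed' and 'success' columns; plan name and thresholds are examples.
import csv
import subprocess
import sys

PLAN = "smoke_plan.jmx"          # hypothetical test plan checked into the repo
RESULTS = "results.jtl"
P95_BUDGET_MS = 800              # example acceptance criterion
MAX_ERROR_RATE = 0.01

subprocess.run(["jmeter", "-n", "-t", PLAN, "-l", RESULTS], check=True)

elapsed, errors, total = [], 0, 0
with open(RESULTS, newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        elapsed.append(int(row["elapsed"]))
        if row["success"] != "true":
            errors += 1

elapsed.sort()
p95 = elapsed[max(0, int(len(elapsed) * 0.95) - 1)]
error_rate = errors / total

print(f"p95={p95}ms error_rate={error_rate:.2%}")
if p95 > P95_BUDGET_MS or error_rate > MAX_ERROR_RATE:
    sys.exit(1)   # a non-zero exit marks the Jenkins build as failed
```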
AGILE PERFORMANCE 
Testing 
Proactive performance testing for each release. Load tests will discover the flaws and bottlenecks the application or the system may have in a production environment.
Availability 
High availability of the application and the system is the goal of a ready-for-service status. The application and the systems must be stable, efficient, and dimensioned according to usage. Knowing the production scenario in order to decide how to orient the performance tests is the most successful starting hypothesis.
Velocity 
Not only response time is important; intelligent use of resources is needed to grow in the future. Efficiency is understood as the capacity to dispose of system resources to achieve an objective, in our case response time and uptime.
Scalability 
Being able to grow with the needs of the market, users, and new technologies is one of the questions a performance engineer will have to answer.
Workloads 
"A performance test is easy. It is easy to design unrealistic scenarios. It is easy to collect irrelevant data. Even with a good scenario and appropriate data, it is easy to use an incorrect statistical method to analyse the results." 
- Alberto Savoia
PreProduction 
One of the most important parts of a good performance test design is to have an appropriate load test environment, as similar as possible to production at all levels: networking, systems, and application architecture.
Monitoring 
Scenarios 
Knowing the production environment is key to making good decisions about how to design a performance test plan. Designing a plan according to real traffic and usage of the platform is key to creating validation criteria.
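One hedged way to ground a test plan in real traffic, as this slide suggests, is to mine the access logs for the most frequently executed requests. The sketch below assumes an Apache/Nginx-style log file; the file name and field layout are assumptions.

```python
# Rank URL paths by frequency in an access log to pick key test scenarios.
# Assumes a combined log format where the request line is the quoted
# 'METHOD /path HTTP/1.x' field; the file name is an example.
from collections import Counter
import re

REQUEST = re.compile(r'"[A-Z]+ (\S+) HTTP/')

hits = Counter()
with open("access.log") as log:
    for line in log:
        m = REQUEST.search(line)
        if m:
            hits[m.group(1)] += 1

for path, count in hits.most_common(10):
    print(f"{count:8d}  {path}")
```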
Performance Teams 
Developers, DBAs, QAs, DevOps, product owners... the whole team is part of performance.
Tools 
There are many tools available on the market for load testing and monitoring. An effort in evaluating these tools will benefit the execution of the tests in the long term. However, the most important part is how the reports are generated and who is going to interpret them.
Real User Monitoring 
Not only unique users or session times are important. How users work with the application, and their psychology, are key to understanding the results and how they affect the business.
Best Practices 
Keep it simple, use cache wisely, invest in testing and monitoring, and create a culture of performance across the whole organization.
Users vs. requests 
• Yeah right … but how many users?
SCENARIOS: Identify the scenarios that are most commonly executed or most resource-intensive
METRICS: Only well-selected metrics that are analyzed correctly and contextually provide 
information of value.
WORKLOAD MODEL: average user session duration. It is important to define the load levels that will translate into concurrent usage, overlapping users, or user sessions per second.
THINK TIMES: user think times → the pause between pages during a user session, depending on the user type.
USER TYPES: identify the user … new, revisiting, or both.
PERFORMANCE ACCEPTANCE CRITERIA: response time, system load, throughput ...
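The workload bullets above can be tied together with Little's Law (not named in the deck, but the standard way to derive concurrency): concurrent users ≈ session arrival rate × average session duration, with think times included in the duration. A small sketch, with all figures invented for illustration:

```python
# Back-of-the-envelope workload model using Little's Law:
#   concurrent_users = arrival_rate * avg_session_duration
# All numbers below are illustrative assumptions, not measured values.

sessions_per_hour = 36_000        # e.g. from log analysis or marketing data
pages_per_session = 8
avg_think_time_s = 15.0           # pause between pages for a typical user
avg_page_time_s = 2.0             # server + render time per page

# Average session duration: page times plus the think-time gaps between pages.
session_duration_s = (pages_per_session * avg_page_time_s
                      + (pages_per_session - 1) * avg_think_time_s)

arrival_rate = sessions_per_hour / 3600.0          # sessions per second
concurrent_users = arrival_rate * session_duration_s

print(f"session duration ≈ {session_duration_s:.0f} s")
print(f"arrival rate     ≈ {arrival_rate:.1f} sessions/s")
print(f"concurrent users ≈ {concurrent_users:.0f}")
```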
Tuning 
Innovation 
Technology develops at high speed. To bring out the best of our product, business and technology need to evolve hand in hand. Investing in performance research is crucial to keep up with other internet competitors.
Establish the Funnel 
Determine the session navigation 
Establish the Session time 
Test Strategies 
Load Test: Stepped/Increasing 
Goal: Determine the maximum throughput attainable without errors 
and loss of responsiveness 
Model: threads 
Metrics: Throughput 
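A minimal sketch of the stepping strategy in plain Python, assuming a hypothetical target URL: the worker-thread count is raised in steps and the achieved throughput is recorded per step, so the knee of the curve (throughput stops growing, errors appear) marks the maximum.

```python
# Stepped load test sketch: raise the worker-thread count step by step and
# record the throughput achieved at each step. The URL, the step sizes and
# the step duration are illustrative assumptions.
import threading
import time
import urllib.request

URL = "http://test-env.example.com/health"   # hypothetical endpoint
STEPS = [5, 10, 20, 40]                      # thread counts per step
STEP_SECONDS = 30

def worker(stop, results):
    ok = err = 0
    while not stop.is_set():
        try:
            urllib.request.urlopen(URL, timeout=5).read()
            ok += 1
        except Exception:
            err += 1
    results.append((ok, err))                # list.append is thread-safe

for n in STEPS:
    stop, results = threading.Event(), []
    threads = [threading.Thread(target=worker, args=(stop, results))
               for _ in range(n)]
    for t in threads:
        t.start()
    time.sleep(STEP_SECONDS)
    stop.set()
    for t in threads:
        t.join()
    ok = sum(r[0] for r in results)
    err = sum(r[1] for r in results)
    print(f"{n:3d} threads -> {ok / STEP_SECONDS:7.1f} req/s, {err} errors")
```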
Test Strategies 
Load Test: Variance/Uniform 
Goal: determine the maximum throughput for a fixed number of threads 
in a fixed time span 
Model: Threads/throughput 
Metrics: Max Number of threads, Max throughput 
Test Strategies 
Load Test: Soak/Avalanche 
Goal: determine the limits and bottlenecks of the system during a peak 
load. 
Model: Threads/throughput 
Metrics: Time to Manage the Peak of Requests 
Test Strategies 
Load Test: Spike 
Goal: determine the limits and bottlenecks of the system during a 
sudden peak arriving on top of an existing load 
Model: throughput 
Metrics: Time to Manage the Peak of Requests 
Test Strategies 
Load Test: Endurance/Robustness 
Goal: determine whether the system loses performance during a long 
period of stationary load 
Model: Throughput 
Metrics: Optimized Throughput 
Understand the Project Vision and Context 
Project Vision 
Project Context 
Understand the system 
Understand the Project Environment 
Understand the Performance Build Schedule 
Improved way of working 
Improve performance unit testing by pairing performance testers with 
developers. 
Assess and configure new hardware by pairing performance testers 
with administrators. 
Evaluate algorithm efficiency 
Monitor resource usage trends, and use profilers in dev/qa 
environments 
Measure response times in production and trends in dev/qa 
environments 
Collect data for scalability and capacity planning. 
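The profiler item in the list above can be illustrated with the Python standard-library profiler; handle_request and the iteration count below are invented stand-ins for the code path under study.

```python
# Minimal profiling sketch for dev/qa environments using the standard
# library profiler; handle_request is a hypothetical function under test.
import cProfile
import pstats

def handle_request():
    # stand-in for the code path being profiled
    sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    handle_request()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)   # top 5 hotspots
```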
Configure the Test Environment 
Set up an isolated network environment 
Procure hardware as similar as possible to production, or at least 
keep the ratio amongst all elements 
Coordinate a bank of IPs for IP spoofing 
Use monitoring tools and operating systems like those in production 
Use load generation tools, or develop your own 
Identify and Coordinate Tasks 
Work item execution method 
Specifically what data will be collected 
Specifically how that data will be collected 
Who will assist, how, and when 
Sequence of work items by priority 
Execute Task(s) 
Keys to Conducting a Performance-Testing Task 
• Analyze results immediately and revise the plan accordingly. 
• Work closely with the team or sub-team that is most relevant to the task. 
• Communicate frequently and openly across the team. 
• Record results and significant findings. 
• Record other data needed to repeat the test later. 
• Revisit performance-testing priorities after no more than two days. 
Analyze Results and Report 
Pause periodically to consolidate results 
Conduct trend analysis 
Create stakeholder reports 
Pair with developers, architects, and administrators to analyze results 
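For the consolidate-and-trend step, a hedged sketch: given one results CSV per build (the file naming pattern and the 'elapsed' column are assumptions), compute p50/p95 per build and flag a regression when p95 degrades against the previous build.

```python
# Trend analysis sketch across builds: compute p50/p95 per results file and
# flag regressions. File naming (results_build*.csv) and the 'elapsed'
# column are assumptions for illustration.
import csv
import glob

def percentile(sorted_vals, q):
    idx = max(0, int(len(sorted_vals) * q) - 1)
    return sorted_vals[idx]

previous_p95 = None
for path in sorted(glob.glob("results_build*.csv")):
    with open(path, newline="") as f:
        elapsed = sorted(int(row["elapsed"]) for row in csv.DictReader(f))
    p50, p95 = percentile(elapsed, 0.50), percentile(elapsed, 0.95)
    trend = ""
    if previous_p95 is not None and p95 > previous_p95 * 1.10:
        trend = "  <-- p95 regressed >10% vs previous build"
    print(f"{path}: p50={p50}ms p95={p95}ms{trend}")
    previous_p95 = p95
```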
But … what are you actually doing day by day?
Case Study 
• HTML5 trends using Yslow and Firebug with 
Selenium
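Where the deck mentions front-end checks with YSlow and Firebug driven by Selenium, a minimal present-day equivalent is to pull the browser's Navigation Timing figures through Selenium itself. The sketch below assumes the Python Selenium bindings with a local Firefox/geckodriver; the URL is a placeholder for the page under test.

```python
# Pull Navigation Timing metrics from the browser through Selenium.
# Assumes the selenium package and a local Firefox/geckodriver; the URL
# is a placeholder.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("http://test-env.example.com/")   # hypothetical page under test
    t = driver.execute_script("return window.performance.timing.toJSON()")
    start = t["navigationStart"]
    print("time to first byte :", t["responseStart"] - start, "ms")
    print("DOM content loaded :", t["domContentLoadedEventEnd"] - start, "ms")
    print("full page load     :", t["loadEventEnd"] - start, "ms")
finally:
    driver.quit()
```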
• API testing using JMeter
• BackEnd services using LoadUI reusing 
functional tests from soapUI
• Streaming Content
Branches comparison
Workshop for newcomers
Speaker notes
1. Performance is reactive and handled only by the system administrators, remaining a dark side of the deployment and exploitation of the platform.
2. Changes are delayed into production; sometimes performance tweaks never go back to development or QA environments. Those tweaks are done directly on the production environment with zero testing.
3. Manual testing with no automation leads to no concurrent testing, and even less performance and load testing.
4. Requirements such as how long it takes for the home page to load are treated as irrelevant and not taken into consideration. The systems are overdimensioned, which covers for all the bad software design.
5. As there is no monitoring of performance or the real user experience, the only information is scattered around marketing KPIs and business revenue.
6. Automated tests for continuous functional testing, regression tests, and smoke tests in each build. Unit testing.
  7. Builds are faster and they get to LIVE even faster.
8. Validate that the application is able to handle X users per hour. Validate that users will experience response times of Y seconds or less 95 percent of the time. Validate that performance tests predict production performance within a +/- 10 percent range of variance. Investigate hardware and software as they become available, to detect significant performance issues early. The performance team, developers, and administrators work together with minimal supervision to tune and determine the capacity of the architecture. Conduct performance testing within the existing project duration and budget. Determine the most likely failure modes for the application under higher-than-expected load conditions. Determine appropriate system configuration settings for desired performance characteristics.
9. Performance testing is indispensable for managing certain significant business risks. For example, if your Web site cannot handle the volume of traffic it receives, your customers will shop somewhere else. Beyond identifying the obvious risks, performance testing can be a useful way of detecting many other potential problems. While performance testing does not replace other types of testing, it can reveal information relevant to usability, functionality, security, and corporate image that is difficult to obtain in other ways. Many businesses and performance testers find it valuable to think of the risks that performance testing can address in terms of three categories: speed, scalability, and stability.
Summary matrix of risks addressed by performance testing types:
Capacity: Is system capacity meeting business volume under both normal and peak load conditions?
Component: Is this component meeting expectations? Is it reasonably well optimized? Is the observed performance issue caused by this component?
Endurance: Will performance be consistent over time? Are there slowly growing problems that have not yet been detected? Is there external interference that was not accounted for?
Investigation: Which way is performance trending over time? To what should I compare future tests?
Load: How many users can the application handle before undesirable behavior occurs under a particular workload? How much data can my database/file server handle? Are the network components adequate?
Smoke: Is this build/configuration ready for additional performance testing? What type of performance testing should I conduct next? Does this build exhibit better or worse performance than the last one?
Spike: What happens if the production load exceeds the anticipated peak load? What kinds of failures should we plan for? What indicators should we look for?
Stress: What happens if the production load exceeds the anticipated load? What kinds of failures should we plan for? What indicators should we look for in order to intervene prior to failure?
Unit: Is this segment of code reasonably efficient? Did I stay within my performance budgets? Is this code performing as anticipated under load?
Validation: Does the application meet the goals and requirements? Is this version faster or slower than the last one? Will I be in violation of my contract/Service Level Agreement (SLA) if I release?
10. Preproduction is not necessarily a copy of the production environment; that would be the ideal world. Usually we deal with a down-scaled copy that (and this is important) keeps the ratio between the different elements of the complex system.
11. Validation criteria such as machine resources and system scalability are key to delivering a good product. Three validation criteria: error rate, response time, system resources.
12. From the developers, using decorators or annotations in Java and tools in .NET to measure elapsed execution and compilation time, to the deployment team and ops who take care of performance once the product is live: everyone is entitled to participate in performance.
13. For the vast majority of the development life cycle, performance testing is about collecting useful information to enhance performance through design, architecture, and development as it happens. Comparisons against the end-user-focused requirements and goals only have meaning for customer review releases or production release candidates. The rest of the time, you are looking for trends and obvious problems, not pass/fail validation. Make use of existing unit-testing code for performance testing at the component level: doing so is quick and easy, helps the developers detect trends in performance, and can make a powerful smoke test. Performance testing is one of the single biggest catalysts of significant changes in architecture, code, hardware, and environments. Use this to your advantage by making observed performance issues highly visible across the entire team. Simply reporting on performance every day or two is not enough; the team needs to read, understand, and react to the reports, or else the performance testing loses much of its value.
14. Identify the scenarios that are most commonly executed or most resource-intensive; these will be the key scenarios used for load testing. For example, in an e-commerce application, browsing a catalog may be the most commonly executed scenario, whereas placing an order may be the most resource-intensive scenario because it accesses the database. The most commonly executed scenarios for an existing Web application can be determined by examining the log files; for a new Web application they can be obtained from market research, historical data, market trends, and so on. Resource-intensive scenarios can be identified by using design documents or the actual code implementation. The primary resources are processor, memory, disk I/O, and network I/O.
Identify your application scenarios that are important from a performance perspective. If you have documented use cases or user stories, use them to help you define your scenarios. Key scenarios include critical scenarios and significant scenarios.
Critical scenarios are those that have specific performance expectations or requirements. Examples include scenarios covered by SLAs or those that have specific performance objectives.
Significant scenarios do not have specific performance objectives such as a response time goal, but they may impact other critical scenarios. To help identify significant scenarios, look for scenarios that run in parallel to a performance-critical scenario, are frequently executed, account for a high percentage of system use, or consume significant system resources.
Do not ignore your significant scenarios: they can influence whether your critical scenarios meet their performance objectives. Also consider how your system will behave if different significant or critical scenarios are run concurrently by different users. This "parallel integration" often drives key decisions about your application's units of work. For example, to keep search response brisk, you might need to commit orders one line item at a time.
15. Three possible points of view: user, system, and operator.
Define questions related to your application performance that can be easily tested. For example, what is the checkout response time when placing an order? How many orders are placed in a minute? These questions have definite answers.
With the answers to these questions, determine quality goals for comparison against external expectations. For example, checkout response time should be 30 seconds, and a maximum of 10 orders should be placed in a minute. The answers are based on market research, historical data, market trends, and so on.
Identify the metrics. Using your list of performance-related questions and answers, identify the metrics that provide information related to those questions and answers.
Identify supporting metrics. Using the same approach, you can identify lower-level metrics that focus on measuring performance and identifying bottlenecks in the system. When identifying low-level metrics, most teams find it valuable to determine a baseline for those metrics under single-user and/or normal load conditions. This helps you determine the acceptable load levels for your application. Baseline values help you analyze your application performance at varying load levels and serve as a starting point for trend analysis across builds or releases.
Reevaluate the metrics to be collected regularly. Goals, priorities, risks, and current issues are bound to change over the course of a project, and with each change different metrics may provide more value than the ones previously identified.
To evaluate the performance of your application in more detail and to identify potential bottlenecks, it is frequently useful to monitor metrics in the following categories:
Network-specific metrics: information about the overall health and efficiency of your network, including routers, switches, and gateways.
System-related metrics: resource utilization on your server (processor, memory, disk I/O, and network I/O).
Platform-specific metrics: related to the software that hosts your application, such as the Microsoft .NET Framework common language runtime (CLR) and ASP.NET-related metrics.
Application-specific metrics: custom performance counters inserted in your application code to monitor application health and identify performance issues, for example the number of concurrent threads waiting to acquire a particular lock, or the number of requests queued to make an outbound call to a Web service.
Service-level metrics: overall application throughput and latency, possibly tied to specific business scenarios.
Business metrics: indicators of business-related information, such as the number of orders placed in a given timeframe.
Step 6 – Allocate Budget. Spread your budget (determined in Step 4, "Identify Budget") across your processing steps (determined in Step 5, "Identify Processing Steps") to meet your performance objectives. You need to consider execution time and resource utilization. Some of the budget may apply to only one processing step, some may apply to the scenario, and some may apply across scenarios.
Assigning execution time to steps: if you do not know how much time to assign, simply divide the total time equally between the steps. At this point it is not important for the values to be precise, because the budget will be reassessed after measuring actual times, but it is important to have an idea of the values. Do not insist on perfection, but aim for a reasonable degree of confidence that you are on track. Where you do not know execution times, spread the time evenly and see where there might be problems or tension. If dividing the budget shows that each step has ample time, there is no need to examine it further; for the steps that look risky, conduct some experiments (for example, with prototypes) to verify that what you need to do is possible, and then proceed. Note that one or more of your steps may have a fixed time: for example, a database call that you know will not complete in less than 3 seconds. Other times are variable. The fixed and variable costs must be less than or equal to the allocated budget for the scenario.
Assigning resource utilization requirements: know the cost of your materials (for example, what technology X costs in comparison to technology Y); know the budget allocated for hardware, which defines the total resources at your disposal; know the hardware systems already in place; and know your application functionality (for example, heavy XML document processing may require more CPU, chatty database access or Web service communication may require more network bandwidth, and large file uploads may require more disk I/O).
16. A user scenario is defined as a navigational path, including intermediate steps or activities, taken by the user to complete a task. This can also be thought of as a user session. A user will typically pause between pages during a session; this is known as user delay or think time. A session will have an average duration when viewed across multiple users. It is important to account for this when defining the load levels that will translate into concurrent usage, overlapping users, or user sessions per unit of time. Not all scenarios can be performed by a new user, a returning user, or either; know who you expect your primary users to be and test accordingly.
Step 2 – Identify Workload. Workload is usually derived from marketing data. It includes total users, concurrently active users, data volumes, and transaction volumes and mix. For performance modeling, you need to identify how this workload applies to an individual scenario. Example requirements: you might need to support 100 concurrent users browsing, or 10 concurrent users placing orders. Note: concurrent users are those that hit a Web site at exactly the same moment; simultaneous users are those who have active connections to the same site.
17. Step 4 – Identify Budget. Budgets are your constraints; for example, what is the longest acceptable amount of time an operation should take to complete, beyond which your application fails to meet its performance objectives? Your budget is usually specified in terms of:
• Execution time. Your execution time constraints determine the maximum amount of time that particular operations can take.
• Resource utilization. Resource utilization requirements define the threshold utilization levels for available resources. For example, you might have a peak processor utilization limit of 75 percent, and memory consumption must not exceed 50 MB. Common resources to consider are CPU, memory, network I/O, and disk I/O (a sketch of checking these thresholds follows this slide).
Additional Considerations. Execution time and resource utilization are helpful in the context of your performance objectives, but budget has several other dimensions you may be subject to:
• Network. Considerations such as available bandwidth.
• Hardware. Items such as servers, memory, and CPUs.
• Resource dependencies. Items such as the number of available database connections and Web service connections.
• Shared resources. Items such as the amount of bandwidth you have, the amount of CPU you get if you share a server with other applications, and the amount of memory you get.
• Project resources. From a project perspective, budget is also a constraint in terms of time and cost.
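A minimal sketch of checking measured utilization against the thresholds named above (75 percent peak CPU, 50 MB memory); the metric names and sampled values are hypothetical.

    # Budget thresholds taken from the slide.
    BUDGET = {"cpu_percent": 75.0, "memory_mb": 50.0}

    def over_budget(samples, budget=BUDGET):
        """Return each metric whose peak observed value breaks budget."""
        return {metric: max(values)
                for metric, values in samples.items()
                if metric in budget and max(values) > budget[metric]}

    samples = {"cpu_percent": [40.2, 81.5, 62.0], "memory_mb": [31.0, 44.7]}
    print(over_budget(samples))   # -> {'cpu_percent': 81.5}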
18. Identifying performance acceptance criteria is most valuable when initiated early in the application's development life cycle. It is frequently valuable to record the acceptance criteria and store them in a place and format available to the entire team for review and comment. Criteria are typically determined by balancing your business, industry, technology, competitive, and user requirements. Test objectives frequently include the following:
• Response time. For example, the product catalog must be displayed in less than 3 seconds.
• Throughput. For example, the system must support 100 transactions per second.
• Resource utilization. A frequently overlooked aspect: the amount of processor, memory, disk I/O, and network I/O your application consumes.
• Maximum user load. Determines how many users can run on a specific hardware configuration.
• Business-related metrics. Mapped to business volume at normal and peak values; for example, the number of orders or Help desk calls handled at a given time.
Step 3 – Identify Performance Objectives. For each scenario identified in Step 1, write down the performance objectives. These are determined by your business requirements and usually cover the same response time, throughput, and resource utilization dimensions listed above. Consider the following when establishing them:
• Workload requirements.
• Service level agreements.
• Response times.
• Projected growth.
• Lifetime of your application.
For projected growth, consider whether your design will meet your needs in six months' time, or one year from now. If the application has a lifetime of only six months, are you prepared to trade some extensibility for performance? If it is likely to have a long lifetime, what performance are you willing to trade for maintainability? (A sketch of asserting these objectives automatically follows this slide.)
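Objectives like these earn their keep when they can fail a build automatically. Below is a sketch asserting the two example objectives from this slide against measured results; the measurements themselves are hypothetical.

    def verify_objectives(response_times_s, transactions, duration_s):
        """Assert: catalog displayed in < 3 s; >= 100 transactions/s."""
        worst = max(response_times_s)
        throughput = transactions / duration_s
        assert worst < 3.0, f"catalog took {worst:.2f} s (objective < 3 s)"
        assert throughput >= 100, f"{throughput:.0f} tps (objective >= 100)"

    # Hypothetical results from a 10-minute test run:
    verify_objectives(response_times_s=[1.2, 2.4, 1.8],
                      transactions=65_000, duration_s=600)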
19. If the site is unavailable or slow, users will shop at another e-commerce site. There is a direct relationship between abandonment rates and performance problems in e-commerce.
20. The test team's initial understanding of the system under test, the project environment, the motivation behind the project, and the performance build schedule can often be completed during a work session involving the performance specialist, the lead developer, and the project manager (if you also have a tentative project schedule).
Project Vision. Because the features, implementation, architecture, timeline, and environments are likely to change over time, you should revisit the vision regularly, as it has the potential to change as well.
Project Context. The factors that are, or may become, relevant to achieving the project vision, such as client expectations, budget, timeline, and project environment.
Understanding the System. Understanding the system you are testing involves becoming familiar with the system's intent, what is currently known or assumed about its hardware and software architecture, and the available information about the completed system's customer or user.
Understanding the Project Environment. It is most important to understand the team's organization, operation, and communication techniques. Agile teams tend not to use long-lasting documents and briefings as their management and communication methods; instead, they opt for daily stand-ups, story cards, and interactive discussions. Failing to understand these methods at the beginning of a project can put performance testing behind before it even begins.
Understand the Performance Build Schedule. At this stage, the project schedule makes its entrance. Someone or something must communicate the anticipated sequence of deliveries, features, and/or hardware implementations that relate to the performance success criteria. Because you are not creating a performance test plan at this time, do not concern yourself with dates or resources. Instead, attend to the anticipated sequencing of performance builds, a rough estimate of their contents, and an estimate of how much time to expect between builds. The performance builds most likely to interest you relate to hardware components, supporting software, and application functionality becoming available for investigation.
26. Success Criteria:
• Validate that the application can handle X users per hour.
• Validate that users will experience response times of Y seconds or less 95 percent of the time (a percentile check is sketched after this list).
• Validate that performance tests predict production performance within a +/- 10 percent range of variance.
• Investigate hardware and software as they become available, to detect significant performance issues early.
• The performance team, developers, and administrators work together with minimal supervision to tune and determine the capacity of the architecture.
• Conduct performance testing within the existing project duration and budget.
• Determine the most likely failure modes for the application under higher-than-expected load conditions.
• Determine appropriate system configuration settings for the desired performance characteristics.
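The "Y seconds or less 95 percent of the time" criterion is a percentile check. A minimal sketch using the nearest-rank method, with sample data invented for illustration:

    import math

    def percentile(values, pct):
        """Nearest-rank percentile: the smallest sample at or below
        which pct percent of all samples fall."""
        ordered = sorted(values)
        rank = math.ceil(pct / 100.0 * len(ordered))
        return ordered[rank - 1]

    def meets_criterion(response_times_s, y_seconds):
        return percentile(response_times_s, 95) <= y_seconds

    times = [0.8, 1.1, 0.9, 1.4, 3.9, 1.0, 1.2, 0.7, 1.3, 1.1]
    print(meets_criterion(times, y_seconds=2.0))
    # -> False: with only 10 samples, the 3.9 s outlier is itself
    #    the 95th percentile, so the criterion fails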
27. PC client, set-top box, smart TVs, and HTML5 portals for Android and iOS.