This document provides an overview of various performance testing tools and techniques including:
- LoadRunner - Functions for correlation, exiting iterations, string manipulation, important LoadRunner functions, runtime options, recording options, and common issues/solutions.
- Neoload - Components like forks for parallel execution, adding pacing and think time, populations, duration and load variation policies, and common issues.
- JMeter - Recording, components hierarchy, configuration, running in non-GUI mode, distributed testing, and common issues.
- Perfmon for system performance monitoring using counters, data collector sets, and generating performance reports.
- JVisualVM for monitoring Java applications via tabs for overview, monitoring, threads, and sampling.
1. Gearing up for performance testing and engineering jobs? Then you should go through the below at least once.
--Pratik V. Mohite (freelancer, testing tool developer, and integration consultant)
--13pratik.mohite@gmail.com
--+91-9535462495
1. LoadRunner
a. Correlation function
i. web_reg_save_param("param", "LB/IC=", "RB/IC=", LAST);
ii. web_reg_save_param_regexp("ParamName=cPar", "RegExp=abc(n)(s)(.+?)xyz", "Group=3", "Ordinal=All", LAST);
iii. wildcards
1. . * ?
2. escape characters
b. Exit iteration functions
i. lr_exit(int continuation_option, int exit_status);
1. Continuation option:
a. LR_EXIT_VUSER
b. LR_EXIT_ACTION_AND_CONTINUE
c. LR_EXIT_MAIN_ITERATION_AND_CONTINUE
d. LR_EXIT_ITERATION_AND_CONTINUE
e. LR_EXIT_VUSER_AFTER_ITERATION
f. LR_EXIT_VUSER_AFTER_ACTION
2. Exit status
a. LR_PASS
b. LR_FAIL
c. LR_AUTO
c. String manipulation functions
i. char *strtok(char *str, const char *delim);
ii. char *strcat(char *dest, const char *src);
iii. char *strcpy(char *dest, const char *src);
iv. size_t strlen(const char *str);
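Since LoadRunner scripts are plain C, these functions behave exactly as in standard C. A minimal sketch (plain C, outside LoadRunner) showing how strtok, strcat, strcpy, and strlen combine:

```c
#include <stdio.h>
#include <string.h>

/* Count the tokens in `src`, split on `delim`. strtok modifies its
   input, so we work on a local copy made with strcpy. */
int count_tokens(const char *src, const char *delim) {
    char buf[256];
    strcpy(buf, src);                 /* strcpy: duplicate the input   */
    int count = 0;
    char *token = strtok(buf, delim); /* first call takes the string   */
    while (token != NULL) {
        count++;
        token = strtok(NULL, delim);  /* later calls pass NULL         */
    }
    return count;
}

/* Join two strings into `dest` (which must be large enough). */
size_t join(char *dest, const char *a, const char *b) {
    strcpy(dest, a);
    strcat(dest, b);        /* strcat appends after the existing '\0' */
    return strlen(dest);    /* strlen: length without the terminator  */
}
```

For example, count_tokens("a,b,c", ",") yields 3, and join(buf, "load", "runner") fills buf with "loadrunner" and returns 10.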
d. Important LoadRunner functions
i. lr_output_message("%s", lr_eval_string("{cVar}"));
ii. lr_eval_string(): evaluates a parameter string
iii. web_text_find("Text=value", LAST); for a validation/text check
iv. lr_save_int(), lr_save_string()
e. Runtime options
i. Run logic:
1. to set the number of iterations
2. and to assign a % block to each action
ii. Pacing
1. After the previous iteration ends:
a. the time you want to stay idle after the iteration ends
2. At random intervals:
a. the pacing interval includes the total action time + the remaining time from the given pacing value
b. e.g. if the action takes 5 sec and the pacing you have provided is 8 sec, then 3 sec will be utilized as idle time between iterations
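The arithmetic in that example (8 s pacing, 5 s action leaves 3 s idle) can be sketched as a small helper; plain C, illustrative only:

```c
/* Idle time inserted between iterations for interval-based pacing: the
   pacing interval covers the action time, and any remainder is spent
   idle. If the action overruns the interval, the next iteration starts
   immediately, so there is no idle time. */
double pacing_idle_time(double pacing_interval_s, double action_time_s) {
    double idle = pacing_interval_s - action_time_s;
    return idle > 0.0 ? idle : 0.0;
}
```

With the values from the note, pacing_idle_time(8.0, 5.0) gives the 3 seconds of idle time.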
iii. Run as process vs run as thread
iv. Browser emulation
1. simulate a new user on each iteration
v. internet protocol preference options
1. http request connect timeout
2. step download timeout
vi. Download filter: if you want to exclude certain host requests
vii. content check :
f. Recording options
i. http://www.softwarehour.com/loadrunner/recording-options-loadrunner-illustrations/
ii. recording
1. HTML vs URL mode
2. Rules: enable correlation rules by explicitly adding new ones or using existing ones
a. framework rules: this is considered part of dynamic correlation, where you add a list of correlation rules that get recognized by Design Studio
iii. Network: mapping and filtering
1. capture level
a. Socket: can be used for any kind of application (technology); it is LR's inbuilt mechanism, which uses the SOCKS mechanism
b. WinInet: uses the Windows packet capturing mechanism
i. most of the time works for .NET-based applications
ii. higher resource utilization
g. Parametrization
i. data assignment method
1. sequential
2. random
3. unique
ii. Update method
1. each iteration
2. each occurrence
3. once
iii. file:
h. How to execute a number of requests in parallel
i. wrap them between web_concurrent_start(NULL); and web_concurrent_end(NULL);
i. LoadRunner general issues and solutions
i. SSL error
1. web_set_sockets_option("SSL_VERSION", "TLS")
2. web_set_sockets_option("SSL_VERSION", "2 & 3")
ii. No events are recorded
1. Port mapping > capture level > socket-level data > SSL 2 & 3 or TLS 1.0
2. try switching the browser
iii. Specific events are not recorded
1. Locate the following registry key:
a. [HKEY_CURRENT_USER\Software\Mercury Interactive\Networking\Multi Settings\QTWeb\Recording]
2. Add the following string value to it:
a. "GenerateApiFuncForCustomHttpStatus"="301"
iv. During recording, the recorded application shows an error message about a wrong server certificate
1. The LoadRunner Certificate Authority (CA) file should be added to the machine's "Trusted Root Certification Authorities" store
v. The browser crashes during recording when using the Ajax Click and Script protocol
1. Go to the <LR_folder>\dat\protocols folder and open the WebAjax.lrp file
2. Comment out the following line by putting a semicolon (';') in front of it: DllGetClassObject:jscript.dll=DllGetClassObjectHook:ajax_hooks.dll
vi. SAP: the LoadRunner agent failed to connect to the Controller
1. LoadRunner agent machine: open the LoadRunner Agent Configuration and turn off "Enable Terminal Services"
2. Controller machine: under Load Generator properties, turn on "Enable Terminal Services"
vii. "server name" has shut down the connection prematurely
1. To resolve this, you can try adding the command below:
a. web_set_sockets_option("MAX_CONNECTIONS_PER_HOST", "1")
2. if the issue still exists:
a. web_set_sockets_option("IGNORE_PREMATURE_SHUTDOWN", "1")
viii. https://softwaretesttips.com/2010/11/17/troubleshooting-guide-for-problems-with-loadrunner-web-recording/
ix. Flex protocol
1. https://easyloadrunner.blogspot.ie/2013/11/common-problems-in-flex-applications.html
j. Controller
i. Basic schedule vs real-world schedule
ii. Schedule by scenario vs schedule by group
iii. How to add monitors
k. Analyzer
i. group by scripts
ii. overview of summary report
iii. how to compare two reports
iv. check runtime setting of executed run
v. what values can be highlighted in the final test report
2. NeoLoad
a. what is the use of fork: to execute requests in parallel
b. how to add pacing
i. how itworks
c. how to add think time
d. use of population
e. population advanced parameters
f. duration policy
i. no limit
ii. by iteration
iii. by time
g. load variation policy
i. constant
ii. ramp up
iii. peaks
iv. custom
h. add monitors
i. Design > Monitors > New monitor > machine IP > list of attributes to monitor
i. general issues
i. Access is denied to config.zip
1. in the NeoLoad project folder, rename "config.zip.<date-time>.bak" to config.zip
2. if the issue still persists, open the script and save it with another name
ii. not able to capture video fragments
1. identify the video packet abbreviation (such as mpeg, flv, f4f) and add this under Project Settings > Media Streaming
iii. certificate issue
1. in the NeoLoad installation folder, find the conf folder; the certificate is present under it
iv. if you are not able to access the RTMP protocol, or the recording process takes a lot of time:
1. Close all instances of the intended recording browser
2. if the issue still exists, run the tool with admin access
3. if the issue still exists, change the compatibility settings of the tool w.r.t. the OS
3. JMeter
a. recording
i. WorkBench > Non-Test Elements > HTTP(S) Test Script Recorder
1. target controller
2. sampler
3. port
b. all components hierarchy
i. logic controller
1. if controller
2. loop controller
3. transaction controller
4. throughput controller
a. used when you want to configure pacing, or want to emulate a required number of transactions based on time
ii. configelement
1. CSV Data Set Config
a. filename
b. variable names (comma-delimited)
c. delimiter
2. cookiemanager
3. cachemanager
4. HTTP Request Defaults
iii. timer
1. constanttimer
2. Gaussian randomtimer
iv. pre processor
1. HTTP URL Re-writing Modifier
v. post processor
1. BeanShell PostProcessor
2. Regular Expression Extractor
a. regular expression
b. template
c. match no. (0 picks a random match)
d. refName_matchNr holds the total match count
vi. Samplers
1. HTTP request
a. servers
b. path
c. parameters
d. connect timeout
2. java request
3. jdbc request
a. query type
b. query
4. SOAP/XML-RPC Request
a. endpoint URL
b. message
5. debug sampler
6. JSR223 sampler
a. can select the required scripting language
b. vars.get(" "), vars.put("dest", sourceVariable), OUT.println()
vii. assertions
1. Response assertion
a. Apply
b. responsefield to test
c. pattern matchingrules
d. pattern to test
viii. Listeners
1. aggregate report
2. view results tree
3. summary report
c. how to use stepping up thread group
d. how to use pacing
e. how to configure SSL version : under system.properties
f. how to run in non-GUI mode
i. jmeter -H <proxyserver> -P <port> -n -t <file.jmx> -l <results.jtl> -u <username> -p <password>
g. Configure load generator machines (master and slave)
i. add remote_hosts=<IP> in jmeter.properties
ii. run jmeter-server.bat on the LG (slave) machine
h. how to integrate with blazemeter
i. how to integrate with BlazeMeter Sense
j. Jmeter general issues
i. http://www.technix.in/jmeter-problems-and-solutions/
ii. distributed mode, non-GUI, credentials
1. jmeter -n -t <testplan.jmx> -R <ip1>,<ip2> -l <result.jtl> -u <userId> -p <pwd>
iii. using a java program
1. BeanShell PostProcessor, JSR223 sampler
iv. If JMeter is getting stuck at run time, then you need to increase the heap value for JMeter
1. edit jmeter.bat
a. increaseXmx value
v. how to record AJAX requests (i.e. XMLHttpRequest) using the JMeter HTTP Proxy Server
1. They are recorded the same way as any other request, because the browser sends all HTTP requests (even AJAX) through the defined proxy. The most difficult part usually is to find the proper parameters for that AJAX request, as well as how to do the reporting.
2. First you have to find out how the parameters for the AJAX request are created. Usually they can be found on the page which makes the AJAX request, so you have to start reading the HTML sources and then create a proper post-processor. In my opinion the regular expression extractor is usually the best one, if the value is not held in some parameter and its attribute.
3. Reporting needs manual work if you want to do it well. You have to check how the requests are made to the server. Usually AJAX requests are done the following way:
a. Load the page which does the AJAX
b. Short delay while parsing the JavaScript
c. Concurrently, 1 or more AJAX requests
4. There might be additional AJAX requests after step 3, so you should include them in the total loading time as well.
5. So the real loading time of a web page is usually step 1 + step 2 + the maximum of the items at step 3 (+ the rest of the steps). At final calculation you might conclude that some AJAX requests do not affect the user experience; those you can exclude from the final calculations. One such example is statistics tracking.
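The loading-time arithmetic in point 5 can be sketched as a small helper; plain C, illustrative only:

```c
/* Perceived page load time per the steps above: base page load, plus the
   JavaScript parse delay, plus the longest of the concurrent AJAX calls
   (concurrent requests overlap, so only the slowest one adds time). */
double page_load_time(double page_s, double parse_delay_s,
                      const double ajax_s[], int n_ajax) {
    double max_ajax = 0.0;
    for (int i = 0; i < n_ajax; i++)
        if (ajax_s[i] > max_ajax) max_ajax = ajax_s[i];
    return page_s + parse_delay_s + max_ajax;
}
```

For example, a 2 s page, a 1 s parse delay, and three concurrent AJAX calls of 1, 3, and 2 seconds give a perceived load time of 6 s, not 8 s.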
vi. SSLHandshakeException
1. locate ApacheJMeterTemporaryRootCA.crt and install it
vii. JMeter stress/break-point performance test using the constant throughput timer
1. throughput shapingtimer
viii. Convert the epoch time format of jtl file results in JMeter
a. add this to jmeter.properties; the timestamp format only affects CSV output files; legitimate values: none, ms, or a format suitable for SimpleDateFormat
i. # jmeter.save.saveservice.timestamp_format=ms
ii. jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
ix. record without downloading JMeter
1. https://guide.blazemeter.com/hc/en-us/articles/206732579-Chrome-Extension
x. Send email on failed responses
1. http://stackoverflow.com/questions/43845778/regular-expression-extractor-jmeter-i-want-to-send-email-for-404-response-code
xi. Run jmeter from java program
1. http://stackoverflow.com/questions/19147235/how-to-create-and-run-apache-jmeter-test-scripts-from-a-java-program
4. Perfmon: monitor system performance with counters
a. data collector set
i. properties
1. Sample interval: number
2. units: string
3. directory
4. name format
5. stop condition
6. log format
b. counters
i. Memory
1. % committed bytes
2. availablebytes
ii. Processor
1. % Processor Time is the percentage of elapsed time that the processor spends executing a non-idle thread
a. threshold: 80%
2. % Idle Time is the percentage of time the processor is idle during the sample interval
3. % Interrupt Time is the time the processor spends receiving and servicing hardware interrupts during sample intervals
4. % User Time is the percentage of elapsed time the processor spends in user mode.
iii. Process
1. Process\Page File Bytes: displays the size of the page file. Windows uses virtual memory (the page file) to supplement a machine's physical memory. As a machine's physical memory begins to fill up, pages of memory are moved to the page file. It is normal for the page file to be used even on machines with plenty of memory, but if the size of the page file steadily increases, that's a good sign a memory leak is occurring.
2. Process\Handle Count: applications use handles to identify resources that they must access. If a memory leak is occurring, an application will often create additional handles to identify memory resources, so a rise in the handle count might indicate a memory leak. However, not all memory leaks will result in a rise in the handle count.
c. System performance report
d. how to connect to remote machine
5. JVisualVM
a. https://visualvm.github.io/documentation.html
b. jvisualvm default port: 10224
c. jvisualvm discovers the running java app with the help of the jps (JVM Process Status) tool
d. jstat: the jstat tool displays performance statistics for an instrumented HotSpot Java virtual machine (JVM)
e. right-click > properties of an application:
i. thread dump
ii. heap dump
iii. profile (VisualVM cannot profile itself)
iv. application snapshot
v. Enable Heap Dump on OOME
f. main tabs
i. overview
1. PID
2. main class
3. jvm arguments
4. system properties
ii. Monitor
1. heap : java.lang.Runtime.totalMemory() and java.lang.Runtime.freeMemory()
2. permgen: the area of the heap where class and method objects are stored
a. if a large number of classes are loaded, then the size of the perm gen might need to be increased
3. cpu: cpu consumed by the java process
4. classes: loaded and shared
5. threads
a. based on JMX
b. if jvisualvm makes a JMX connection with the target application, then the threads view will be enabled
c. live threads / finished threads
iii. Threads
1. states
a. running
b. sleeping
c. wait
d. monitor
iv. Sampler
1. CPU sampler
a. Available. Press the 'CPU' button to start collecting performance data
b. CPU samples
c. thread cpu time
d. view
i. methods / classes / packages
e. call tree
f. hotspots
2. Memory sampler
a. Available. Press the 'Memory' button to start collecting memory data.
b. class name: live objects
3. settings:
a. sampling and refresh rate, in ms units
v. Profiler snapshots: *.nps, *.npss
vi. Samplers vs profilers
1. Overhead
a. Sampling: less overhead, as it takes samples or dumps at regular intervals
b. Profiling: creates more overhead
vii. thread dump: *.tdump
viii. heap dump
1. summary
2. classes
3. instances
4. OQL instance
g. get the heap size details at the command prompt
i. jstat -gc <PID> | tail -n 1 | awk '{split($0,a," "); sum=a[3]+a[4]+a[6]+a[8]; print sum}'
6. New relic
a. Overview
b. Transaction traces
c. Apdex_t and Apdex_f values
i. Apdex_t: the target response-time threshold below which users are considered satisfied
ii. Apdex_f is four times Apdex_t
d. Key transaction: one benefit of making a transaction a key transaction is that you can set a transaction-specific Apdex T that is different from your general Apdex T threshold.
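The Apdex_t/Apdex_f relationship above feeds the standard Apdex score formula, which the notes do not spell out. A sketch of that general formula (not specific to New Relic's implementation):

```c
/* Standard Apdex score: requests faster than T are "satisfied", those
   between T and 4T (= Apdex_f) are "tolerating", slower ones are
   "frustrated". Score = (satisfied + tolerating/2) / total samples. */
double apdex_score(int satisfied, int tolerating, int frustrated) {
    int total = satisfied + tolerating + frustrated;
    if (total == 0) return 1.0; /* no samples: nothing to penalize */
    return (satisfied + tolerating / 2.0) / total;
}
```

For example, 60 satisfied, 30 tolerating, and 10 frustrated samples give a score of (60 + 15) / 100 = 0.75.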
e. Installation
7. Dynatrace: APM tool
a. Basic components
i. Dt agent
ii. Dt collector
iii. Dt server
iv. Dt client
v. Dt performance warehouse
b. basic diagnostics
c. identify hotspots and isolateperformance problems
d. Dynatrace capabilities
i. cross-browser diagnostics
ii. code-level visibility
iii. page load times
iv. javascript and DOM tracking
v. share PurePath
e. KPI
i. KPI on load time
1. time to first impression
a. issue>2.5s
2. time to onLoad event
a. issue>4s
3. time to fully loaded
a. issue> 5s
ii. KPI on resources
1. total number of requests
2. total HTTP 300/400/500 responses
3. size of the website
4. total number of XHR requests
iii. KPI on network connection
1. DNS time
2. connect time
3. server time
4. transfer time
5. wait time
6. number of domains
f. best practices
i. browser caching
ii. network requests and round trips
1. avoid redirects, 400s, 500s
2. optimize images, css, js
3. rank calculation
a. a page scores 100 if there are no redirects, no 400s/500s, and no static resources
iii. javascript and ajax performance
1. blocking long-running scripts
2. slow css, jquery
3. there should not be more than 5 XHR requests per page
iv. server side
g. PurePath: A PurePath is the horizontal view of a transaction in a monitored application environment and is the basis for top-down analysis, which is defined by analyzing how an application or transaction is impacted by the underlying infrastructure.
h. PureStack Technology® directly correlates system infrastructure health data from every transaction tier in a monitored application environment with individual transactions and affected end users in real time. PureStack is the vertical view of infrastructure in a monitored application environment and is the basis for bottom-up analysis, which is defined by analyzing problems in the infrastructure and assessing and correlating their impact on application performance and end-user experience.
8. Java
a. Java architecture
i. Source code > compile (javac) > byte code (.class) > JVM > machine code
ii. JDK [ JRE[ classloader +libraries+JVM [ ] ] + compileand execution tools ]
b. JVM architecture
i. class loader
1. Loading| Linking| Initialization
ii. runtime data areas
1. Method area: class names, methods, variable information, static variables
a. there is only one method area per JVM
2. Heap: information about all objects
3. Stack area: one runtime stack per thread, holding all local variables of methods
4. PC registers: store the address of the current instruction of a thread
5. Native method stack
iii. execution engine
1. Interpreter: interprets byte code line by line
2. JIT compiler: increases the efficiency of bytecode execution
a. whenever it sees repeated method calls, JIT provides the compiled machine code directly
3. GC: destroys unreferenced objects
iv. Native method interface
v. Native method lib
c. Quick notes
i. For every loaded .class file, only one object of Class is created
ii. http://www.geeksforgeeks.org/jvm-works-jvm-architecture/
9. Garbage collection
a. https://www.dynatrace.com/resources/ebooks/javabook/how-garbage-collection-works/
b. http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html
c. architecture
i. Young generation: filling it up causes minor GC
1. eden space
2. survivor spaces (S0 and S1)
ii. Old generation (tenured): major GC
iii. Permanent generation
d. Steps
i. First, any new objects are allocated to the eden space. Both survivor spaces start out empty.
ii. When the eden space fills up, a minor garbage collection is triggered.
iii. Referenced objects are moved to the first survivor space. Unreferenced objects are deleted when the eden space is cleared.
iv. Object aging: At the next minor GC, the same thing happens for the eden space. Unreferenced objects are deleted and referenced objects are moved to a survivor space. However, in this case, they are moved to the second survivor space (S1). In addition, objects from the last minor GC in the first survivor space (S0) have their age incremented and get moved to S1. Once all surviving objects have been moved to S1, both S0 and eden are cleared. Notice we now have differently aged objects in the survivor space.
v. Additional aging: At the next minor GC, the same process repeats. However, this time the survivor spaces switch. Referenced objects are moved to S0. Surviving objects are aged. Eden and S1 are cleared.
vi. Promotion: After a minor GC, when aged objects reach a certain age threshold (8 in this example) they are promoted from the young generation to the old generation.
vii. As minor GCs continue to occur, objects will continue to be promoted to the old generation space.
viii. GC process summary: that pretty much covers the entire process for the young generation. Eventually, a major GC will be performed on the old generation, which cleans up and compacts that space.
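The aging and promotion steps above can be mimicked with a toy model (plain C, illustrative only; a real collector tracks far more state). Each minor GC ages every surviving object and promotes those that reach the threshold:

```c
#define PROMOTE_AGE 8  /* tenuring threshold used in the example above */

/* Toy minor GC: `ages` holds the age of each live young-generation
   object. One collection increments every survivor's age; objects that
   reach the threshold are "promoted" (removed from the young list).
   Returns the number of objects promoted to the old generation. */
int minor_gc(int ages[], int *n_live) {
    int promoted = 0, kept = 0;
    for (int i = 0; i < *n_live; i++) {
        int age = ages[i] + 1;       /* survived one more collection */
        if (age >= PROMOTE_AGE)
            promoted++;              /* moves to the old generation  */
        else
            ages[kept++] = age;      /* stays in a survivor space    */
    }
    *n_live = kept;
    return promoted;
}
```

Running this repeatedly over a set of live objects shows the pattern the steps describe: most objects stay young, and only long-lived ones eventually cross the threshold into the old generation.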
e. types of GC
i. Serial
1. -XX:+UseSerialGC
ii. parallel
1. -XX:+UseParallelGC
iii. CMS (Concurrent Mark Sweep)
1. -XX:+UseConcMarkSweepGC
iv. G1: used for large heap memory areas
1. -XX:+UseG1GC
10. Workload modelling
a. Simulating the real users' behavior of the application under test (AUT) is the fundamental point in performance testing. A workload model is designed to identify key test scenarios and the load distribution across these scenarios.
b. Question
i. 100 users, 1000 iterations, 1 iteration takes 4 sec on average; what pacing?
1. per user: 10 iterations
2. 10 * 4 = 40 sec of busy time per user
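The notes stop at 40 s of busy time per user; to pick an actual pacing interval you also need the intended test duration, which the question leaves out. A sketch under the assumption that each user's iterations should be spread evenly over some test window (the 600 s figure below is an assumed example, not from the notes):

```c
/* Pacing interval so that `total_iterations`, shared across `users`,
   fit evenly into a test window of `duration_s` seconds. E.g. 1000
   iterations over 100 users = 10 iterations per user; spreading them
   over an assumed 600 s window means one iteration starts every 60 s.
   The action time (4 s here) must fit inside the interval, which a
   60 s interval comfortably allows. */
double pacing_interval(double duration_s, int total_iterations, int users) {
    double iters_per_user = (double)total_iterations / users;
    return duration_s / iters_per_user;
}
```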
c. Throughput = (number of requests) / (total time).
d. Little's law: N = Throughput * (Response Time + Think Time)
i. N: max number of concurrent users at peak time
ii. Throughput: user arrival rate per sec
iii. Response Time: avg response time in sec
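The formula above is simple enough to sketch directly; plain C, illustrative only:

```c
/* Little's law as stated above: N = X * (R + Z), where X is the user
   arrival rate (per second), R the average response time, and Z the
   think time, both in seconds. N is the number of concurrent users
   the system holds at steady state. */
double littles_law_users(double throughput_per_s,
                         double response_time_s, double think_time_s) {
    return throughput_per_s * (response_time_s + think_time_s);
}
```

For example, 10 arrivals/sec with a 2 s response time and 3 s think time means roughly 50 concurrent users.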
e. 90th percentile = the response-time value for a transaction below which 90% of the data points fall
i. calculate the 90th percentile:
1. arrange all values in ascending order
2. position = 0.90 * NumberOfValues; the value at that position is the 90th percentile
11. Metrics
a. Clientsidemetrics
i. average responsetime
ii. error %
iii. passed failed transaction
iv. hits per second
v. throughput
b. Web server metrics
i. queue length
ii. request wait time
iii. connection pool size / maximum connections
iv. transactions per second
v. error rate
c. app server metrics
i. cpu utilization
ii. memory utilization
iii. activethreads, total threads
iv. connection wait time
v. timeouts
d. database metrics
i. volume of data sent and received by the server
ii. connection summary
1. total number of opened and closed connections; an excess of database connections can badly affect database performance
iii. db thread summary
1. number of new threads connected, used, and active
12. Performance test plan
a. objective of report
b. scope of perf testing
c. assumption
d. risks
e. dependencies
f. business scenarios: in scope and out of scope
g. workload model
i. max users
ii. max volume
h. test approach and methodology
i. test env and configuration
j. test tool details
k. LG configuration
l. performance monitoring
13. Performance final report
a. objective of report
b. scope of perf testing
c. business scenarios: in scope and out of scope
d. workload model
i. max users
ii. max volume
e. test approach and methodology
f. test env and configuration
g. test tool details
h. LG configuration
i. performance monitoring
j. executions
i. Benchmark test
ii. load test
iii. stress test
iv. endurance test
v. comparison test : benchmark vs load vs stress
k. recommendation and suggestion
l. risks for future scalability of system
14. Further topics to explore
https://github.com/Netflix/SimianArmy/wiki/The-Chaos-Monkey-Army
google analytics
machine learning
Splunk
hadoop
AEM
angular, node.js, react.js
blockchain
IoT
CIA