Mobile apps bring a new set of challenges to testing: fast-paced development cycles with multiple releases per week, multiple app technologies and development platforms to support, dozens of devices and form factors, and additional pressure from enterprises and consumers who have little patience for low-quality apps. And with these new challenges comes a new set of mistakes testers can make! Fred Beringer works with dozens of mobile test teams to help them avoid common traps when building test automation for mobile apps. Fred shares useful practices for successful mobile test automation. He explains what and where to automate, how to build testability into a mobile app, how to handle unreliable back-end calls and varying device performance, and how to automate the automation. Fred shares real customer stories and shows how small changes in process can make mobile apps ten times more reliable.
Top Practices for Successful Mobile Test Automation
1. T22
Mobile Testing
5/8/2014 3:00:00 PM
Top Practices for Successful
Mobile Test Automation
Presented by:
Fred Beringer
SOASTA
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. Fred Beringer
SOASTA
Fred Beringer is VP of products at SOASTA where he is helping customers get started with
mobile test automation. Previously, as software testing director for Experian, Fred built and led a
worldwide organization responsible for testing all products within the Experian Decision
Analytics portfolio. He introduced cloud-based testing at Experian while helping the
development organization transition to agile. Throughout his career with IBM and Experian, Fred
has driven strategic projects around data replication (for Big Data customers), business
intelligence, performance testing, cloud computing, cloud testing, and open source. Follow him
on Twitter @fredberinger, email him at fred@fredberinger.com, and learn more about Fred at
fredberinger.com.
7. EXPECTATION
• It is as difficult as developing a good mobile app
• It is a software project by itself
• It is not optional in mobile
• You will probably fail before you succeed
9. AUTOMATION OBJECTIVES
• Continuous feedback to fix bugs faster
• Enable new activities
• Increase test coverage
• Raise confidence
• Better customer support
10. #2 – TRACK YOUR OBJECTIVES
IF IT IS NOT MEASURED, IT DOESN’T EXIST
11. SET YOUR OWN METRICS
• Turnaround time for fixes
• Customer satisfaction on app store
• Customer onboarding time
• Lower number of bugs in production
• EMTE (Equivalent Manual Test Effort)
12. ACTIONABLE TEST INTELLIGENCE
• Part of the automation framework
• Available and highly visible to everyone
• Establish a baseline and track ROI
• Take costs into account (Value = Benefit - Cost; a worked sketch follows this list)
• Too many metrics can hurt
• Rinse and repeat until you get it right
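One hedged way to turn Value = Benefit - Cost into a tracked number is to price the Equivalent Manual Test Effort (EMTE) your automated runs replace. The helper and all figures below are hypothetical examples, a minimal sketch rather than a prescribed model:

# Hedged sketch: quantify automation value with EMTE (Equivalent Manual Test Effort).
# All numbers are hypothetical examples.
def automation_value(emte_hours, manual_hourly_rate, build_cost, maintenance_cost):
    """Value = Benefit - Cost, where the benefit is the manual effort replaced."""
    benefit = emte_hours * manual_hourly_rate
    cost = build_cost + maintenance_cost
    return benefit - cost

# A suite replacing 400 hours of manual regression per release at $60/hour:
print(automation_value(400, 60, build_cost=15000, maintenance_cost=4000))  # 5000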
14. MISTER JENKINS IS YOUR FRIEND
• Automate your build
• Automate your app deployment
• Automate your environment deployment
• Automate your tear down
• Automate your test execution
• Automate your reporting
• Automate your metrics tracking
• Automate your communication
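A minimal sketch of what automating the automation can look like in practice: a single entry point the CI server (Jenkins or otherwise) calls on every commit. Every command and helper script named here is a hypothetical placeholder for your own build, deployment, and reporting steps:

# Hedged sketch of a CI entry point; each command is a placeholder for your toolchain.
import subprocess
import sys

STEPS = [
    ["./gradlew", "assembleDebug"],                   # build the app (hypothetical Gradle build)
    ["python", "deploy_to_devices.py"],               # push the build to devices (hypothetical script)
    ["pytest", "tests/", "--junitxml=results.xml"],   # run the automated test suite
    ["python", "publish_metrics.py", "results.xml"],  # feed results to the metrics dashboard (hypothetical script)
]

for step in STEPS:
    if subprocess.call(step) != 0:
        sys.exit("CI step failed: " + " ".join(step))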
16. CAREFUL PLANNING
• Plan your automation infrastructure
• Cloud, on-premise, device connectivity, app deployment, data aggregation, reporting, etc.
• Pick your automation tool wisely
• Start small and grow
• Don't hesitate to pause test-building activity
17. #6 – PICK THE RIGHT TESTS TO AUTOMATE
AND I CLEARLY DON’T HAVE A GOOD PICTURE FOR THIS SLIDE
18. ANALYZE THE RISK OF FAILURE
R(c) = P(c) × I(c)
Probability P(c)
• Code Complexity
• Changed Areas
• Affected Interfaces
• New Technology
• Component Maturity
Impact I(c)
• Financial
• Reputation
• Legal
• Security
• Loss of Customers
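One way to apply R(c) = P(c) × I(c) when choosing what to automate first is to score each candidate area on both factors and rank them. A minimal sketch, with hypothetical component names and 1-5 scores:

# Hedged sketch: rank candidate areas by risk R(c) = P(c) * I(c).
# Component names and 1-5 scores are hypothetical examples.
candidates = {
    "login":           {"probability": 4, "impact": 5},  # new auth code, loss of customers
    "checkout":        {"probability": 3, "impact": 5},  # financial and reputation impact
    "settings_screen": {"probability": 2, "impact": 2},  # mature, low-impact component
}

ranked = sorted(candidates.items(),
                key=lambda item: item[1]["probability"] * item[1]["impact"],
                reverse=True)

for name, scores in ranked:
    print(name, scores["probability"] * scores["impact"])
# Automate from the top of the list down, as far as the budget allows.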
19. #7 – WHERE TO RUN TESTS
HINT: 0% OF USERS RUN YOUR APP ON A SIMULATOR
20. SIMULATOR VS REAL DEVICES
Simulator
• Pros
• Cheap
• Integrated with the IDE
• Cons
• Not testing on the actual platform. What if the tests pass? What's next?
• Network is different
• OS is different (stock)
• Can't simulate real hardware (CPU, memory, etc.), so not fit for mobile performance
Real devices
• Pros
• Reproduce real gestures
• Real results, no false negatives
• Can test under OEM customization
• Fit for mobile performance
• Cons
• Need to be managed (a device cloud helps!)
24. S.F.I.R.S.T.R
• Small – Easier to understand & Fix
• Fast – Parallel execution for faster feedback
• Independent – Can run any subsets in any order
• Repeatable – Tests get the same result every time
• Self-Checking – No human checking
• Timely – Should be written in parallel with dev
• Reusable – To avoid maintenance nightmares
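These properties are easiest to see in a concrete test. A minimal pytest sketch, in which the app fixture and the launch_app helper are hypothetical stand-ins for whatever driver or page-object layer your framework provides:

# Hedged sketch: small, independent, self-checking tests (pytest).
# `launch_app` is a hypothetical helper; each test gets a fresh session,
# so the tests can run in any order and in parallel.
import pytest

@pytest.fixture
def app():
    session = launch_app()   # hypothetical: start a clean app session
    yield session
    session.quit()           # tear down so the next test starts clean

def test_login_with_valid_credentials(app):
    app.login("demo@example.com", "correct-password")
    assert app.home_screen_is_visible()            # self-checking: no human inspection

def test_login_rejects_bad_password(app):
    app.login("demo@example.com", "wrong-password")
    assert app.error_message() == "Invalid credentials"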
26. WHICH IS BETTER?
• Build more reliable tests with unique IDs
• Lower maintenance cost
• Test early
• A non-optional investment
XPATH
//TiUIScrollView[@touchTestId='mfaSiteKey-signOn_securityPhraseWindow_ScrollView']/TiUIScrollViewImpl/UIView/TiUIView[.//UILabel[@text='If you don't recognize your personalized security image, don't enter your password.']]/TiUITextField/TiTextField
INDEXED OBJECT
classname=TiTextField[1]
UNIQUE ID
id=mfaSiteKey-siteKey_Password_Input
TEXT
text=blah
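The locator examples above come from a TouchTest-instrumented Titanium app, but the same trade-off exists in any framework. A minimal sketch with the Appium Python client, assuming the developers expose the unique ID as an accessibility id and that driver is an already-created Appium session:

# Hedged sketch: a stable unique ID versus a deep, brittle XPath.
# Assumes an existing Appium session in `driver`; the IDs mirror the slide above.
from appium.webdriver.common.appiumby import AppiumBy

# Brittle: breaks whenever the view hierarchy or the on-screen text changes.
password_field = driver.find_element(
    AppiumBy.XPATH,
    "//TiUIView[.//UILabel[contains(@text, 'security image')]]//TiTextField")

# Reliable: keeps working as long as developers preserve the ID.
password_field = driver.find_element(
    AppiumBy.ACCESSIBILITY_ID, "mfaSiteKey-siteKey_Password_Input")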
28. BE DATA-DRIVEN
• Increase test coverage FAST
• Easy to add, remove and configure tests
• Reduce the number of tests to maintain
• Separation of tests and data
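A minimal data-driven sketch in pytest, reusing the hypothetical app fixture from the earlier example; the credential/expectation rows are made-up sample data:

# Hedged sketch: one parametrized test replaces many near-duplicate tests.
# Adding coverage means adding a data row, not another test to maintain.
import pytest

LOGIN_CASES = [
    ("demo@example.com", "correct-password", "home"),
    ("demo@example.com", "wrong-password",   "error"),
    ("",                 "correct-password", "error"),
]

@pytest.mark.parametrize("email,password,expected_screen", LOGIN_CASES)
def test_login(app, email, password, expected_screen):
    app.login(email, password)
    assert app.current_screen() == expected_screen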
30. USE RELIABLE AND INTELLIGENT WAITS
• Use object visibility to manage the flow of the test
• Never use time delays to manage back-end variability
• Never use time delays to account for device
performance
• Spend time finding the right locator to wait on
• Run your test 100 times before claiming success
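A minimal sketch of an intelligent wait using the Selenium/Appium explicit-wait API, assuming an existing Appium session in driver and a hypothetical accessibility id for the element that proves the screen is ready:

# Hedged sketch: wait on object visibility instead of a fixed time delay.
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Bad: time.sleep(5) hides back-end and device variability and still fails on slow days.
# Better: poll for the element that proves the screen is actually ready.
wait = WebDriverWait(driver, timeout=30)
balance_label = wait.until(
    EC.visibility_of_element_located(
        (AppiumBy.ACCESSIBILITY_ID, "account-balance-label")))  # hypothetical ID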
31. YOU GET THE PICTURE
#11 – LEVERAGE FUNCTIONAL TESTS FOR PERFORMANCE
32. COVER ALL YOUR BASES
[Diagram: CPU, battery, memory, and transaction timing measured on the device for native apps (including the no-back-end-connection case) and mobile web apps; stress, endurance, and load tests driven by web browser, mobile browser, and native app users over HTTP(S), UDP, and WebSocket data traffic against the shared web & mobile infrastructure of load balancer, web servers, cache, app servers, and database; response time measured end to end.]
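Functional UI tests can double as the timing layer for the device-side metrics above. A minimal sketch that times a transaction inside an existing functional test; the 5-second budget and the record_metric reporting hook are hypothetical:

# Hedged sketch: reuse a functional test to capture transaction timing.
import time

def test_login_transaction_time(app):
    start = time.monotonic()
    app.login("demo@example.com", "correct-password")
    assert app.home_screen_is_visible()
    elapsed = time.monotonic() - start

    record_metric("login_transaction_seconds", elapsed)   # hypothetical reporting hook
    assert elapsed < 5.0, "Login took %.1fs, budget is 5.0s" % elapsed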