This document discusses strategies for selecting the right cross-browser testing tool. It begins with an introduction to cross-browser testing and the current digital landscape, then covers criteria for evaluating cross-browser testing tools, including coverage of responsive web and progressive web app testing across different browsers and platforms. The document also provides examples of testing methodologies and considerations for tools that support automation and testing at scale. It concludes with a case study demonstrating how one company evaluated and selected a cross-browser testing tool based on defined capabilities, importance weights, and scoring.
2. About Me
• Lead Technical Evangelist at Perfecto
• Blogger and Speaker
• http://continuoustesting.blog
• https://www.infoworld.com/author/Eran-Kinsbruner/
• 18+ Years in Development & Testing
• Author of The Digital Quality Handbook
• Weekly Podcast - Testiumpod
• Twitter: @ek121268
• Email: Erank@perfectomobile.com
4. Cross-Browser Testing != Desktop Web Testing
There is no "Web Testing vs. Mobile Testing"
• 4 out of 10 transactions today take place across multiple devices
• 48% of users today complain that the websites they use are not optimized for their smartphones and tablets
10. Responsive Web Design (RWD) – Objects
• Identify your objects in a robust fashion that fits all digital platforms
• Build an object repository and use smart locators (see the sketch after this slide)
The average website includes nearly 400 different objects.
Now try locating them on each and every DIGITAL platform in your lab…
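To make "smart locators" concrete, here is a minimal sketch of a locator fallback chain using Selenium WebDriver in Python. The page URL, attribute names, and locator values are hypothetical illustrations; the idea is that one repository entry tries the most stable, platform-independent locator first and degrades gracefully across digital platforms.

```python
# A minimal sketch of a "smart locator" fallback chain with Selenium.
# The page, attribute names, and locator values below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# One object-repository entry: an ordered list of locator strategies,
# strongest (platform-independent) first.
LOGIN_BUTTON = [
    (By.CSS_SELECTOR, "[data-testid='login-button']"),   # stable test hook
    (By.ID, "login"),                                    # may differ per platform
    (By.XPATH, "//button[normalize-space()='Log in']"),  # last-resort text match
]

def find_with_fallback(driver, locators):
    """Return the first element matched by the ordered locator list."""
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical URL
find_with_fallback(driver, LOGIN_BUTTON).click()
driver.quit()
```

The design point is that the repository stores intent ("the login button"), not a single brittle selector, so the same entry keeps working when IDs or markup vary by platform.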
11. Responsive Web Design (RWD) – Visual Validation w/ Screenshots
• Take a screenshot and use a visual checkpoint/assertion to validate responsive aspects (a sketch follows below)
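As an illustration, here is a minimal sketch of a screenshot-based visual checkpoint, assuming Selenium for capture and Pillow for pixel comparison. The baseline path, viewport size, and tolerance value are illustrative assumptions, not values from the talk.

```python
# A minimal sketch of a screenshot-based visual checkpoint.
# Baseline file, viewport size, and tolerance are illustrative.
from selenium import webdriver
from PIL import Image, ImageChops

def visual_checkpoint(driver, baseline_path, tolerance=0.01):
    """Compare the current page against a stored baseline screenshot."""
    driver.save_screenshot("current.png")
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open("current.png").convert("RGB")
    assert baseline.size == current.size, "Viewport sizes differ"
    diff = ImageChops.difference(baseline, current)
    # Fraction of pixels that changed at all.
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    ratio = changed / (diff.width * diff.height)
    assert ratio <= tolerance, f"Visual drift: {ratio:.2%} of pixels changed"

driver = webdriver.Chrome()
driver.set_window_size(375, 812)  # e.g., a phone-sized viewport
driver.get("https://example.com")
visual_checkpoint(driver, "baseline_375x812.png")
driver.quit()
```

Running the same checkpoint at several viewport sizes is one simple way to assert that responsive layouts render as expected on each target form factor.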
Often, when evaluating a potential new tool, it can be overwhelming, and it is hard to quantify which tool to select.
The approach our team takes is to outline the success criteria very clearly up front, which eases the evaluation and decision process.
We’ll use Eran’s selection criteria as an example.
Walk through the definition of the needed capabilities (the selection criteria).
Once you identify the needed capabilities, you'll want to identify the importance of each capability: how critical is it to your decision? This lets you "weight" each capability, which you'll see is very important when making the decision later on.
Weighting is also useful when someone wants to include criteria that you don't think are as important (you just add them and give them a lower weight).
It helps get away from "shiny penny" distractions.
Once you have identified the importance, you'll want to define your scoring key. This is how you will actually document whether the tool you are evaluating has met (or not met) your expectations for that particular capability. A scoring key is also useful if you are asking multiple people to evaluate the tool and provide feedback. (I usually hide the weighting when asking others to evaluate the tool.)
The scores are pretty close…
The final step is to put it all together and score the tools side by side. You'll see we just do basic math to determine the final decision. Walk through the math and show the total that takes all aspects into account.
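As a sketch of that basic math, here is the weighted-scoring calculation in Python; the capabilities, weights, and scores below are made-up examples, not the actual evaluation data.

```python
# A minimal sketch of the weighted-scoring math described above.
# All capabilities, weights, and scores are made-up illustrations.

# Weight: how critical the capability is (e.g., 1 = nice to have, 3 = must have).
weights = {"RWD object support": 3, "Visual validation": 2, "CI integration": 3}

# Scoring key applied per tool (e.g., 0 = not met, 1 = partially met, 2 = fully met).
scores = {
    "Tool A": {"RWD object support": 2, "Visual validation": 1, "CI integration": 2},
    "Tool B": {"RWD object support": 1, "Visual validation": 2, "CI integration": 2},
}

for tool, tool_scores in scores.items():
    total = sum(weights[cap] * tool_scores[cap] for cap in weights)
    print(f"{tool}: {total}")  # Tool A: 14, Tool B: 13
```

Even with these made-up numbers the totals land close together (14 vs. 13), which mirrors the "scores are pretty close" situation above and is exactly why the reality check below matters.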
Note that it is always good to do a “reality check” to make sure the scores reflect overall needs (or even add additional capabilities discovered throughout the evaluation).
The "Infrastructure as a Freeway" slide will go here…
Essentially, I'll talk about what happens when other teams want to use a different tool than the one you "selected," and use my "Infrastructure as a Freeway" concept to describe how I handle that.
Here is the concept if you want more specifics: https://www.linkedin.com/pulse/software-infrastructure-freeway-bryan-osterkamp/