Tristan Wilson, Dana Douglas
UXDC 2017 Workshop:
Remote Mobile User Testing: New Tools for Moderated Mobile Testing at a Distance
UXDC 2017 Listing:
http://uxdcconference.org/sessions/remote-mobile-user-testing-new-tools-moderated-mobile-testing-distance/
Description:
Remote user testing on personal computers is now a staple of UX research, but until recently, methods for remote testing on mobile platforms were still underdeveloped. In the past, technological limitations made mobile screen mirroring and streaming impractical. Thus, researchers were forced to use low-fidelity methods like the “laptop hug” to test mobile interactions remotely.
Fortunately, mobile hardware has improved in recent years and with this advancement, new screen mirroring and streaming tools have emerged. Over the last year, we looked into a range of these new tools. We compared their functionality and capabilities, and adapted traditional remote testing methods to work with this new technology. (.....)
DD
Intros
Who’s done remote usability testing?
- What tools have you used for remote testing?
Who’s done in-person mobile testing?
- What tools have you used for in-person mobile testing?
Who’s done remote mobile testing?
- If someone has, what was your experience?
- If no one has, what, if anything, have you heard about it? What might be the challenges?
What we want to do today is help you think through how you can go about doing a remote test on a mobile device and give you a chance to try out some of the tools that can make it possible.
TW
But first, let’s set the stage with a little story about how one might find themselves needing to figure out how to do a remote mobile usability test.
TW
TW
TW
DD
Before we jump into solving this client request, first, let’s take a step back and talk about remote testing in general.
DD
A couple of points of clarity here to start.
You keep hearing us talk about “remote mobile testing.”
When we say “Remote Testing”, what we’re talking about is moderated user testing in which the participant and researcher are located in two different geographical areas.
When we say “Mobile”, we’re basically just talking about modern portable smart devices – phones and tablets. This is what’s being evaluated and therefore what the participant is using during the test.
DD
For those of you familiar with remote testing, what are some of the benefits?
Enhanced Research Validity
+ Participants can use their own device and their own setup
+ More naturalistic environment and real-world use case
Lower Cost & Increased Efficiency
+ Fewer travel-related expenses for researchers and participants
+ Decreased need for fancy labs and/or equipment
Greater Convenience
+ Ability to conduct global research from one location
+ No participant travel to and from the lab
Expanded Recruitment Capability
+ Increased access to diverse participant sample
+ The money saved from reduced travel costs may allow for more participants to be included in the study
DD
What are some of the drawbacks of remote testing?
Potential Reduction in Quality of Data
- Difficult to control testing environment, and particularly distractions in the participant’s environment
Expanded Spectrum of Technical Issues
- More reliance on quality of internet connection
- Dealing with a variety of hardware
Reduced Scope of Research
- Typically limited to software testing
- Shorter recommended session duration
Diminished Participant-Researcher Interaction
- Restricted view of participant body language
- Difficult to establish rapport
All of these apply to remote mobile testing as well.
DD
When it comes to remotely testing mobile interactions, there are a few additional drawbacks to consider.
First, you’re often dealing with a wider range of operating systems among the various mobile devices on the market.
Also, while you can pretty easily track the participant’s mouse movements on desktop, that is not really possible with mobile devices. So it can be more difficult (or sometimes impossible) to see where participants are tapping or hovering when interacting with a mobile device.
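One partial workaround on Android — our own suggestion, not a feature of any of the tools discussed here — is to enable the system “Show taps” setting before the session, so a small dot appears at each touch point and gets mirrored along with the screen. A sketch, assuming the participant’s device has USB debugging enabled and you have the Android platform tools (adb) installed:

```shell
# Show a visible dot wherever the participant touches the screen:
adb shell settings put system show_touches 1

# Optionally overlay raw pointer coordinates as well:
adb shell settings put system pointer_location 1

# Turn both off again after the session:
adb shell settings put system show_touches 0
adb shell settings put system pointer_location 0
```

The same settings can also be toggled by hand in the device’s Developer options, which may be easier to walk a remote participant through than an adb connection.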
With current offerings (which we’ll get into more later), participants must share their own screen, rather than control the researcher’s screen. This can feel invasive to the participant.
Finally, because mobile devices rely on Wi-Fi, you are at the mercy of the wireless network’s bandwidth to transfer data between participant and researcher. As many of us know, Wi-Fi is often slower and less reliable than hard-wired networks.
TW
One thing we think is important to note is that there ARE already a few ways of conducting remote mobile tests. They aren’t the most elegant solutions in the world, but you can make them work.
For anyone who’s conducted a remote mobile test before, how have you done it? What tools have you used?
TW
One of these methods is what we call “The Browser Resize.”
The setup is essentially the same as a traditional desktop remote session. The researcher opens a web page on their computer and shares their screen with a remotely located participant. The difference is that while the participant still interacts with the web page via their PC, the researcher has resized the browser window to mimic the dimensions of a mobile device.
The browser resize is great for situations in which you’re just trying to test layout or IA. It’s not so great if you’re concerned about how people interact with your product on a real mobile device.
The laptop can be used concurrently for communicating with the researcher and/or displaying study materials.
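One way to get a consistent mobile-sized window for this method — a sketch assuming desktop Chrome; the flag names may vary by browser and version — is to launch the browser at a fixed size in a fresh profile, so the participant also sees none of the researcher’s personal data:

```shell
# Launch Chrome at roughly phone-sized dimensions (375x667 is just an
# illustrative iPhone-like size) in a throwaway profile, then share this
# window in your web conferencing tool:
google-chrome --user-data-dir=/tmp/ux-test-profile \
  --window-size=375,667 \
  "https://example.com"
```

Alternatively, Chrome’s built-in DevTools device mode can emulate specific device viewports (and touch events) without any flags, which may give a closer approximation than a hand-resized window.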
TW
While all of those methods work, new methods and software tools are emerging that offer higher quality and fidelity and are easier for both the participant and the moderator.
So these are the kinds of methods and tools we want to focus on today.
Two major pieces of the remote mobile puzzle
- Mirroring
- Streaming
Mirroring is relatively commonplace in itself, but it gets tricky when it comes to live streaming that feed over the internet.
A majority of the remote mobile setups we found can be differentiated by the ways in which these two ingredients are combined.
DD
There are different combinations of these two ingredients (mirroring and streaming) that will accomplish what we need for remote mobile testing. These combinations or setups vary in complexity.
[pass out handout]
We talked about the different tools we use for remote desktop testing and in-person mobile testing; these are the tools we’d need to make remote mobile testing work. The first and most complicated setup includes two apps, or software tools. First, the participant installs one tool on both their mobile device and their computer. This enables them to mirror their mobile screen onto their PC. Then, both the participant and the researcher install one web conferencing tool on each of their PCs. This web conferencing tool then shares the participant’s PC screen, which is mirroring their mobile device (the equivalent of sharing your screen while playing a YouTube video).
TW
In this setup, the participant does not need to install anything on their mobile device because they can use the native screen mirroring technology on their device (e.g., Google Cast or AirPlay). However, they do need two software tools installed on their laptop – one that will mirror their mobile device, and the other that will stream that view of their mobile device to the researcher.
DD
This third setup is simpler in that both the participant and the researcher install only one web conferencing tool on each of their PCs. That tool also works with the native screen mirroring technology on the participant’s mobile device (e.g., AirPlay), meaning the participant does not need to install an app on their phone. In addition, the tool’s web conferencing capability enables the researcher to see the participant’s mirrored mobile screen shared from the participant’s PC.
TW
This is potentially the least complicated setup we found. The participant does not even need a computer on their end, because their mobile device’s screen can be streamed directly from the phone over the internet to the researcher’s computer.
Notably, only on Android, as far as we’ve found.
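The tools that do this are commercial, but the mirroring half of the pipeline can be sketched with stock Android developer tooling: the device’s screenrecord utility can emit a live H.264 stream that a desktop player consumes. The sketch below is USB-tethered for simplicity and assumes adb and ffplay (from FFmpeg) are installed; the commercial tools relay a similar encoded stream over the internet instead.

```shell
# Capture the device screen as a raw H.264 stream on stdout and pipe it
# straight into a local video player (low-latency flags keep lag down):
adb exec-out screenrecord --output-format=h264 - \
  | ffplay -framerate 30 -probesize 32 -sync video -
```

In the remote-testing case, the researcher obviously can’t plug into the participant’s phone, which is why these tools bundle their own network relay on top of the same kind of screen capture.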
TW
DD
We did a lot of research to find different tools that could meet our needs. All had their own pros and cons as you’re going to see for yourselves.
But as we did our research, we decided to come up with a way to assess the different tool options against our needs. So we developed a set of criteria describing the ideal characteristics of a tool for testing a mobile device remotely.
But first, let’s go back to that initial client request and try to figure out the type of setup or tool we would want to meet our client’s needs.
[review list]
Some of the characteristics we came up with in our research were:
First, the tool should be easy to use, both for the participant and for us, the researchers. This means easy to install and set up, as well as easy to use during the session. It also means that ideally the participant doesn’t have to worry about setting up multiple devices, such as a PC in addition to their mobile device, as you saw in a few of the setups.
Secondly, of course, our ideal tool would be inexpensive. Knowing that technology changes rapidly, we knew we wouldn’t want to invest a lot of money into a tool that would become outdated quickly.
Third, we needed the tool to perform well. This includes minimal lag time between the participant’s interactions and what we saw on our end. We also wanted the tool to play nice with other applications that needed to be running on the device at the same time. And, if possible, we wanted to be able to see a clear representation of the participant’s interactions with the device (like taps and gestures) – similar to mouse tracking on a desktop.
Fourth, we wanted the tool to be a one-stop shop in that it included things like web conferencing, chat, and recording directly within the tool. Also, we wanted it to at least work on both iOS and Android, but also other mobile platforms as well. And to avoid participants needing a landline or separate device or computer to communicate with us, we wanted the app to allow them to talk to us on the phone while also mirroring their screen.
Finally, we wanted to protect our participants’ privacy as much as possible and put them at ease. Therefore, the ideal tool would allow the participant to remotely control our mobile device from theirs. They would have to be able to control it from their mobile device (not their desktop) in order for the interaction to be as natural as possible.
But we knew that may not be an option, so we figured that if the participant had to share their own screen, we wanted to see a few features that would protect their privacy while they were sharing their screen.
First, the tool would clearly indicate to them when mirroring had begun. They should also have complete control over when sharing starts and be able to stop it at any time. We thought it would be nice if the tool would snooze the participant’s notifications while the screen was being shared. (Technically, this is possible, but many participants do not know how to do it. Also, Do Not Disturb will silence the notifications, but they still appear on the screen and would therefore be mirrored to the researcher.)
Finally, as you can do with many desktop screen sharing tools, we thought it would be nice if the mobile tool would have the option to share only one application. Therefore, if the participant accidentally pulled up their messaging or photo app, it would not be shared with the researcher.
TW
Now we’re going to demo one of the tools we found (we’ll try a few others out later too).
We just want to note that we are not affiliated with any of these tools, nor are we endorsing any of them. We evaluated them for our own purposes and simply wanted to share our findings with you.
The tool we’re going to test out is called Mobizen.
[pass out handout]
We created this handout that will help you keep track of the differences between the various tools we try out today. As you’ll see, it can get a little confusing to keep track of the pros and cons of each tool.
First, I’d like a volunteer from the audience who has an Android device and is willing to be our “participant.”
[If time, do iOS demo as well. Need a volunteer with an iPhone AND a laptop.]
10-15 minute break
Before you get your hands on a few more tools, let’s think about the other logistics that come into play when you’re doing a remote mobile usability test.
What might you need to consider when recruiting participants for a remote mobile study?
How often do they use a mobile device? How proficient are they?
Do they use an Android or iOS device? Something else?
For certain tools, do they have a laptop also available?
For certain tools, do they have a landline also available?
What might you need to consider regarding participant privacy?
Will you be able to see their notifications? Desktop? Apps?
How should you warn participants about that?
Should you ask participants to turn on Do Not Disturb or turn off notifications or remove any sensitive content from what you might see?
Are participants aware of when their screen is being shared?
Will participants be wary of downloading unknown apps on their mobile device or computer?
Will participants be concerned when they have to agree to all the permissions of the app?
Should you offer to help them remove the apps when the session is over?
What might you need to consider regarding recording and live observers?
Can you record directly in the tool or do you need a separate recording software?
Can observers watch remotely without disturbing the session?
What’s the quality of the recording?
Are participants notified when recording begins?
What might you need to consider regarding technical setup with participants?
Should you do all this setup prior to the day of the session?
How much time should you ask the participant to dedicate toward setup?
Who should do the setup – the recruiter, you, someone else?
What might you need to consider regarding actually running the session?
How much extra time should you build in for technical difficulties?
What is your backup if something goes wrong?
How will participants get to the app or website you are testing? How far in advance do you want to share that information with them?
Do you want the participant to be able to read the tasks? If so, will you send them ahead of time or ask them to have a computer handy during the session?
How will you know where the participant has tapped or where they are looking?
DD
Based on our experience running remote mobile tests, we learned a few lessons about how to make a remote mobile usability test run as smoothly and successfully as possible. Some of these are things we did that we believe contributed to our success; others we didn’t do, but realized after the fact would have been helpful.
Planning ahead: Spending extra time up front making sure the setup worked for participants. Clearly thinking through the logistics of sending participants the information they needed (tasks, links, phone numbers) when they needed it.
Practice makes perfect. Becoming intimately familiar with the tools we were going to use during the session allowed us to more easily troubleshoot any issues that arose. In particular, we wanted to know what the participant sees on their end so we could give them accurate instructions.
Always have a backup. When technical issues arise, it’s always good to have a backup. We knew that if the phone screen sharing didn’t work during the session, we could quickly fall back to one of the less optimal, but still valid, mobile testing methods, such as resizing the browser to a mobile-device-sized screen. If Zoom or Join.me didn’t work at all, we knew we could revert to our more reliable and commonly used tool for sharing desktops remotely, GoToMeeting. Fortunately, we didn’t need to use either of these options in our study.
Put participants at ease. We gave participants a verbal overview of the process and walked them through it on the phone, which allowed us to establish rapport with them and gain their trust. As a result, they were much more willing to stick with us through the bumps in the road.
Tailor recruitment. By limiting recruiting to either iOS OR Android (not both), you will only need to support one screen sharing tool. Obviously, this was something we did not do, but we feel it would have been easier if we had. However, one thing we did do, and we feel it made things go more smoothly, was recruit participants who already possessed basic mobile device interaction skills, such as being able to switch from one app to another. This made giving them instructions much easier.
Now you all are going to get your hands on a few more tools.
We’re going to split into groups of 4-5 people. Make sure there’s at least one Android and one iOS user and at least one laptop in the group. We have a few extra laptops that people can use.
We’ll give you 30 minutes to research the tool you choose. Test it out amongst yourselves. Uncover the features, the pros, the cons, and figure out how it might work in the context of a remote mobile usability test.
Rate the tool using the hand out, take notes about what you’ve learned, what you like, and what you don’t like.
Then, each group will present what you’ve learned about the tool to the rest of the group.
Going back to that client request, let’s think about what tool you would use.
DD
As a reminder, we were looking for tools with certain characteristics, and the tools that we found generally fulfilled most of them. However, there are a couple of areas with room for improvement.
DD
As far as features, we have yet to find a tool that works reliably with both Android and iOS.
But the area where the most improvement is necessary is in participant privacy. We would love to see a tool that allows participants to share just one app rather than their whole screen (without having to change the app’s code), or even better, a tool that allows participants to control the researcher’s mobile device from their own.
We believe the option to test mobile interactions remotely will only continue to grow in demand. So, as technology improves and the need for more robust tools is recognized, we anticipate more options and capabilities will emerge on the market in the near future.
What other areas for improvement would you add?