In this webinar, we will show you how you can remove face-to-face moderation and have hundreds of users participate simultaneously from different locations in a cost-effective and fast way. UserZoom is an on-demand research service that automatically captures and gathers both customers' attitudes and behaviors. After our demonstration, we will open up to questions, but let's start now by diving into UserZoom's software as a service.
Thank you (Alfonso? Viginie?). I'm Denise Nangle, User Experience Research Manager at Monster. Today, I'd like to share with you how UserZoom helped Monster's teams evaluate the success of our total redesign of the seeker experience. Before the launch of this complete redesign, Monster's strategy was to make incremental changes to the user interface. Since REDUX was a complete website redesign, with an entirely new user experience and new products, it offered us the opportunity to evaluate the entire experience, not just one or two parts of the site. To do this, we needed the right tool, one that would allow us to evaluate both the strengths and the weaknesses. So we called upon UserZoom and started planning a remote, international online study. In late January 2009, we launched identical studies in the US, UK, France, Germany and the Netherlands.
We knew we wanted to measure several areas, including user satisfaction, task success and efficiency, ease of use, and likelihood of using Monster in the future. But we also wanted to gather qualitative feedback on what our users liked and didn't like about the new experience. Since we were going to interpret the results of this study in context with other remote studies done prior to the launch of REDUX, we had to structure the study to obtain comparative measures. We recruited panels of 250 people in each of the 5 markets we wanted to evaluate. Each participant was given 4 tasks to complete. All participants completed the tasks in the same order, since some tasks had dependencies that required completion of an earlier task. We followed each task with self-reported success and satisfaction questions. We also gave participants an opportunity to tell us what they liked and didn't like about the experience. Finally, we captured clickstream data to generate heat maps. This gave us another opportunity to objectively evaluate user behavior: we could see how they navigated the site, which areas of the site they visited, where they clicked, and more.
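[Editor's note: to make the comparative structure of the study concrete, here is a minimal sketch in Python. It assumes a hypothetical flat export of study responses (market, task, self-reported success, click coordinates); it is not UserZoom's actual API or Monster's analysis pipeline, just an illustration of per-market success rates and heat-map binning of the kind described above.]

```python
# Sketch only: aggregate self-reported task success by market, and bin
# click coordinates into cells for a heat map, from hypothetical export rows.
from collections import defaultdict

# Hypothetical rows: (market, task_id, self_reported_success, click_x, click_y)
rows = [
    ("US", 1, True, 120, 80),
    ("US", 1, False, 300, 200),
    ("UK", 1, True, 118, 82),
]

# Per-market, per-task self-reported success rate (a comparative measure)
totals = defaultdict(int)
successes = defaultdict(int)
for market, task, success, _, _ in rows:
    totals[(market, task)] += 1
    successes[(market, task)] += success  # True counts as 1, False as 0

for key in sorted(totals):
    print(key, f"{successes[key] / totals[key]:.0%}")

# Heat map: count clicks per 50x50-pixel cell of the page
heat = defaultdict(int)
for _, _, _, x, y in rows:
    heat[(x // 50, y // 50)] += 1
print(dict(heat))
```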
When we use large sample sizes across different markets, we are able to quickly identify where we are doing things the right way and where we need to make improvements. Because we have captured related metrics in past studies, and are always planning future studies with similar measures, we are able to show improvement over time. We also recognize that our findings must be viewed as part of our entire knowledge base, so we review other sources of data to validate and explain what users tell us in our remote studies. When we share the bigger picture with our teams, it allows us to make decisions that are based on user data.

Another interesting learning came from the qualitative sections of our studies. We found that when we ask users what they like, they often respond with vague comments such as "it's easy" or "it's fast." But when you ask people what they don't like, they tend to be very specific about what they want to see on the site. We are taking advantage of this learning as we design additional studies. If users are happy, that's wonderful to hear. If they are not happy, the value of the response is in learning why they are not happy.

Lastly, this study allowed us to look at the complete site, not just this area or that page or this functionality. In sharing the results with the entire team, we succeeded in pulling all our colleagues together and creating a stronger team. It doesn't matter if they are visual designers, interaction designers, or prototypers - everyone had a role in building our site, and when we present the findings, everyone knows they played a part in our success. Thank you. [Back to Alfonso]