145-157 St John Street
London, EC1V 4P
United Kingdom
www.thinkstrait.com
+44 (0)20 8798 0696
Customer research: a balanced approach
An introduction to the pros and cons of different research methods
and how to combine them to elicit valid customer requirements
The importance of different angles
No product is successful if it doesn’t meet the needs of its users at a price they are willing to pay. That sounds simple enough. However, finding out what users want is anything but. Although there are many ways to elicit user requirements, none gives you the full picture. Take a look at this cup of coffee.

A familiar and welcome sight for most on Monday morning, surely. Now focus on the mug. Could you find an exact copy of it based on this picture alone? Probably not. However, if you were shown several “mug shots” – who could resist the pun? – taken from different angles, then you just might. The same holds for understanding users: to get the full picture you have to combine different methods.
The idea of combining different methods is not new in user research, but the implementation is often bungled. Modern Portfolio Theory (Markowitz, 1952) offers a useful lesson here. Prudent financial investors build a portfolio of investments to avoid the risk that comes with putting all their eggs in one basket. For this to work, the performance of one investment needs to be largely independent of the other investments in the portfolio. Without such diversification – for instance, if all your investments depend on sunny weather to pay off – you might be taking on more risk than you think. It’s this rule that is often ignored in user requirements gathering: a lot of pictures are taken, but mostly from the same angle.
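The portfolio analogy can be made concrete with a small simulation. The sketch below is illustrative only – the noise model and all numbers are assumptions, not empirical data. Each method’s finding is modelled as the truth (zero) plus noise, part of which is shared across methods when they take the same angle; the rest is independent, method-specific noise.

```python
import random

random.seed(42)

def combined_error(shared_weight, n_methods=4, trials=10000):
    """Average absolute error of a 'portfolio' of research methods.

    shared_weight controls how much of each method's error comes from
    a blind spot that all methods have in common (the same 'angle').
    """
    total = 0.0
    for _ in range(trials):
        shared = random.gauss(0, 1)  # bias every method shares
        estimates = [
            shared_weight * shared
            + (1 - shared_weight) * random.gauss(0, 1)  # method-specific noise
            for _ in range(n_methods)
        ]
        total += abs(sum(estimates) / n_methods)  # pooled conclusion
    return total / trials

# Methods that share one blind spot barely benefit from being combined;
# methods with independent weaknesses average their errors away.
print(combined_error(shared_weight=0.9))  # mostly the same angle
print(combined_error(shared_weight=0.1))  # complementary angles
```

Running this shows the pooled error staying large when the methods’ weaknesses are correlated and shrinking when they are largely independent – the diversification effect the portfolio rule relies on.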
So which methods should you combine to get a balanced portfolio? In short, methods with complementary characteristics. To simplify the selection process it’s helpful to use a classification structure like the 2×2 matrix on the right. The two dimensions of the matrix are 1) whether the method focuses on the problem or on features of the solution, and 2) whether the method uses an open-ended or closed style, in much the same way as exam questions can be open or closed.
Once you’ve arrived at a good mix of methods from the different quadrants – watch out with quadrant 1, however, for the reasons mentioned below – you can start adding methods from the same quadrant. This will generally protect you from putting all your eggs in one basket. Thus, inter-quadrant first, then intra-quadrant to fine-tune. The four quadrants are discussed in more detail below.
Quadrant 1: Solution focus & Open question
Most companies and designers have learnt – some the hard way – that they can’t just rely on asking users what they need. This approach, which includes focus groups and interviews, suffers from at least four problems.
Firstly, there are two kinds of user requirements: manifest needs and latent ones. In the case of latent requirements, users themselves are not aware of their actual needs, or at least are not able to communicate them. As a result, you might fail to pick up on all the needs.
Secondly, when you ask users what they need, more often than not you’ll hear what they
want. Put this question to a group of 5-year-olds, and chances are that you’ll hear more
answers related to candy than to vegetables. Wanting and needing are two different
concepts, and not only for children.
Thirdly, as Gustafsson et al. (2012) put it, “customers create solutions based on their previous experiences of usage of different products or services… Radical solutions can often be considered unthinkable in advance… but customers know a good idea when they see and use it.” That is, users are “anchored” to what they know and are not always able to break away from the available solutions, especially if there is no tangible design to guide them. As a result, you’ll get more incremental improvements than truly new insights. The development of Sony’s Walkman is often mentioned as an example of the limited use of asking customers when it comes to more radical innovations.
Finally, whenever you ask questions of a user there’s a chance that the social desirability bias sneaks in, i.e. a tendency of respondents to answer questions in a way that makes them look good. Throw in a social setting, e.g. a focus group, and you have the possibility of conformity bias raising its head. This bias can be quite strong, as Asch (1956) showed in his classic experiment with lines of different lengths, and can make user groups appear more homogeneous than they are in reality.
Quadrant 2: Solution focus & Closed question
One way of overcoming the issues associated with the first quadrant is by getting users
to react to a design. This can take the form of presenting a list of design features or
letting users interact with a prototype.
The problem with using a list and asking “would you like this?” is that it can lead to a lot of false positives. That is, users might indicate that a feature is absolutely necessary but then not go near it in the final product. Norman (1988) calls the tendency to include more and more features “creeping featurism”, and points out that more isn’t always better (1998). Conjoint analysis and similar approaches also fall into this quadrant, but suffer less from the risk of producing “creeping featurism” as they actively gauge how important each feature is compared to the others.
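To make the contrast with plain feature lists concrete, here is a minimal main-effects sketch of how a conjoint-style analysis derives relative importance from ratings of whole product profiles rather than of isolated features. The mug attributes, levels and ratings below are hypothetical, purely for illustration.

```python
# Four coffee-mug profiles (a full 2x2 design) rated 1-10 by one
# respondent. Attributes and ratings are made-up example data.
profiles = [
    {"lid": "yes", "material": "ceramic", "rating": 8},
    {"lid": "yes", "material": "plastic", "rating": 5},
    {"lid": "no",  "material": "ceramic", "rating": 6},
    {"lid": "no",  "material": "plastic", "rating": 2},
]

def part_worths(profiles, attribute):
    """Average rating per level of one attribute (its main effect)."""
    worths = {}
    for p in profiles:
        worths.setdefault(p[attribute], []).append(p["rating"])
    return {level: sum(r) / len(r) for level, r in worths.items()}

def importance(profiles, attribute):
    """Range of part-worths: how much this attribute moves preference."""
    worths = part_worths(profiles, attribute).values()
    return max(worths) - min(worths)

for attr in ("lid", "material"):
    print(attr, part_worths(profiles, attr), importance(profiles, attr))
```

In this made-up data the material moves preference more than the lid (importance 3.5 versus 2.5) – the kind of trade-off information a bare “would you like this?” checklist cannot give you.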
Interacting with prototypes enables users to understand the system and form expectations about its behaviour. When these expectations are not met, the disconnect can point to a need, even if it had been latent until then. Carroll et al. (1991) point out that deploying several of these task-artifact cycles may give the designer a clear picture of the user requirements.
For most people, the word “prototype” is associated with sophisticated technology and
polished design, the type you expect in concept cars. However, you can make
prototypes as “lo-fi” as you want, using techniques like paper prototyping and “Wizard
of Oz” simulations. In fact, there are good reasons to go this route. For starters, hi-fi
prototypes can unintentionally convey that the design is as good as finished. This can
make users hesitant to criticize it because they feel that they can no longer influence the
design. In addition, a hi-fi prototype is more risky since it requires a large investment that
may have to be discarded completely. So don’t throw away those post-it notes just yet.
Two risks hover over this quadrant. Firstly, a designer may not anticipate certain requirements and leave them out of the prototype or feature list. If users don’t mention them, they won’t make it into the final design, even though they may turn out to be important requirements (i.e. a risk of false negatives). Although this is less of a concern when prototypes are used than with feature lists and conjoint analyses, there is still a risk, as users might be so absorbed by what the prototype offers that they fail to notice what it doesn’t. Secondly, when prototypes are used in an artificial setting, e.g. a usability lab, you might find that the findings don’t match what users do in real life. One way to balance this out is to go beyond the lab and conduct user tests with prototypes in the field, although this usually requires a hi-fi prototype.
Quadrant 3: Problem focus & Open question
To balance out the disadvantages of quadrant 2 you can choose to observe and/or interact with users in their natural environment. This not only gives you insight into the problems they face, but also into the ways they go about solving them and the context in which they do so. This approach, which includes design ethnography and contextual inquiry (Wixon et al., 1990), has become increasingly popular over the last two decades. However, some caution is advised.
First of all, interpreting the observations and translating them into design requirements
might not be all that easy. As a result, the benefits of this approach depend to a large
extent on the skills of the designer. A particular risk with this approach is stopping short
of the actual root cause of the behaviour observed. Consider the following example
about the redesign of an office floor.
During your design ethnography studies you observe that your subject leaves her desk to get water between two and five times a day. The watercooler is on the other side of the building, and when asked, your subject says she would like to have the watercooler closer by. So, in the new design you put several watercoolers at different points in the office. Problem solved. Until you hear that nobody uses them. It’s only after diving deeper that you discover that the original watercooler was used as a central meeting point.
To reduce the risk of falling into this trap you can start by using the “5 Whys” technique that underpins the Toyota Production System and has earned a place in Kaizen, Lean and Six Sigma.
The second disadvantage of this quadrant is that it can be resource-intensive. It’s likely to require travelling to the customer site and spending hours – a lot longer in most cases – collecting data, only to spend a good amount of time interpreting the results as well.
Thirdly, because the method is so resource-intensive, the number of subjects is usually quite small, which means that the results from a contextual inquiry may be inadequate for statistical inference. Although this doesn’t have to be the case, you should keep in mind that your subjects might not have been representative of the user population.
Quadrant 4: Problem focus & Closed question
Surveys are a relatively resource-efficient way of gathering data and can add a
quantitative component to the qualitative insights that quadrant 3 provides. This angle
can protect you from being sidetracked by non-representative users (i.e. outliers). That
said, there are some pitfalls to keep in mind when deciding on your survey design and
how your surveys will be administered.
Firstly, as was the case with the second quadrant, what is not included will be ignored.
That is, if a question is not asked you won’t get an answer to it.
Secondly, how participants answer a question depends to a large extent on how the question is phrased. Tversky and Kahneman (1981) clearly showed this framing bias by asking participants which type of treatment they would choose for a group of 600 people with a deadly disease. If treatment A was presented as “saves 200 lives”, 72% of participants preferred this option. This dropped to 22% when it was framed as “400 people will die”. The same treatment outcome, only worded differently, produced radically different answers.
Thirdly, whenever a question touches on social norms and expectations you might bump into the social desirability bias (see quadrant 1). This effect can be exacerbated by the characteristics of the people involved, especially if the results are not anonymous, e.g. a female researcher handing out surveys about gender equality to a group of men. When anonymity isn’t feasible you should consider using so-called “lie scales”, which were introduced over five decades ago (Crowne & Marlowe, 1960). These scales measure replies to items with fairly obvious socially desirable content, like “I have never stolen a thing in my life, not even a hairpin”, allowing you to either control for this bias or at least interpret the rest of the answers with caution.
Picture by Dan Piraro @ Bizarro.com
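As a rough sketch of how a lie scale could be used in practice, the snippet below flags respondents who agree with implausibly virtuous statements so that their other answers can be controlled for, or at least read with caution. The item names, the 0/1 agreement coding and the threshold are all made-up assumptions for illustration – this is not the actual Crowne–Marlowe scale.

```python
# Hypothetical lie-scale items (1 = agrees with the statement).
LIE_ITEMS = ["never_stolen_anything", "always_admit_mistakes"]

def lie_score(reply):
    """Count agreement with implausibly virtuous statements."""
    return sum(reply[item] for item in LIE_ITEMS)

def flag_social_desirability(replies, threshold=2):
    """Split replies into usable ones and ones to interpret with caution."""
    usable, flagged = [], []
    for reply in replies:
        (flagged if lie_score(reply) >= threshold else usable).append(reply)
    return usable, flagged

# Two example respondents; "q1" stands in for a substantive survey item.
replies = [
    {"never_stolen_anything": 1, "always_admit_mistakes": 1, "q1": 4},
    {"never_stolen_anything": 0, "always_admit_mistakes": 1, "q1": 2},
]
usable, flagged = flag_social_desirability(replies)
```

The first respondent agrees with every virtuous item and is set aside; the second passes the screen. In a real study you would validate the items and threshold before relying on such a split.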
So, what now?
The overview below summarizes some of the pros and cons of the quadrants described above. If you’re about to start a new design project you could do a lot worse than using it as a first step in building your balanced user research portfolio.

Quadrant 1 (Solution focus & Open): direct and cheap access to what users say they need, but vulnerable to latent needs, wants versus needs, anchoring, and social desirability and conformity biases.
Quadrant 2 (Solution focus & Closed): tangible designs help users articulate even latent needs, but watch out for false positives, false negatives and artificial settings.
Quadrant 3 (Problem focus & Open): rich insight into problems, workarounds and context, but interpretation-heavy, resource-intensive and based on small samples.
Quadrant 4 (Problem focus & Closed): resource-efficient and quantitative, but blind to unasked questions and prone to framing and social desirability biases.
However, before you dive in, it’s worth noting that it’s not only about which methods you use but also when you use them. The most efficient strategy is probably to cast your net wide at first and then gradually zoom in on the solution. Starting with quadrant 3 (e.g. observations), and working your way to quadrant 2 (prototypes) via quadrant 4 (surveys), makes more sense than the other way around.
Missing an important requirement can make the difference between a successful design and one that fails in the market. So even if you’re already well underway, it wouldn’t hurt to double-check that you haven’t forgotten an essential angle.
After all, who anticipated the mug at the beginning of this paper to look like this?
References
Asch, S. E. (1956). Studies of independence and conformity: A minority of one against a
unanimous majority. Psychological Monographs, 70.
Carroll, J. M., Kellogg, W. A., and Rosson, M.B. (1991). The Task-Artifact Cycle. In:
Carroll, John M. (ed.). Designing Interaction: Psychology at the Human-Computer
Interface. Cambridge, UK: Cambridge University Press.
Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of
psychopathology. Journal of Consulting Psychology, 24, 349-354.
Gustafsson, A., Kristensson, P., & Witell, L. (2012). Customer co-creation in service innovation: a matter of communication? Journal of Service Management, 23(3), 311-327.
Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7(1), 77-91.
Norman, D.A. (1998). The invisible computer: why good products can fail, the personal
computer is so complex, and information appliances are the solution. Cambridge, MA:
The MIT Press.
Norman, D.A. (1988). The psychology of everyday things. New York: Basic Books.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458.
Wixon, D., Holtzblatt, K., & Knox, S. (1990). Contextual design: An emergent view of system design. In Proceedings of CHI ’90: Conference on Human Factors in Computing Systems. Seattle, WA.