The Case for Moderation
1. The case for moderation
A view on pre-testing research
James Hurman
Head of Planning, Colenso BBDO, New Zealand
Author of The Case for Creativity
2. Disclaimer…
This isn't an attempt to persuade you to abandon pre-testing research.
It's a discussion about how best to interpret and apply the conclusions
and recommendations that come out of pre-testing research.
3. Why are great brands so skeptical of pre-testing?
"We don't ask consumers what they want. They don't know. Instead we apply our brain power to what they need and will want and make sure we're there, ready."
- Akio Morita, Sony founder
"We do no market research. We figure out what we want. And I think we're pretty good at having the right discipline to think through whether a lot of other people are going to want it too. That's what we get paid to do."
- Steve Jobs, former Apple CEO
"We never pretested anything we did at Nike, none of the ads. Dan Wieden (the founder of Nike's agency Wieden & Kennedy) and I had an agreement that as long as our hearts beat, we would never pretest a word of copy. It makes you dull. It makes you predictable."
- Scott Bedbury, Nike's former worldwide advertising director
"Geoff believes research is a blunt instrument that bludgeons good ideas to death. He was determined that 42 Below would never be subjected to pre-testing."
- Justine Troy on Geoff Ross, founder of 42 Below Vodka
4. We humans aren't great at picking successes…
"It's a great invention, but who would want to use it?"
- US President Rutherford Hayes
Six years later, as the telephone was transforming America…
"The Americans have need of the telephone, but we do not. We have plenty of messenger boys."
- British Post Office's Chief Engineer
5. We humans aren't great at picking successes…
"Everyone acquainted with the subject will recognize it (the light bulb) as a conspicuous failure."
- The President of Stevens Institute of Technology, notable for producing several Nobel Prize winners, 1880
6. We humans aren't great at picking successes…
"We don't like their sound, and guitar music is on the way out anyway."
- Decca Records' leading A&R man explaining his rejection of the Beatles, 1962
7. We humans aren't great at picking successes…
"The market researchers concluded that no other product had ever performed so poorly in consumer testing: the look, taste and mouth-feel were regarded as 'disgusting' and the idea that it 'stimulates mind and body' didn't persuade anyone that the taste was worth tolerating."
- Philip Graves on Red Bull in his book Consumer.ology
In the two decades that followed, Red Bull sold over three billion cans of its 'disgusting' drink, achieving sales of €2.6B.
Source: Consumer.ology by Philip Graves
8. We humans aren't great at picking successes…
"Americans aren't interested in Swedish vodka, with many people unaware of where Sweden even is."
- Conclusion of Manhattan's Carillon Importers' $80,000 Absolut Vodka pre-testing research project
Absolut went on to sell over 70 million litres of vodka to the US annually.
Source: Consumer.ology by Philip Graves
9. A history of pre-testing research
• Conceived in the 1950s by American psychologist Horace Schwerin
• He created a research product he called "persuasion testing" and sold it to advertisers as a way to measure the potential sales impact of an advertisement
• The method was analysed by university researchers in 1965 and found to be barely more reliable than flipping a coin.
Source: Excellence in Advertising: The IPA Guide to Best Practice by Leslie Butterfield, p17
10. A history of pre-testing research
• In the 1990s, Beecham (now GSK) carried out a long-term global review of advertising testing methods.
• They concluded: "It ought to be emphasised that no reliable pre-testing technique exists for assessing the sales effectiveness of a specific advertisement."
• In 2004, researchers from the London Business School noted that there was still "no evidence in the public domain that pre-testing is predictive."
Source: Why Pre-Testing is Obsolete by Tim Broadbent, published in Admap Magazine, October 2004
11. A history of pre-testing research
• In 2007, the UK's Institute of Practitioners in Advertising produced the largest-ever study of historical marketing effectiveness case studies (880 in total). They compared pre-tested campaigns with those that weren't pre-tested.
• "Beware of pre-testing", the study concluded. "If pre-testing really did lead to more effective campaigns, then one would expect cases that reported favourable pre-testing outcomes to show bigger effects than those that did not. In fact the reverse is true. Cases that reported favourable pre-testing results actually did significantly worse than those that did not."
• This suggests that the judgement of marketers is in fact more reliable than positive pre-testing outcomes.
Source: "Marketing in the Era of Accountability" by Les Binet & Peter Field
12. A history of pre-testing research
"I can never get a positive result. No matter how I cut the data, no matter how I stack the odds in favour of pretesting by doing fine cuts of the data, I can never get a result that says that pre-tested campaigns are more effective than non-pretested campaigns."
"If pre-testing really did work, we should at least get some positive correlations, but we only ever get negative ones. After a while you think, well, there's an obvious conclusion to draw from all of that."
- Peter Field, author of "Marketing in the Era of Accountability"
Source: Interview with the author
13. Equally, there are all sorts of pre-testing successes
Among others, Cadbury Gorilla and Old Spice flew through pre-testing, and went on to become highly effective campaigns.
It isn't that pre-testing always gets it wrong.
It's just very difficult to predict whether the pre-testing conclusions are right or wrong.
So why is pre-testing so fickle?
14. We are biased toward the familiar
American social psychologist Robert Zajonc studied what he called the "mere exposure effect" in the 1970s. His experiments showed that simply exposing subjects to a stimulus led them to rate it more positively than other, similar stimuli that had not been previously presented.
In one experiment, people were shown a random sample of squiggle drawings. Some time later, they were shown the same sample, but this time the squiggles were placed randomly among a selection of other, similar squiggles. The subjects were asked whether they could remember which of the squiggles were the ones they were previously shown. As you'd expect, they had a hard time with the exercise and rarely chose correctly. Then they were asked to show the interviewer which squiggles they preferred. They found this test considerably easier, and unbeknownst to them, chose the squiggles that they'd seen the first time around.
Zajonc's work concluded that people tend to develop a preference for things merely because they're familiar with them.
Source: "Affective Discrimination of Stimuli That Cannot Be Recognized", Kunst-Wilson & Zajonc, published in Science, Vol. 207
15. We tend to be wrong about what we think we want
Google asked customers how many results they wanted the search engine to throw back on the first page.
"Since conventional wisdom says more is always better, people naturally said 'more'. When Google tripled the number of results, however, it found that traffic actually declined. Not only did the results take a fraction of a second longer to load, but having more options led people to click on links that were less relevant. The respondents in Google's research didn't intentionally lead researchers down the wrong path; they just didn't understand the real-world implications of their choices."
- Steve McKee, BusinessWeek, 2010
16. Often things we dislike "grow on us"
There's an example of this effect that most of us are familiar with. Upon initially listening to a new album, we prefer certain songs to others. On subsequent listens this preference usually changes, and our long-term favourite songs tend not to be the ones we liked at first.
"It is easy to slip into the comfortable belief that 'I like it' comments after the first exposure of a new execution are a must. In researching Levi's executions over the years, it has become abundantly clear that such findings should be treated with extreme caution."
- Kirsty Fuller, Managing Director of RDS International Research, in 1995
Source: "Walking the creative tightrope: the research challenge", Kirsty Fuller, published in Admap Magazine, March 1995
17. Often things we dislike "grow on us"
e.g. Levi's "Swimmer"
"In pre-launch qualitative research, response from the consumer on first exposure to Swimmer was one of stunned silence. The hero's status was initially seen to be seriously undermined - he did little to earn his colours. Moreover the slow music did not have the immediate appeal of previous executions. Perhaps the most fitting description of response was disappointment. Swimmer broke the mould of the campaign to date, and consumers claimed not to like it."
"At this stage the weight of negative reactions was strong. Then two months after airing, research uncovered a marked shift in response. Swimmer had become a talking point: new, different, challenging. A further four months later and Swimmer was being widely described as one of the best ever Levi's ads, destined to live among the greats, such as the universally acclaimed Launderette."
"Research must therefore seek to evaluate the potential of an execution, not its immediate impact on one viewing. Challenging advertising is not necessarily either immediately liked or fully understood. It may, however, be rich and long-lasting."
18. So…
• A long history of pre-testing research being studied and found unreliable
• Pre-testing often gets it right, but it's very difficult to predict when that will be the case
• Pre-testing is hampered by a few inconvenient realities…
• We're biased toward the familiar, not the effective
• We often think (and will report) we want things that we actually don't
• Pre-testing only offers a "first impression", whereas new ideas tend to "grow on us"
• The numbers suggest that marketers' judgment is more reliable than positive pre-testing outcomes
19. The case for moderation
Alcohol has positive and negative effects.
When we use it moderately, it's great.
When we use it immoderately, we get into trouble.
Pre-testing is the same.
It's useful when used as part of a wider decision and development process.
But when used "immoderately" - as a decision-maker - it's dangerous.
Let's be moderate in how we use the outcomes from our pre-testing research.