18. Analyzing the Data
• Script to evaluate the data
• Calculate level of significance for each day
• Visualization:
– Ad 1 reaches statistical significance (95%)
– Ad 2 reaches statistical significance (95%)
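The evaluation script itself is not shown in the deck; a minimal sketch of the per-day significance calculation as a standard two-proportion z-test on CTRs (the function name and the 95% threshold are my assumptions; the totals in the demo call are the slide-30 numbers):

```python
from math import erf, sqrt

def ctr_significance(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided two-proportion z-test on CTRs.
    Returns the confidence level (0..1) at which the CTRs differ."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    return erf(z / sqrt(2))  # P(|Z| < z) under the null hypothesis

# Totals from slide 30: Ad 1 = 200 clicks / 2,000 imps, Ad 2 = 240 / 3,000
print(ctr_significance(200, 2000, 240, 3000) >= 0.95)  # True (about 98.5% confidence)
```

Running this once per day over the accumulated totals gives the "level of significance for each day" that the visualization plots.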
19. The Result (a small part)
• Chart of running test results, with individual ads labeled "statistically significant", "no longer significant", "still significant…", and "waiting for significance"
22. Results
• Most tests reached a significance level of 95% at some point
Minimum total impressions   Tests reaching significance   Still significant at the end
1,000                       55%                           13%
10,000                      62%                           12%
100,000                     81%                           11%
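The gap between "significant at some point" and "still significant at the end" is the classic peeking problem. A small A/A simulation (all parameters hypothetical) shows how checking two identical ads against a fixed 95% threshold every day inflates the "ever significant" rate, even though there is no real difference:

```python
import random
from math import erf, sqrt

def aa_test(days=20, imps_per_day=200, ctr=0.05, seed=0):
    """A/A test: two identical ads, significance checked after every day.
    Returns (significant at any point, significant at the end)."""
    rng = random.Random(seed)
    ca = cb = imps = 0
    ever = sig = False
    for _ in range(days):
        imps += imps_per_day
        ca += sum(rng.random() < ctr for _ in range(imps_per_day))
        cb += sum(rng.random() < ctr for _ in range(imps_per_day))
        pool = (ca + cb) / (2 * imps)
        se = sqrt(pool * (1 - pool) * 2 / imps) if 0 < pool < 1 else 0.0
        z = abs(ca - cb) / imps / se if se else 0.0
        sig = erf(z / sqrt(2)) >= 0.95  # daily "peek" at a 95% threshold
        ever = ever or sig
    return ever, sig

runs = [aa_test(seed=s) for s in range(200)]
ever_rate = sum(e for e, _ in runs) / len(runs)
end_rate = sum(s for _, s in runs) / len(runs)
print(ever_rate, end_rate)  # "ever significant" far exceeds "significant at the end"
```

The same mechanism explains the table: the more impressions (and the more peeks) a test accumulates, the more likely it crosses 95% at least once by chance.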
30. Segmented by Network
                    Impressions   Clicks   CTR
Ad 1                      2,000      200   10%
  Google                  1,000      180   18%
  Search Partners         1,000       20    2%
Ad 2                      3,000      240    8%
  Google                  1,000      220   22%
  Search Partners         2,000       20    1%
31. Also possible…
                    Impressions   Clicks   CTR
Ad 1                      2,000      200   10%
  Google                  1,000      180   18%
  Search Partners         1,000       20    2%
Ad 2                      3,000      240    8%
  Google                  1,000      220   22%
  Search Partners         2,000       20    1%
32. Also possible…
                    Impressions   Clicks   CTR
Ad 1                      2,000      200   10%
  Google                  1,000      180   18%
  Search Partners         1,000       20    2%
Ad 2                      3,000      270    9%
  Google                  1,000      220   22%
  Search Partners         2,000       50  2.5%
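This is Simpson's paradox: with the slide-32 numbers, Ad 1 wins on overall CTR while Ad 2 wins inside every network segment. A quick check (the helper and variable names are mine):

```python
# (impressions, clicks) per ad and network, from slide 32
data = {
    "Ad 1": {"Google": (1000, 180), "Search Partners": (1000, 20)},
    "Ad 2": {"Google": (1000, 220), "Search Partners": (2000, 50)},
}

def ctr(imps, clicks):
    return clicks / imps

# Overall CTR: Ad 1 wins (10% vs 9%)...
overall_winner = max(data, key=lambda ad: ctr(
    sum(i for i, _ in data[ad].values()),
    sum(c for _, c in data[ad].values())))

# ...yet Ad 2 wins within each individual network
segment_winners = {
    net: max(data, key=lambda ad: ctr(*data[ad][net]))
    for net in ("Google", "Search Partners")
}
print(overall_winner, segment_winners)
```

The mix of impressions across networks (Ad 2 gets twice as many low-CTR Search Partners impressions) is what flips the overall result.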
33. How common is this?
Based on a study of 6,500 ad pairs, compared with an AdWords Script
• Overall winner loses on Google: 32.74%
• Overall winner loses on Google & Search Partners: 12.23%
36. Same thing with slots
Based on a study of 6,500 ad pairs, compared with an AdWords Script
• Overall winner loses in the top slot: 18.46%
• Overall winner loses in the top & other slots: 6.30%
49. CTR: The position feedback
• Positive feedback in the diagram: higher CTR → higher Quality Score → better ad rank in the ad auction → better position → higher CTR
• No loop in practice: position effects do not affect QS
51. The impressions feedback
• Negative feedback in the diagram: higher CTR → higher Quality Score → more (less relevant) impressions from the ad auction → lower CTR
• No loop in practice: low-relevance impressions are evaluated separately
55. The AdWords business model
• Sell ad clicks: "How much would you give us if we gave you the click?"
• Sell ad impressions: "How much would you give us if we showed your ad?"
• Advertisers want clicks, but Google has no control over clicks, so it converts the click bids
56. The ad auction
• Ad Rank = CPC × Quality Score
•         = CPC × CTR
•         = (Cost / Clicks) × (Clicks / Impressions)
•         = Cost / Impressions
•         = "How much would you give us if we showed your ad?"
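A toy numeric instance of the derivation above (all figures hypothetical): a 2.00 cost-per-click bid with a 5% CTR is worth 0.10 per impression to Google, exactly the "how much if we showed your ad" quantity:

```python
cpc = 2.00           # advertiser's bid: cost per click
ctr = 0.05           # clicks per impression (the Quality Score stand-in)
ad_rank = cpc * ctr  # (cost/click) * (clicks/impression) = cost per impression
print(round(ad_rank, 2))  # 0.1
```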
60. Example: Location Context
• Are they out or at home?
• Are they moving or standing still?
• Are they at a familiar place?
61. Example: Search History
• Have they searched for this before?
• Did they interact with ads?
• Did they interact with organic results?
• Have they seen our ad before?
62. Example: Personality
• Do they take their time to read the entire ad?
• How do they respond to
– discounts
– reassurances
– testimonials
65. New Mindset
• You don't have control over ad testing. Let it go.
• There can be multiple winners.
• Use Google's optimized ad rotation by default.
66. Let the Machines Do Their Job
• Google is well motivated
• Google is really good with data and algorithms[citation needed]
• Let Google decide which ads to show
67. Provide Meaningful Ad Variations
• Create ads for client personas
• Try out different USPs
• Big stuff first
69. Keep an Eye on the Machines
• If necessary, force data collection
• Rotate at the ad group level
• Consider the cost of even rotation
• Alternative: Add the ad again
70. To Sum Up…
• No more micromanaging ad tests
• Focus on messaging and supervising the machines