1. @bloomarty Debunking Ad Testing Martin Röttgerding, Head of SEM, Bloofusion Germany. Voted "Best Overall Heroconf Presentation" – THANK YOU!!!
2. @bloomarty Learner Outcomes • Ad testing best practices don't work • What actually happens when you follow best practices • What you should be doing instead
3. @bloomarty Additional Slide • To explain what was said in the presentation, I added a few slides like this one • I plan to write about all of this in greater detail on my blog, PPC-Epiphany.com • Additional data will also be covered on my blog • … hopefully soon …
4. @bloomarty Who's Talking? Martin Röttgerding • Head of SEM @ Bloofusion Germany • Blogger @ PPC-Epiphany.com • Judge @ SEMY (German Search Awards) • Dad @ Home
5. @bloomarty First off: No Worries • I’m not a mathematician. • This is not a math lecture. • I brought some data, though. iStock.com/digitalgenetics
6. @bloomarty In the Beginning: Optimize for Clicks • Ad 1: 96% served, 12,323 impressions, 594 clicks, 4.82% CTR • Ad 2: 4% served, 536 impressions, 37 clicks, 6.90% CTR
7. @bloomarty Alternative Approaches • Gut feeling • Rules ("wait 100 clicks, then decide") • Statistical significance (usually: 95%)
8. Statistical Significance in a Nutshell [Chart: overlapping CTR probability curves for Ad 1 and Ad 2] Ad 1: 200 impressions, 10 clicks; Ad 2: 200 impressions, 20 clicks
11. Statistical Significance in a Nutshell [Chart: with ten times the data, the curves clearly separate] Ad 1: 2,000 impressions, 100 clicks; Ad 2: 2,000 impressions, 200 clicks
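In code terms, the significance check sketched in these slides is usually a two-proportion z-test under a normal approximation. A minimal sketch (the function name and the normal approximation are my choices, not anything from the talk), using the impression and click numbers from the charts above:

```python
from math import erf, sqrt

def significance(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided two-proportion z-test: confidence that two CTRs differ."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = abs(p_a - p_b) / se
    return erf(z / sqrt(2))  # confidence level, compare against e.g. 0.95

# 200 impressions each, 10 vs. 20 clicks: a big relative CTR gap,
# but not yet enough data for 95% confidence
print(f"{significance(10, 200, 20, 200):.1%}")

# Ten times the data with the same CTRs: clearly significant
print(f"{significance(100, 2000, 200, 2000):.1%}")
```

With the same 5% vs. 10% CTRs, only the amount of data changes the verdict, which is exactly what the two charts illustrate.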
12. @bloomarty Pros and Cons • You have the power. • You know what to do. • You look good. • It doesn't work.
13. Waiting for Significance Problem #1
14. @bloomarty
15. @bloomarty
17. @bloomarty Is it significant yet? no no no no no no no no no yes "We are 95% certain that this ad is better than the other one."
18. @bloomarty An Experiment • 576 ad pairs • Rotated evenly • Left untouched for 12 months
19. @bloomarty Analyzing the Data • Script to evaluate the data • Calculate level of significance for each day • Visualization: – Ad 1 reaches statistical significance (95%) – Ad 2 reaches statistical significance (95%)
20. @bloomarty The Result (a small part) [Chart annotations: statistically significant / no longer significant / still significant… / waiting for significance]
21. @bloomarty The Result (zoomed out)
22. @bloomarty The Result (zoomed out)
23. @bloomarty Results • Most tests reached a significance level of 95% at some point • ≥ 1,000 total impressions: 55% reached significance, 13% still significant in the end • ≥ 10,000: 62% reached, 12% still significant • ≥ 100,000: 81% reached, 11% still significant
24. @bloomarty Additional Slide: The Twist • These were actually 576 A/A tests • With enough time and data you can almost always find a point where one ad is significantly better than the other • Just don't take "no" for an answer and stop when you get the "yes" • … which is precisely our industry's best practice approach to ad testing
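The twist is easy to reproduce in simulation: run A/A tests (two "ads" with the identical true CTR), re-check significance after every batch of impressions, and stop at the first "yes". Everything below (the 5% CTR, batch size, horizon, trial count, seed) is an illustrative assumption; the significance check is a plain two-proportion z-test:

```python
import random
from math import erf, sqrt

def is_significant(ca, na, cb, nb, level=0.95):
    """Two-proportion z-test at the given confidence level."""
    p = (ca + cb) / (na + nb)
    if p in (0.0, 1.0):
        return False
    se = sqrt(p * (1 - p) * (1 / na + 1 / nb))
    z = abs(ca / na - cb / nb) / se
    return erf(z / sqrt(2)) >= level

def aa_test(rng, true_ctr=0.05, batch=200, batches=150):
    """Two identical ads. Return True if we ever declare a 'winner'."""
    ca = cb = n = 0
    for _ in range(batches):
        ca += sum(rng.random() < true_ctr for _ in range(batch))
        cb += sum(rng.random() < true_ctr for _ in range(batch))
        n += batch
        if is_significant(ca, n, cb, n):
            return True  # "best practice": stop here and pick the winner
    return False

rng = random.Random(42)
trials = 100
false_winners = sum(aa_test(rng) for _ in range(trials))
print(f"{false_winners / trials:.0%} of A/A tests produced a 'significant' winner")
```

Although the 95% level suggests a 5% false-positive rate, stopping at the first significant peek inflates it severely; that is the 576-A/A-tests result in miniature.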
25. Not Looking At All The Pieces Problem #2
26. @bloomarty So this is what we see … • Ad 1: 2,000 impressions, 200 clicks, 10% CTR • Ad 2: 3,000 impressions, 240 clicks, 8% CTR
27. @bloomarty How we tend to think of CTR [Diagram: CTR ← Your Ad]
28. @bloomarty What drives CTR? [Diagram: CTR ← Ad, Network (Google vs. Search Partners)]
29. @bloomarty Search Partners?
30. Search Partners
31. @bloomarty Additional Slide • You can actually target search partner sites like eBay • For how to do this, read https://www.ppc-epiphany.com/2012/04/02/targeting-search-partners/ • However, CTR and conversion rate on sites like eBay are abysmal, so this probably won't be worth your time • Search partners are often profitable, but your priority in ad testing should be Google
32. @bloomarty Segmented by Network • Ad 1: 2,000 impressions, 200 clicks, 10% CTR (Google: ?; Search Partners: ?) • Ad 2: 3,000 impressions, 240 clicks, 8% CTR (Google: ?; Search Partners: ?)
33. @bloomarty Segmented by Network • Ad 1: 2,000 impressions, 200 clicks, 10% CTR (Google: 1,000 / 180 / 18%; Search Partners: 1,000 / 20 / 2%) • Ad 2: 3,000 impressions, 240 clicks, 8% CTR (Google: 1,000 / 220 / 22%; Search Partners: 2,000 / 20 / 1%)
34. @bloomarty Also possible … • Ad 1: 2,000 impressions, 200 clicks, 10% CTR (Google: 1,000 / 180 / 18%; Search Partners: 1,000 / 20 / 2%) • Ad 2: 3,000 impressions, 240 clicks, 8% CTR (Google: 1,000 / 220 / 22%; Search Partners: 2,000 / 20 / 1%)
35. @bloomarty Also possible … • Ad 1: 2,000 impressions, 200 clicks, 10% CTR (Google: 1,000 / 180 / 18%; Search Partners: 1,000 / 20 / 2%) • Ad 2: 3,000 impressions, 270 clicks, 9% CTR (Google: 1,000 / 220 / 22%; Search Partners: 2,000 / 50 / 2.5%)
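The "also possible" table above is Simpson's paradox: Ad 2 can win on Google and on the search partners and still lose overall, because more of its impressions come from the low-CTR partner segment. A quick check with the numbers from the last table:

```python
def ctr(imps, clicks):
    return clicks / imps

# (impressions, clicks) per network, from the last table above
ad1 = {"google": (1000, 180), "partners": (1000, 20)}
ad2 = {"google": (1000, 220), "partners": (2000, 50)}

for seg in ("google", "partners"):
    print(seg, f"{ctr(*ad1[seg]):.1%} vs {ctr(*ad2[seg]):.1%}")
# Ad 2 wins both segments: 22% vs 18% on Google, 2.5% vs 2% on partners

def overall(ad):
    imps = sum(i for i, _ in ad.values())
    clicks = sum(c for _, c in ad.values())
    return ctr(imps, clicks)

print(f"overall: {overall(ad1):.0%} vs {overall(ad2):.0%}")
# ...yet Ad 1 'wins' overall, 10% vs 9%, purely through the traffic mix
```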
36. @bloomarty How common is this? Based on a study of 6,500 ad pairs, analyzed with an AdWords Script: • Overall winner loses on Google: 32.74% • Overall winner loses on both Google & Search Partners: 12.23%
37. @bloomarty Quick Win: Ignore Search Partners • Well…
38. @bloomarty What drives CTR? [Diagram: CTR ← Ad, Network (Google vs. Search Partners), Slot (top vs. other)]
39. @bloomarty Same Thing with Slots • Overall winner loses in the top slot: 18.46% • Overall winner loses on both top & other: 6.30% Based on a study of 6,500 ad pairs, analyzed with an AdWords Script
40. @bloomarty What drives CTR? [Diagram: CTR ← Ad, Device, Ad position, Network (Google vs. Search Partners), Slot (top vs. other)]
41. @bloomarty CTR by avg. Position (Google top) [Chart: CTR falls smoothly from about 18% near avg. position 1.0 to about 10% near 4.0]
42. @bloomarty Exact Positions' CTRs (Google top) [Chart: exact-position CTRs are discrete steps: pos. 1: 18.05%, pos. 2: 13.64%, pos. 3: 12.05%, pos. 4: 10.09%]
43. @bloomarty Interpolated CTRs [Chart: interpolating between the exact-position CTRs reproduces the smooth avg.-position curve]
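Why the average-position curve looks smoother than the exact-position CTRs: an ad with an average position of, say, 1.5 mixes impressions from several exact positions, so its observed CTR is an impression-weighted blend. A toy illustration using the exact-position CTRs from the chart (the 50/50 split is my assumption, not data from the talk):

```python
# Exact-position CTRs read off the chart above (Google top)
exact_ctr = {1: 0.1805, 2: 0.1364, 3: 0.1205, 4: 0.1009}

def blended(share_by_pos):
    """Average position and observed CTR for a mix of exact positions."""
    total = sum(share_by_pos.values())
    avg_pos = sum(pos * share for pos, share in share_by_pos.items()) / total
    mixed_ctr = sum(exact_ctr[pos] * share
                    for pos, share in share_by_pos.items()) / total
    return avg_pos, mixed_ctr

# Half the impressions at position 1, half at position 2:
avg_pos, mixed_ctr = blended({1: 0.5, 2: 0.5})
print(avg_pos, f"{mixed_ctr:.2%}")  # avg. position 1.5, blended CTR ~15.8%
```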
44. @bloomarty What Happens at the Impression Level … • Most impression-level data is hidden from us • How would you assess the following scenarios?
48. @bloomarty Additional Slide • What did these three scenarios say about the performance of your ad? • Actually, don't bother • As search marketers, all we see is what's on the next slide
49. @bloomarty How would you evaluate these? • 3 impressions • 2 clicks • CTR: 67%
50. Ignoring Feedback Problem #3
52. @bloomarty The Position Feedback • Positive feedback: Higher CTR → Higher Quality Score → Better Position → Higher CTR • No loop: position effects do not affect QS [Diagram: Ad → Ad Auction → Ad ranking → Ad position → CTR]
53. @bloomarty The Impressions Feedback
54. @bloomarty Additional Slide • I probably shouldn't repeat what I said there ...
55. @bloomarty The Impressions Feedback • Negative feedback: Higher CTR → Higher Quality Score → More less-relevant impressions → Lower CTR • No loop: low-relevance impressions are evaluated separately [Diagram: Ad → Ad Auction → CTR]
57. Thinking You Are Better Motivated Than Google Problem #4
59. @bloomarty The AdWords Business Model • Advertisers want clicks, but Google has no control over clicks • Sell ad clicks: "How much would you give us if we gave you the click?" • Sell ad impressions: "How much would you give us if we showed your ad?" • Solution: convert click bids into impression bids
60. @bloomarty The Ad Auction • Ad Rank = CPC × Quality Score = CPC × CTR = (Cost / Clicks) × (Clicks / Impressions) = Cost / Impression = "How much would you give us if we showed your ad?"
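The chain of equalities on this slide is just unit cancellation: a per-click bid times clicks-per-impression is a per-impression value. A tiny numeric check, with all numbers made up for illustration:

```python
cpc_bid = 2.00   # illustrative max CPC bid, $ per click
ctr = 0.05       # Quality Score modeled as CTR, clicks per impression

ad_rank = cpc_bid * ctr   # $ per impression
print(ad_rank)            # the ad is 'worth' ten cents per impression to Google

# Same value from the long form: (Cost / Clicks) * (Clicks / Impressions)
cost, clicks, impressions = 200.0, 100, 2000
assert abs((cost / clicks) * (clicks / impressions)
           - cost / impressions) < 1e-12
```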
61. @bloomarty Getting CTR wrong
62. @bloomarty Additional Slide • Quality Score is basically CTR • Google needs to know CTR to calculate cost-per-impression bids • Based on this, Google can maximize their revenue from showing ads • Getting CTR wrong means leaving money on the table – this adds up • Google is well-motivated to get CTR (and therefore ad testing) right
63. The Concept Itself Problem #5
64. @bloomarty Conflicting Mindsets • Have a dedicated ad for everything. • Find the best ad and use only that. [Diagram: a [keywords] ad group facing many unknown search situations]
65. @bloomarty Example: Search History • Have they searched for this before? • Did they interact with ads? • Did they interact with organic results? • Have they seen our ad before?
66. @bloomarty Example: Personality • Do they take their time to read the entire ad? • How do they respond to – discounts – reassurances – testimonials
67. @bloomarty Testing for Things We Can't See? • Manually: Impossible • Otherwise:
68. What Should We Do Instead? So…
69. @bloomarty New Mindset • You don't have control over ad testing. Let it go. • There can be multiple winners. • Use Google's optimized ad rotation by default.
70. @bloomarty Let the Machines Do Their Job • Google is well motivated • Google is really good with data and algorithms[citation needed] • Let Google decide which ads to show
71. @bloomarty Know Google's Limits • CTR & conversion rate • Historical data • Semantics
72. @bloomarty Know Your Strengths • Provide meaningful ads • Focus on the big stuff
73. @bloomarty Keep an Eye on the Machines • If necessary, force data collection • Rotate at ad group level • Consider the cost of even rotation • Alternative: Add the ad again iStock.com/RichVintage
74. @bloomarty To Sum Up… • No more micromanaging ad tests • Focus on messaging and supervising the machines • Your job just became more interesting – congrats!
75. @bloomarty Thank You! • Agency Blog: Die Internetkapitäne • Advanced AdWords Blog: PPC-Epiphany.com @bloomarty Martin Röttgerding, Head of SEA, Bloofusion Germany GmbH, martin@bloofusion.de