From simple user segmentation to a complex interconnected system, we will follow Crazy Panda's progress in developing personalized monetization. The key message of the session is not only when and to whom to show offers, but also how to make them truly personal. Presentation delivered by Ivan Kozyev, Head of Analytics at Crazy Panda, at the 8th edition of GameCamp (www.GameCamp.io).
3. OUR PRODUCTS
⊳ World Poker Club (Casual Poker): 72M registrations
⊳ Stellar Age (MMO Strategy): 2.1M registrations
⊳ The Household (Social Farm): 30M registrations
⊳ Pirate Tales (Party Battler): 1.4M registrations
9. Personalization Basics
User segmentation
Segmentation through the user's payment behavior (four stages):
⊳ Converting non-payers into payers
⊳ Establishing a regular payment relationship
⊳ Maintaining the regular payers group
⊳ Returning churning payers
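The four stages above can be sketched as a simple segmentation rule. This is a minimal illustration, assuming a user is represented by their list of payment dates; the thresholds (30-day churn window, 3+ payments for "regular") are hypothetical, not Crazy Panda's actual values:

```python
from datetime import date, timedelta

def payment_segment(payments: list[date], today: date) -> str:
    """Assign a user to one of four payment-behavior segments.

    Thresholds here are illustrative assumptions, not the ones
    used in the presentation.
    """
    if not payments:
        return "non-payer"                 # goal: convert into payer
    churned = today - max(payments) > timedelta(days=30)
    if len(payments) >= 3 and not churned:
        return "regular payer"             # goal: keep the group healthy
    if churned:
        return "churning payer"            # goal: win them back
    return "new payer"                     # goal: establish a payment habit

today = date(2020, 6, 1)
print(payment_segment([], today))                                  # non-payer
print(payment_segment([date(2020, 5, 28)], today))                 # new payer
print(payment_segment([date(2020, 5, 1), date(2020, 5, 10),
                       date(2020, 5, 25)], today))                 # regular payer
print(payment_segment([date(2020, 1, 5)], today))                  # churning payer
```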
10. Theory, p.3
AB testing
[Diagram: the audience is split into a control group and a test group; only the test group receives the change]
Some mistakes in AB testing:
⊳ Not starting with a hypothesis
⊳ Stopping the test too early
⊳ Stopping the test too late
⊳ Measuring the wrong metric
⊳ Not taking multiple testing into account
⊳ etc.
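The "multiple testing" pitfall can be sketched numerically: with many tests per year, comparing every p-value against the usual 0.05 inflates false positives, so the per-test threshold should be tightened. A minimal plain-Python Bonferroni correction (the p-values below are made up for illustration):

```python
def bonferroni_significant(p_values: list[float], alpha: float = 0.05) -> list[bool]:
    """Flag tests that stay significant after a Bonferroni correction.

    Each p-value is compared against alpha / m, where m is the number of
    tests in the family, keeping the family-wise error rate at alpha.
    """
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Ten hypothetical tests: three beat the naive 0.05 threshold, but only
# the first survives the corrected threshold of 0.05 / 10 = 0.005.
p_values = [0.001, 0.02, 0.04, 0.2, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
print(bonferroni_significant(p_values))
```

Bonferroni is the simplest (and most conservative) correction; less strict variants such as Holm's method exist for the same purpose.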
11. Advanced Techniques
Case 1: Small vs big offers
⊳ Conversion: 5.4% (small) vs 3.8% (big)
⊳ ARPPU*: $36 (small) vs $59 (big)
* - in the first 30 days
12. Advanced Techniques
Case 1: Small vs big offers
Hypothesis:
⊳ There will be more high-value payers for the bigger starting offer than for the smaller one
Conversion into a payer that spends at least $30*:
⊳ 1.9% (small) vs 2.8% (big)
* - in the first 30 days
13. Advanced Techniques
Case 1: Small vs big offers
Combining the small and big offers:
⊳ Conversion: 5.5%
⊳ $30 payer conversion: 2.9%
⊳ ARPPU*: $60
* - in the first 30 days
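One way to read the three variants is expected 30-day revenue per new user (conversion × ARPPU), which makes the win of the combined offer concrete. The arithmetic below uses only the numbers from the slides:

```python
# Expected 30-day revenue per new user = conversion rate x ARPPU
variants = {
    "small offer": (0.054, 36),
    "big offer":   (0.038, 59),
    "small + big": (0.055, 60),
}
for name, (conversion, arppu) in variants.items():
    print(f"{name}: ${conversion * arppu:.2f} per new user")
```

The small offer yields about $1.94 per new user, the big offer about $2.24, and the combination about $3.30.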
14. Advanced Techniques
Case 2: Bad offers
Starting conditions (payers group "lambda"):
⊳ Share of offers bought: 0.5%
⊳ Offer bonus 30% less
Solution:
⊳ Increasing the offer bonus by 30%
AB test summary:
⊳ Share of offers bought: 22%
⊳ Expected project revenue uplift: 3%
15. Advanced Techniques
Case 2: Bad offers
Investigating the case (payers group "lambda" vs the next group):
⊳ The next offer was 1.8 times bigger
⊳ Payments shifted towards smaller offers
⊳ Total revenue loss was about 44% for those users
Tests that followed:
⊳ Increasing lambda offer prices
⊳ Decreasing next-group offer prices
⊳ Increasing next-group offer bonuses
⊳ Changing the content of lambda offers
⊳ etc.
16. Advanced Techniques
Case 2: Bad offers
Bad offer upsides:
⊳ Making other offers more desirable
⊳ Can lead to more or bigger payments
⊳ Skipping the loop of constantly increasing offer value
⊳ Can be used as part of a double offer
17. Advanced Techniques
Case 3: Machine learning
⊳ Set up a goal
⊳ Check historical data
⊳ Prepare dataset
⊳ Some magic happens
⊳ And the model is done
⊳ Now, implement it
⊳ And, finally, test it
18. Advanced Techniques
Case 3: Machine learning
Predict which currency the user needs the most:
[Diagram: CatBoost with special data handling chooses between offer types, e.g. a hard currency exchange]
Main issues:
⊳ Setting the correct target (there were no soft currency offers)
⊳ Making the model very fast (less than 5 s to collect data and make a prediction)
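The "setting the correct target" issue can be sketched as a labeling rule: with no historical soft-currency offers to learn from, a training label has to be derived from behavior instead. The proxy below is entirely hypothetical (not Crazy Panda's actual rule) and only illustrates the shape of the problem:

```python
def currency_target(soft_spent: int, hard_spent: int, soft_balance: int) -> str:
    """Hypothetical proxy label: which currency does this user need most?

    With no soft-currency offer history, we guess from behavior: heavy
    soft-currency spenders with a low balance are labeled as needing
    soft currency; everyone else defaults to hard currency.
    """
    if soft_spent > hard_spent and soft_balance < 100:
        return "soft"
    return "hard"

print(currency_target(soft_spent=5000, hard_spent=200, soft_balance=40))  # soft
print(currency_target(soft_spent=100, hard_spent=900, soft_balance=40))   # hard
```

Labels like these would then feed a classifier such as CatBoost, with the remaining constraint being the latency budget for collecting features and predicting.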
19. Closing Words
Summary
⊳ Personalization is awesome: up to 80% of revenue comes from personalized offers
⊳ Testing is a must: 150+ AB tests per year just for World Poker Club
⊳ Tests will fail: 42% successful test rate; 14% of tests yield 77% of the effect
⊳ ML is challenging: only the 3rd model proved to be profitable
20. THANK YOU FOR YOUR ATTENTION!
ANY QUESTIONS?
IVAN KOZYEV
Head of Analytics at Crazy Panda
i.kozyev@crazypanda.ru
telegram: @IvanKozyev
https://www.linkedin.com/in/ivan-kozyev/