Fabio Mora - http://fabiomora.com
PHP User Group Milano / 21 June 2017, 19.30
In large-scale web applications, designing, drafting, building and validating the value of a feature is a process that requires attention and care. Developers risk tech debt, product managers need to learn fast and get feedback on test results, users need an available site and a consistent UX.
In a practical way I’ll present one of the many ways to balance all stakeholder needs. The mix includes Feature Branching, Feature Toggling, a PHP solution to control rollouts (by switch, percentage or condition), plus monitoring and telemetry with an analytics platform and a backend time-series database. All together with some very quick A/B testing statistical background. Plus: a quick overview of lessons learned.
5. Different roles, different needs
Product Owner
● Give users features
● Validate ideas
● Short feedback loop
● Design right prototypes
Software Engineer
● Build cheap prototypes
● Put dev time in the right features
● Keep tech debt low
User
● Have the site available
● Use a coherent service
● See interesting features
7. Trade-off? Or even make the cheapest thing...
https://martinfowler.com/bliki/DesignStaminaHypothesis.html
“If the functionality for your initial release is below the design payoff line, then it may be worth trading off design quality for speed; but...”
9. A look at the metrics using Google Analytics
● Google Analytics out of the box
○ Sessions, Page Views, Users, Bounce Rate, Session Duration…
■ Reports: Audience, Overview. Then “Select a metric” section
○ Custom Events
■ Reports: Behaviour, Events, Overview. Then browse by “Event Action”
● Click-Through Rate (CTR) definition might help us
○ Ratio of users, out of the whole traffic, who click on a specific Call To Action (CTA)
○ CTR = Clicks / Page Views * 100
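The CTR formula above translates directly into code; a minimal sketch (the function name and sample figures are illustrative):

```php
<?php

/**
 * Click-Through Rate: clicks on a CTA as a percentage of page views.
 */
function clickThroughRate(int $clicks, int $pageViews): float
{
    if ($pageViews === 0) {
        return 0.0;
    }

    return $clicks / $pageViews * 100;
}

// e.g. 42 clicks over 1200 page views → 3.5% CTR
echo clickThroughRate(42, 1200); // 3.5
```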
15. A really really simple way to A/B test
A {0, 1} random toggle and an X-Feature HTTP header
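A sketch of that idea in a plain PHP front controller — the X-Feature header name is from the slide, while the variant labels are illustrative:

```php
<?php

/**
 * Draw a {0, 1} Bernoulli toggle: 0 → control (A), 1 → variant (B).
 * The "new-checkout" variant name is illustrative.
 */
function assignVariant(): string
{
    return random_int(0, 1) === 1 ? 'new-checkout' : 'control';
}

$variant = assignVariant();

// Expose the assignment as an X-Feature HTTP header so it is
// visible in logs, proxies and browser dev tools.
header('X-Feature: ' . $variant);

if ($variant === 'new-checkout') {
    // render the new variant
} else {
    // render the current behaviour
}
```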
16. A really really simple way to A/B test
A Template Engine condition and GA events
17. A really really simple way to A/B test
50% vs 50% ...test it!
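One way the two slides above can fit together, assuming the classic analytics.js `ga('send', 'event', …)` call (the experiment name, variant labels and markup are illustrative):

```php
<?php

// $variant would come from the 50/50 toggle of the previous slide;
// here we draw one inline.
$variant = random_int(0, 1) === 1 ? 'B' : 'A';

// Template-engine condition: render a different CTA per variant.
$cta = $variant === 'B'
    ? '<button class="cta cta--new">Try the new checkout</button>'
    : '<button class="cta">Checkout</button>';

// GA custom event (Category / Action / Label), so each exposure
// shows up under Reports: Behaviour, Events, browsable by "Event Action".
$tracking = "<script>ga('send', 'event', 'experiment', 'view', 'checkout-{$variant}');</script>";

echo $cta . "\n" . $tracking;
```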
18. Going further with A/B Testing
| Unit of Division | Example            | Pro                       | Cons                                  |
|------------------|--------------------|---------------------------|---------------------------------------|
| Event            | Page View          | Simplicity, randomness    | UX inconsistent                       |
| Fingerprint      | PHPSESSID, cookies | UX consistent by channel  | Low randomness                        |
| Condition        | User login         | UX consistent             | Low randomness, not always applicable |
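A sketch of the "Fingerprint" row: hashing a stable identifier such as the PHPSESSID cookie into a 0–99 bucket gives each visitor a sticky assignment (the function names are illustrative):

```php
<?php

/**
 * Map a stable fingerprint (e.g. the PHPSESSID cookie) to a
 * bucket in [0, 100), so the same visitor always lands in the
 * same variant: UX consistent per channel, at the cost of less
 * randomness than a per-event coin flip.
 */
function bucketFor(string $fingerprint): int
{
    return hexdec(substr(md5($fingerprint), 0, 8)) % 100;
}

/** True when the fingerprint falls inside the rollout percentage. */
function inVariant(string $fingerprint, int $percentage): bool
{
    return bucketFor($fingerprint) < $percentage;
}

// A 50% vs 50% split, sticky per session id:
$sessionId = $_COOKIE['PHPSESSID'] ?? 'anonymous';
$variant = inVariant($sessionId, 50) ? 'B' : 'A';
```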
19. A/B Testing: key points
● Unit Of Division
○ Is the only thing we’re allowed to design
○ Independent, homogeneous, representative (of the whole traffic)
● Evaluation Metric
○ Depends on the feature (e.g. CTR)
○ Other: time on page, loading time, first byte (avg, median, quantiles)
○ Focus on a few metrics!
20. A/B Testing: key points
● Significant Improvement
○ Tangible improvement from business point of view
● Sample Size
○ Repeatability: statistical phenomenon, variability, robustness, multiple runs
○ Significance: regarding a probability model
21. Refer to Math
● A/B Testing
○ Bernoulli Distribution fits {0, 1} events
○ Parameters: confidence, power, sample size, tangible improvement
○ Here Statistical Confidence (interval) is close to Statistical Repeatability
○ http://www.evanmiller.org/ab-testing/sample-size.html
● A/B/C/D… Testing
○ Way more complex
○ https://en.wikipedia.org/wiki/Bonferroni_correction
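The sample size the linked calculator returns can be approximated with the standard two-proportion normal-approximation formula; a sketch with z-values hard-coded for 95% confidence and 80% power (the baseline and improvement figures are illustrative):

```php
<?php

/**
 * Per-variant sample size to detect an absolute lift from $p1 to $p2:
 * n = (z_alpha + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p2 - p1)^2
 */
function sampleSizePerVariant(float $p1, float $p2): int
{
    $zAlpha = 1.96; // two-sided, 95% confidence
    $zBeta  = 0.84; // 80% power

    $variance = $p1 * (1 - $p1) + $p2 * (1 - $p2);

    return (int) ceil(($zAlpha + $zBeta) ** 2 * $variance / (($p2 - $p1) ** 2));
}

// Baseline CTR 10%, tangible improvement to 12%:
echo sampleSizePerVariant(0.10, 0.12); // 3834 visitors per variant
```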
31. Feature Toggling
Switch
● Traditional on/off flag
● Helps you keep a feature in master
● Can be used as override
Percentage
● Useful for A/B tests
● Minimize load risks
Condition
● Very specific
● The “logged in” test
● Split users into clusters
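A minimal sketch of the three toggle flavours in one class — the class and method names are illustrative, not a specific library:

```php
<?php

/**
 * Three flavours of feature toggle from the slide:
 * switch (on/off), percentage (rollouts / A/B), condition (clusters).
 */
final class FeatureToggle
{
    /** Switch: a plain on/off flag, usable as an override from config. */
    public static function bySwitch(bool $enabled): bool
    {
        return $enabled;
    }

    /** Percentage: enabled for roughly $percentage% of checks. */
    public static function byPercentage(int $percentage): bool
    {
        return random_int(0, 99) < $percentage;
    }

    /** Condition: the "logged in" test, or any user-cluster rule. */
    public static function byCondition(callable $condition): bool
    {
        return (bool) $condition();
    }
}

// Usage:
$isLoggedIn = true; // would come from the session
$showBeta = FeatureToggle::bySwitch(true)
    && FeatureToggle::byCondition(fn () => $isLoggedIn);
```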
38. Key Learnings
● Track all the things!
● Trust statistics, don’t DIY
● Pay attention to team interactions
● Use a circuit breaker
● Define a fallback strategy
● Allocate cleanup time in advance
● Build packages, split things
● Handle errors properly
● Consider vendors (Mixpanel, Optimizely...)