Making informed decisions on which features to prioritize in a developer portal can be a daunting task. In this session, we'll show you how to leverage experiments, data, and user feedback to evaluate their potential and refine your approach. We'll explore how testing ideas with minimal investment, akin to an MVP, can help you avoid building features that don't meet your users' needs.
To optimize a product with analytics and feedback, we must follow practices that emphasize the importance of analytics and feedback.
Overview
• Unique challenge of opportunity for Dev Portals
• Side effects of having too many opportunities
• How might we use experiments to help us maintain good practices and better optimize the experience
Many Opportunities
Partner Integrations
Accounts
Blog
Catalog
Newsletter
Documentation
FAQ
Analytics
What’s New
Try it Now
Security
Community
Notifications
Support
Events
Search
Feedback
API Management
Forums
Reporting
Use Cases
Success Stories
Terms of Service
Payments
RBAC
Governance
Health & Status
About Us
Standards
SDKs
Tutorials
Policies
Getting Started
Dev Tools
Social Media
Release Notes
Roadmap
Pricing
Administration
Videos
Scalability
Reliability & Performance
Developer Experience
Product Pages
API Access
Test Automation
Frameworks
Dependencies
Database Management
Tech Debt
Insights
Best Practices
Design
Open Source
Monitoring & Alerting
Infrastructure
Accessibility
API Reference
Careers
Marketing
News & Press
Testing
Monetization
Prioritization
(The same opportunity list as above, now sorted into Now, Next & Later buckets.)
Side effects of many opportunities
• Creating a detailed long-term roadmap
• The habit of “checking things off the list”
• Not questioning return on investment
• Building to meet stakeholder requirements
• Focusing on outputs, not outcomes
• Less emphasis on analytics and feedback
Traditional vs. Modern Product Development
Feature Factory (Traditional) vs. Product Development (Modern)
What's the difference?
Foundation & Features
• Minimal uncertainty
• Starts with “We know by doing… we will…”
• May take weeks, months, or even years
• Typically checked off “the list” because they have less uncertainty
• Only revisited when impacted by another feature
Experiments
• Moderate-to-high uncertainty
• Starts with “We believe by doing… we expect…”
• Timeboxed to emphasize return on investment
• Helps us refine “the list” with data
• Structured to emphasize assessing outcomes, learning, and iteration
Example Experiment
Hypothesis
We believe that by providing use cases for our API Products, we will see an increase in active (consumer) apps.
Scope/Requirements
• Limit development and testing to 1 sprint
• Experiment will last 2 months (no changes to use cases)
• No automated testing required
• Accessible in top 20% of homepage
• Use cases must have CTAs
Key Measures
• Number of active apps
• Unique Views
• Time on Page
• CTA Engagement
Success Criteria
• 6% increase in active apps over previous 2 months
• 25% of new users engage with use cases
• Time on page = at least 70% of read time
• 30% of users click CTA
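As a sketch, the key measures above could be computed from raw page-view events. The event shape, field names, and the 30-second engagement threshold here are illustrative assumptions, not taken from any particular analytics tool.

```python
from statistics import mean

# Hypothetical page-view events for the use-cases page.
events = [
    {"user": "a", "new_user": True,  "seconds_on_page": 95, "clicked_cta": True},
    {"user": "b", "new_user": True,  "seconds_on_page": 10, "clicked_cta": False},
    {"user": "c", "new_user": False, "seconds_on_page": 60, "clicked_cta": True},
]

ESTIMATED_READ_TIME = 90  # seconds; e.g. derived from word count

# Unique Views: distinct users who viewed the page.
unique_views = len({e["user"] for e in events})

# Engagement: share of new users who spent at least 30 seconds on the page.
new_users = [e for e in events if e["new_user"]]
new_user_engagement = sum(e["seconds_on_page"] >= 30 for e in new_users) / len(new_users)

# Time on Page as a fraction of estimated read time, capped at 100% per view.
time_on_page_ratio = mean(
    min(e["seconds_on_page"] / ESTIMATED_READ_TIME, 1.0) for e in events
)

# CTA Engagement: share of viewers who clicked a call to action.
cta_click_rate = sum(e["clicked_cta"] for e in events) / len(events)
```

With measures defined this way up front, the success criteria become simple threshold checks rather than after-the-fact interpretation.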
Assessing Results
Success Criteria
• 6% increase in active apps over previous 2 months
• 25% of new users engage with use cases
• Time on page = at least 70% of read time
• 30% of users click CTA
Scenario 1
• 8% increase in active apps
• 60% of new users engage with use cases
• Time on page = 90%
• 50% of users click CTA
Experiment was successful – it is obvious that use cases contributed to the outcome.
Scenario 2
• 1% increase in active apps
• 8% of new users engage with use cases
• Time on page = 20%
• 5% of users click CTA
Experiment was unsuccessful – it is unlikely that use cases will have an impact. We should discuss removing use cases from the application to avoid a cluttered experience.
Scenario 3
• 3% increase in active apps
• 10% of new users engage with use cases
• Time on page = 100%
• 80% of users click CTA
Experiment did not achieve the expected results, but the strong engagement suggests that use cases could be effective. Consider experimenting with placement.
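The three assessments above amount to a simple rule check. A minimal sketch, where the thresholds come from the success criteria on the slide but the function name, result labels, and the pass/iterate/stop rule are hypothetical simplifications:

```python
# Success criteria thresholds (from the experiment slide).
CRITERIA = {
    "active_app_increase": 0.06,  # >= 6% increase in active apps
    "new_user_engagement": 0.25,  # >= 25% of new users engage
    "time_on_page_ratio": 0.70,   # >= 70% of estimated read time
    "cta_click_rate": 0.30,       # >= 30% of users click a CTA
}

def assess(results: dict) -> str:
    """Classify an experiment run against the success criteria."""
    passed = {k: results[k] >= threshold for k, threshold in CRITERIA.items()}
    if all(passed.values()):
        return "success"  # keep the feature (Scenario 1)
    # Strong engagement despite a weak outcome suggests iterating (Scenario 3).
    if passed["time_on_page_ratio"] and passed["cta_click_rate"]:
        return "iterate"  # e.g. experiment with placement
    return "stop"  # consider removing to avoid clutter (Scenario 2)

scenario_1 = {"active_app_increase": 0.08, "new_user_engagement": 0.60,
              "time_on_page_ratio": 0.90, "cta_click_rate": 0.50}
scenario_2 = {"active_app_increase": 0.01, "new_user_engagement": 0.08,
              "time_on_page_ratio": 0.20, "cta_click_rate": 0.05}
scenario_3 = {"active_app_increase": 0.03, "new_user_engagement": 0.10,
              "time_on_page_ratio": 1.00, "cta_click_rate": 0.80}
```

Encoding the decision rule before the experiment runs keeps the team honest: the verdict follows from the data rather than from post-hoc debate.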
Recommendations
• Determine how much of your time you can afford to spend on experimentation (20% or less to start)
• Bring the experimental mindset back into new feature development
• Always have multiple key measures (3-4 is typically best)
• Focus on experiments as a learning opportunity
• Embrace learning – avoid output as a measure of success
• Unsuccessful experiments are a success if you minimize investment and learn