App developers are constantly competing against each other to win more downloads for their apps. With hundreds of thousands of apps in these online stores, what strategy should a developer use to be successful? Should they innovate, make many similar apps, optimise their own apps or just copy the apps of others? Looking more deeply, how does a complex app ecosystem perform when developers choose to use different strategies? We investigate these questions using AppEco, the first Artificial Life model of mobile application ecosystems. In AppEco, developer agents build and upload apps to the app store; user agents browse the store and download the apps. A distinguishing feature of AppEco is the explicit modelling of apps as artefacts. In this work we use AppEco to simulate Apple’s iOS app ecosystem and investigate common developer strategies, evaluating them in terms of downloads received, app diversity, and adoption rate.
8. AppEco Model
[Diagram] A Developer builds and uploads an App to the App Store; the App is downloaded by a User.
9. Developer Agent
• represents a solo developer or a team of developers
• each developer records:
  • development duration
  • number of days taken
  • is_active?
  • number of apps developed
  • number of downloads received
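A minimal sketch of how such a developer agent could be represented (the field names below are illustrative, not taken from the AppEco source code):

```python
from dataclasses import dataclass

@dataclass
class DeveloperAgent:
    """Represents a solo developer or a team of developers (illustrative fields)."""
    dev_duration: int            # timesteps needed to build one app
    days_taken: int = 0          # timesteps spent on the current app so far
    is_active: bool = True       # whether the developer is still producing apps
    apps_developed: int = 0      # number of apps built and uploaded
    downloads_received: int = 0  # total downloads across all of the developer's apps
```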
11. App Artefact
• app features are abstracted as a 10x10 grid
• if a cell is filled, then the app offers that feature
• for ranking purposes, each app records:
  • total number of downloads received
  • number of downloads received on each of the previous 7 days
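A sketch of the app artefact under the same assumptions (class and method names are illustrative; only the 10x10 grid and the download statistics come from the slides):

```python
import numpy as np

GRID_SIZE = 10  # app features are abstracted as a 10x10 grid

class AppArtefact:
    """An app: a boolean feature grid plus the download statistics used for ranking."""
    def __init__(self, features: np.ndarray):
        assert features.shape == (GRID_SIZE, GRID_SIZE)
        self.features = features.astype(bool)  # True = the app offers that feature
        self.total_downloads = 0               # total number of downloads received
        self.daily_downloads = [0] * 7         # downloads on each of the previous 7 days

    def record_download(self) -> None:
        self.total_downloads += 1
        self.daily_downloads[-1] += 1          # add to today's count

    def roll_day(self) -> None:
        """Shift the 7-day window at the end of each timestep."""
        self.daily_downloads = self.daily_downloads[1:] + [0]
```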
12. Innovator
• App cells are filled probabilistically (each cell in the grid has a probability PFeat of being filled).
Milker
• If this is the developer's first app, the cells are filled probabilistically.
• Otherwise, the developer copies the features of his own latest app with mutation P(m) = 0.5.
Optimiser
• If this is the developer's first app, the cells are filled probabilistically.
• Otherwise, the developer copies the features of his own best app with mutation P(m) = 0.5.
Copycat
• An app is randomly selected from the Top Apps Chart and its features are copied with mutation P(m) = 0.5.
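A sketch of the four build rules, assuming the grids above; the fill probability value and the mutation operator (flip one randomly chosen cell with probability P(m)) are assumptions, since the slides only give PFeat and P(m) = 0.5:

```python
import numpy as np

GRID_SIZE = 10
P_FEAT = 0.05  # illustrative value for PFeat (not given on the slides)
P_MUT = 0.5    # mutation probability P(m) from the slides

def random_features(rng: np.random.Generator) -> np.ndarray:
    """Fill each cell independently with probability P_FEAT."""
    return rng.random((GRID_SIZE, GRID_SIZE)) < P_FEAT

def mutate(features: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Copy a grid and, with probability P_MUT, flip one randomly chosen cell
    (the exact mutation operator in AppEco may differ)."""
    new = features.copy()
    if rng.random() < P_MUT:
        r, c = rng.integers(GRID_SIZE, size=2)
        new[r, c] = not new[r, c]
    return new

def build_app(strategy, own_latest, own_best, top_chart, rng) -> np.ndarray:
    """Return a feature grid for a new app according to the developer's strategy."""
    if strategy == "innovator":
        return random_features(rng)                        # always fill probabilistically
    if strategy == "milker":
        return random_features(rng) if own_latest is None else mutate(own_latest, rng)
    if strategy == "optimiser":
        return random_features(rng) if own_best is None else mutate(own_best, rng)
    if strategy == "copycat":
        victim = top_chart[rng.integers(len(top_chart))]   # random Top Apps Chart entry
        return mutate(victim, rng)
    raise ValueError(f"unknown strategy: {strategy}")
```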
13. User Agent
• user preferences are abstracted as a 10x10 grid, P
• if a cell in P is filled, then the user agent desires the feature represented by that cell
• empty cells model features offered by apps that are undesirable to users
• each user records:
  • the apps it has downloaded
  • number of days between each browse
  • days_elapsed (to know when to browse the app store next)
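A corresponding sketch of the user agent (field names and the default browse interval are illustrative):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class UserAgent:
    """A user: a 10x10 grid of desired features plus browsing state."""
    preferences: np.ndarray                          # True = the user desires that feature
    downloaded: list = field(default_factory=list)   # apps the user has downloaded
    browse_interval: int = 7                         # days between each browse (illustrative)
    days_elapsed: int = 0                            # days since the last browse

    def should_browse_today(self) -> bool:
        return self.days_elapsed >= self.browse_interval
```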
14. [Diagram] App 1 matches the user's preferences in 4 out of 4 features; App 2 matches in only 2 out of 4. The user downloads App 1 but not App 2.
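A sketch of the download rule this example implies: any feature the app offers that the user does not desire blocks the download (the exact rule in AppEco may be stated differently):

```python
import numpy as np

def wants_to_download(user_prefs: np.ndarray, app_features: np.ndarray) -> bool:
    """Download only if every feature the app offers is desired by the user."""
    return bool(np.all(user_prefs[app_features]))

# Worked example matching the slide: App 1 matches 4/4, App 2 only 2/4.
user = np.zeros((10, 10), dtype=bool); user[0, :4] = True   # user desires 4 features
app1 = np.zeros((10, 10), dtype=bool); app1[0, :4] = True   # offers exactly those 4
app2 = np.zeros((10, 10), dtype=bool); app2[0, 2:6] = True  # 2 desired + 2 undesired
assert wants_to_download(user, app1) and not wants_to_download(user, app2)
```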
15. [Diagram] Real-world illustration: different users (User 1, User 2, Soo Ling) desire different apps (e.g., Angry Birds, the ICSE 2011 app).
16. App Store
[Diagram] The store presents apps via a Top Apps chart, a New Apps chart, and keyword search.
17. AppEco Algorithm
[Flowchart] Initialise the ecosystem, then loop for N timesteps: developer agents build and upload apps; user agents browse and download apps; the app store is updated; the agent population is increased. Exit after N timesteps.
1 timestep in AppEco = 1 day in the real world
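A sketch of this loop in code; the ecosystem object and the four stage callables are placeholders for the boxes in the flowchart:

```python
from typing import Any, Callable

def run_appeco(ecosystem: Any,
               build_and_upload: Callable[[Any], None],
               browse_and_download: Callable[[Any], None],
               update_store: Callable[[Any], None],
               grow_population: Callable[[Any], None],
               n_timesteps: int = 1080) -> Any:
    """Main AppEco loop: 1 timestep = 1 real-world day (1080 timesteps = 3 years)."""
    for day in range(n_timesteps):
        build_and_upload(ecosystem)     # developer agents build and upload apps
        browse_and_download(ecosystem)  # user agents browse the store and download apps
        update_store(ecosystem)         # refresh the Top Apps and New Apps charts
        grow_population(ecosystem)      # increase the developer and user populations
    return ecosystem
```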
18. Calibrating AppEco for iOS
[Chart] Number of iOS App Users (2008–2011). Source: Apple Press Release.
20. [Chart] Number of Apps and Downloads (2008–2011), annotated with Apple events. Source: Apple Press Release.
21. AppEco Simulation
• Takes about 20 seconds to simulate 3 years
• After 3 years, the simulation has more than:
  • 100,000 developer agents
  • 500,000 apps
  • 20,000 user agents (due to memory constraints, 1 user agent represents 10,000 real users)
  • 1.5 million downloads (corresponding to 15 billion real-world downloads)
23. Experiment 1
• RQ1: Which strategy gets the highest average number of
downloads?
• RQ2: Which strategy produces the most diverse apps?
• RQ3: Which strategy improves over time?
• Method:
• Developers were randomly assigned one of the four strategies in
equal proportions
• AppEco was run with the calibrated settings
• 1080 timesteps (i.e., 3 years in the real world)
• Repeated 100 times
• Results averaged over 100 runs
24. RQ1: Average Downloads
[Bar chart] Average downloads per strategy for Innovator, Milker, Optimiser, and Copycat (std. dev. values shown in brackets: 0.41, 0.14, 0.14, 0.15).
25. RQ2: App Diversity
• Measured using the Feature Coefficient of Variation (FeatCV).
• For each cell in the desired region of the feature grid F, we calculated the number of apps that offer that feature, forming a combined feature grid, FC.
• FeatCV = σ / μ, where σ is the standard deviation and μ is the mean of the values in grid FC.
• The lower the FeatCV, the more evenly the features are covered and the more diverse the apps (the better the strategy: in combination, its apps better meet all the users' needs).
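A sketch of this computation, assuming each app is a 10x10 boolean grid and the desired region is given as a boolean mask:

```python
import numpy as np

def feat_cv(app_grids: list, desired_mask: np.ndarray) -> float:
    """FeatCV = std/mean of the per-feature app counts over the desired region.
    Lower values mean features are covered more evenly, i.e. the apps are more diverse."""
    combined = np.sum(app_grids, axis=0)   # FC: number of apps offering each feature
    values = combined[desired_mask]        # restrict to the desired region of F
    return float(np.std(values) / np.mean(values))
```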
28. RQ3: Improvement Over Time
• Categorised the apps into classes corresponding to their developers' first apps, second apps, third apps, and so on, i.e. apps created by developers at experience level 1, 2, 3, and so on.
• "Surveyed" the users and asked whether they would download each app: if all the features in the app match the user's preferences, then the user would download it.
• For each strategy, the fitness at experience level L is FitnessL = AvgDlL / NumUsers, where AvgDlL is the number of potential downloads reported in the survey for all the apps at experience level L divided by the number of apps at level L, and NumUsers is the number of users who participated in the survey.
• FitnessL ranges from 0 to 1. The higher the value, the fitter the strategy.
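A sketch of this fitness computation under the reconstruction above (names are illustrative):

```python
def strategy_fitness(potential_downloads_per_app: list, num_users: int) -> float:
    """FitnessL = AvgDlL / NumUsers, where AvgDlL is the mean number of surveyed
    users who would download an app at experience level L. Ranges from 0 to 1."""
    avg_dl = sum(potential_downloads_per_app) / len(potential_downloads_per_app)
    return avg_dl / num_users
```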
30. Experiment 2
• RQ4: When strategies compete, how often is each strategy
chosen by developers?
• Method:
• Developers begin with one of the four strategies.
• Each developer then, with probability 0.99, randomly selects an app from the Top Apps Chart and changes its strategy to that of the selected app's developer; with probability 0.01 it selects a strategy at random.
• AppEco was run with the calibrated settings
• 1080 timesteps (i.e., 3 years in the real world)
• Repeated 100 times
• Results averaged over 100 runs
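A sketch of the strategy-adoption rule described in the method (the point at which developers re-choose a strategy is not specified here, so the function below only encodes the 0.99/0.01 choice):

```python
import random

STRATEGIES = ["innovator", "milker", "optimiser", "copycat"]

def choose_strategy(top_chart_dev_strategies: list, rng: random.Random) -> str:
    """With probability 0.99, imitate the strategy behind a random Top Apps Chart app;
    with probability 0.01, pick a strategy uniformly at random."""
    if top_chart_dev_strategies and rng.random() < 0.99:
        return rng.choice(top_chart_dev_strategies)
    return rng.choice(STRATEGIES)
```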
31. RQ4: Strategy Popularity
[Bar chart] Proportion of developers using each strategy (Innovator, Milker, Optimiser, Copycat), with std. dev. values shown in brackets: 19.7%, 18.9%, 17.2%, 7.7%.
32. An Example Run Over 3 Years
[Line chart] Proportion of developers using each strategy (Innovator, Optimiser, Milker, Copycat) over the 3 simulated years in a single run.
33. Conclusions
• No strategy is a guaranteed winner.
• Innovators produce diverse apps, but they are hit or miss – some
apps will be popular, some will not.
• Milkers may dwell on average or bad apps as they churn out new
variations of the same idea.
• Optimisers produce diverse apps and tailor their development
towards users’ needs.
• Copycats may seem like the best strategy for guaranteed downloads, but it only works when there are enough apps built with other strategies to copy. In addition, copycats can only ever be a minority: otherwise app diversity decreases (many duplicated apps leave some user-desired features scarce) and the fitness of the ecosystem suffers.
37. Effects of Publicity on App Downloads
[Charts] Downloads over time for an excellent app that is broadcast via publicity vs. an excellent app that only appears on the New Apps chart.
SL Lim & P Bentley. App epidemics: modelling the effects of publicity in a mobile app ecosystem. ALIFE'13, in press.