Qrowd and the city: designing people-centric smart cities

  1. QROWD AND THE CITY: DESIGNING PEOPLE-CENTRIC SMART CITIES Elena Simperl OnTheMove 2019 @esimperl
  2. DIGITALISATION IS TRANSFORMING CITIES Cities have access to more data than ever to improve urban services, create efficiencies and reduce their environmental footprint. Technology is transforming the public sector, from decision making to democratic processes.
  3. SMART CITIES ARE ABOUT PEOPLE Citizen-centric rather than technology-centric. Participatory and fair. Using data responsibly.
  4. TECHNOLOGY IS FUELLED BY PEOPLE Applications require more and better data, e.g. from mobile or IoT devices. AI algorithms learn from human labellers.
  5. How do we bring together human and computational intelligence?
  6. ABOUT QROWD H2020 innovation action in the Big Data Value PPP. Started in 12/2016; 3 years; €3.9M; 8 partners from 5 European countries, coordinated by the University of Southampton. Smart city solutions combining crowd and computational intelligence, piloted in smart transportation with a medium-sized city in Italy and a leading navigation and traffic management service provider.
  7. OUR APPROACH Mix of open-innovation methods to co-design pilots and encourage stakeholder participation. Value-centric technology design: personal data empowerment, open source, building upon existing standards. Human-in-the-loop extensions to data collection and analysis with participatory sensing, paid crowdsourcing and mobile volunteers.
  8. THE QROWD PLATFORM: MORE THAN JUST TECHNOLOGY Open-source technology stack. Supports deployment of human-AI workflows. Complemented by methodology and guidelines to use crowdsourcing effectively. Provenance and co-ownership of data.
  9. EXAMPLE: MORE AND BETTER MODAL SPLIT DATA City planners lack detailed mobility information about their residents. Human-AI workflow supported through the Qrowd platform: a bespoke data collection app; a combination of symbolic and numerical ML classifiers to match trip segments to modes of transport; an active learning approach that asks travellers to validate the trips the machine is unsure about (see the sketch below).
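A minimal sketch of the active-learning step above, assuming the classifier emits a per-mode probability distribution for each trip segment and that travellers are asked to validate only the segments with the highest predictive entropy. The names (Segment, MODES, budget) are illustrative and not part of the QROWD platform.

```python
import math
from dataclasses import dataclass

# Transport modes the classifier distinguishes (illustrative set).
MODES = ["walk", "bike", "bus", "car", "train"]

@dataclass
class Segment:
    segment_id: str
    mode_probs: dict  # classifier output: mode -> probability

def entropy(probs):
    """Shannon entropy of a distribution; higher means less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def segments_to_validate(segments, budget):
    """Route the `budget` most uncertain segments to the traveller,
    so human effort is spent where the machine needs it most."""
    ranked = sorted(segments,
                    key=lambda s: entropy(s.mode_probs.values()),
                    reverse=True)
    return ranked[:budget]

# One confident and one ambiguous segment.
segs = [
    Segment("s1", {"walk": 0.95, "bike": 0.05}),
    Segment("s2", {"bus": 0.40, "car": 0.35, "train": 0.25}),
]
print([s.segment_id for s in segments_to_validate(segs, budget=1)])  # ['s2']
```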
  10. CHALLENGE: USER EXPERIENCE
  11. EXAMPLE: URBAN AUDITING ON DEMAND Mobility data on large areas of cities is often outdated. Survey methodologies: expensive, error-prone, no validation. VGI (e.g. OpenStreetMap): no control over data updates, coverage etc. Our answer: an online tool using paid microtask crowdsourcing; uses digital street view imagery; tasks performed remotely; participants recruited from online marketplaces.
  12. VIRTUAL CITY EXPLORER (qrowd-poi.herokuapp.com/) The urban planner defines an area and the instructions for the participants. Participants explore the area virtually and identify points of interest. The urban planner monitors task execution, quality and rewards.
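One plausible piece of the quality monitoring above is merging submissions from many participants into distinct PoIs. The sketch below uses greedy distance-threshold clustering with a haversine metric; the 15 m radius and the data layout are assumptions, since the slides do not specify the Virtual City Explorer's aggregation logic.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def merge_poi_submissions(points, radius_m=15):
    """Greedily group submissions within radius_m of a cluster centroid;
    each cluster is one candidate PoI, its size a signal of agreement."""
    clusters = []
    for p in points:
        for c in clusters:
            if haversine_m(p, c["centroid"]) <= radius_m:
                c["members"].append(p)
                n = len(c["members"])
                c["centroid"] = (sum(m[0] for m in c["members"]) / n,
                                 sum(m[1] for m in c["members"]) / n)
                break
        else:  # no nearby cluster: start a new one
            clusters.append({"centroid": p, "members": [p]})
    return clusters

# Three submissions in Trento: two agree on the same bike rack.
subs = [(46.0679, 11.1211), (46.0680, 11.1212), (46.0701, 11.1250)]
print(len(merge_poi_submissions(subs)))  # 2 distinct PoIs
```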
  13. CHALLENGE: CROWDSOURCING DESIGN Task design, data quality, incentives, fairness.
  14. EVALUATION: A TALE OF TWO CITIES (TRENTO & NANTES) 150 participants per city, random starting positions; 5 PoIs (bike racks) per participant for $0.15; total cost per city: $45 (7 days). Mixed-methods approach, including metrics and manual inspection. RQ1: feasibility and precision as the task progresses. RQ2: completeness (overlap with benchmark datasets). RQ3: coverage (percentage of visited nodes on the explorable path). RQ4: crowd experience (interface errors triggered, number of escapes).

                            Trento      Nantes
      Area                  0.347 km²   0.336 km²
      Nodes                 906         1177
      Explorable distance   9127 m      12104 m
      StreetView coverage   93%         92%
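The RQ3 coverage metric reads as a simple set ratio. A toy sketch, assuming street-network nodes carry comparable identifiers; this is not the project's evaluation code.

```python
def coverage_pct(explorable_nodes, visited_nodes):
    """Percentage of explorable street-network nodes visited by the crowd."""
    explorable = set(explorable_nodes)
    visited = set(visited_nodes) & explorable  # ignore off-path visits
    return 100.0 * len(visited) / len(explorable)

# Toy example: the crowd visits every other node of a 906-node network.
print(coverage_pct(range(906), range(0, 906, 2)))  # 50.0
```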
  15. RQ1: TASK FEASIBILITY AND PRECISION AS TASK PROGRESSES The UX supports discovery of PoIs. The photoshoot paradigm and the triangulation method help identify low-quality answers. Precision drops once all PoIs in the area have been submitted.
  16. RQ2: DATA COMPLETENESS The approach complements existing data sources and is able to find new PoIs. Highly customisable (area of interest, budget, questions, timing).
  17. RQ3: COVERAGE
  18. RQ4: CROWD EXPERIENCE
  19. FINDINGS The VCE adds value to urban auditing methods: accuracy comparable to OpenStreetMap; additional resources on demand (at a cost); easier to manage than VGI. Free exploration achieves good coverage. The taboo mechanism helps reduce costs and avoid duplicated work. Ongoing work: allocating starting positions (randomly, at the centre, to confirm an item, to cover a new area etc.); coordinating among participants (a map showing the progress of other participants).
  20. CHALLENGE: MOTIVATION AND INCENTIVES Love and glory keep costs down. Money and glory deliver results faster.
  21. PAID MICROTASKS Money makes the crowd work faster* How about love and glory? *[Mason & Watts, 2009]
  22. GAMIFYING WORK Make paid microtasks more cost-effective with gamification. People will perform better if tasks are more engaging: increased accuracy through higher inter-annotator agreement; cost savings through reduced unit costs. Micro-targeting incentives when people attempt to quit improves retention. Improving paid microtasks through gamification and adaptive furtherance incentives. O. Feyisetan, E. Simperl, M. Van Kleek, N. Shadbolt. WWW 2015, 333-343.
  23. QROWDSMITH: EXPERIMENTAL SANDBOX Labelling tasks, published on a microtask platform: free-text labels, varying numbers of labels per image, taboo words; users can skip images and play as much as they want. Probabilistic reasoning to predict exit and personalise furtherance incentives (sketched below). Conditions: baseline ('standard' tasks with basic spam control) vs gamified (same requirements and rewards, but the crowd is asked to complete tasks in Wordsmith) vs gamified plus furtherance incentives (additional rewards to stay: random or personalised).
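In its simplest form, the exit prediction above could be a logistic score over session features that triggers a furtherance incentive when it crosses a threshold. The features, weights and 0.6 cut-off below are invented for illustration and are not the Qrowdsmith model.

```python
import math

# Illustrative feature weights: long pauses and frequent skips suggest
# an imminent exit; completed tasks suggest commitment.
WEIGHTS = {"seconds_since_last_label": 0.03,
           "tasks_completed": -0.05,
           "recent_skip_rate": 2.0}
BIAS = -1.0

def exit_probability(session):
    """Logistic score over session features: higher = more likely to quit."""
    z = BIAS + sum(w * session[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def should_offer_incentive(session, threshold=0.6):
    """Offer a personalised furtherance incentive to likely quitters."""
    return exit_probability(session) > threshold

session = {"seconds_since_last_label": 90, "tasks_completed": 4,
           "recent_skip_rate": 0.5}
print(round(exit_probability(session), 2), should_offer_incentive(session))
```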
  24. FINDINGS More and better labels: 41k vs 1.2k labels in the control condition. Larger tasks help with retention: 50% dropout reduction. Increased participation: people come back (20 times) and play longer (43 hours vs 3 hours without incentives), but financial incentives play an important role. Targeted incentives work: 77% of players stayed vs 27% in the randomised condition; 19% more labels compared to no incentives.
  25. FASTER RESPONSES THROUGH CONTESTS Make real-time crowdsourcing affordable: participants compete against each other in a live contest and only the top contestants receive a prize. The contest produces accurate answers faster. Task thresholds and reward spreads affect the volume of work and retention. Beyond monetary incentives: experiments in paid microtask contests. O. Feyisetan, E. Simperl. ACM Transactions on Social Computing (TSOC), to appear.
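A minimal sketch of the contest mechanics described: contestants who clear a minimum task threshold are ranked, and the prize pool is split across the top places according to a reward spread. The 50/30/20 split and the threshold value are assumptions for illustration, not the scheme used in the cited experiments.

```python
def contest_payouts(scores, prize_pool, spread=(0.5, 0.3, 0.2), threshold=10):
    """Rank contestants who completed at least `threshold` tasks and split
    `prize_pool` across the top len(spread) places according to `spread`."""
    eligible = sorted(((name, s) for name, s in scores.items() if s >= threshold),
                      key=lambda kv: kv[1], reverse=True)
    return {name: round(prize_pool * share, 2)
            for (name, _), share in zip(eligible, spread)}

scores = {"ann": 42, "bob": 17, "cat": 9, "dev": 25}
print(contest_payouts(scores, prize_pool=10.0))
# {'ann': 5.0, 'dev': 3.0, 'bob': 2.0} -- 'cat' misses the threshold
```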
  26. FINDINGS At twice the task speed, contests could potentially serve as a real-time task model. An increase in reward spread leads to more tasks completed by the best contestants. Increasing the task threshold within a reward spread reduces the number of tasks completed. Participants exit a task when they perceive an overall loss of the utility accrued by remaining: tasks with high rewards and low task thresholds attract participants to stay on longer.
  27. How do we bring together human and computational intelligence?
  28. Mix of crowdsourcing approaches. Iterative design. Data science to understand and predict crowd behaviour. Aligned motivation and incentives.