1. Ten ways to make your semantic app addictive - REVISITED
Elena Simperl
Tutorial at the ISWC2011, Bonn, Germany
10/24/2011 www.insemtives.eu 1
2. Executive summary
• Many aspects of semantic content authoring naturally rely
on human contribution.
• Motivating users to contribute is essential for semantic
technologies to reach critical mass and ensure sustainable
growth.
• This tutorial is about
– Methods and techniques to study incentives and motivators
applicable to semantic content authoring scenarios.
– How to implement the results of such studies through
technology design, usability engineering, and game mechanics.
3. Incentives and motivators
• Motivation is the driving force that makes humans achieve their goals.
• Incentives are ‘rewards’ assigned by an external ‘judge’ to a performer for undertaking a specific task.
– Common belief (among economists): incentives can be translated into a sum of money for all practical purposes.
• Incentives can be related to both extrinsic and intrinsic motivations.
• Extrinsic motivation applies if the task is considered boring, dangerous, useless, socially undesirable, or dislikable by the performer.
• Intrinsic motivation is driven by an interest or enjoyment in the task itself.
5. Extrinsic vs intrinsic motivations
• Successful volunteer crowdsourcing is difficult
to predict or replicate.
– Highly context-specific.
– Not applicable to arbitrary tasks.
• Reward models often easier to study and
control.*
– Different models: pay-per-time, pay-per-unit, winner-takes-all…
– Not always easy to abstract from social aspects (free-riding, social pressure…).
– May undermine intrinsic motivation.
* in cases when performance can be reliably measured
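The reward models named above can be made concrete as simple payout functions. This is an illustrative sketch, assuming performance is measurable as stated in the footnote; the function names and rates are our own examples, not part of the tutorial:

```python
# Illustrative payout functions for the three reward models named above.
# Names and rates are hypothetical examples.

def pay_per_time(hours_worked: float, hourly_rate: float) -> float:
    """Workers are paid for time spent, regardless of output."""
    return hours_worked * hourly_rate

def pay_per_unit(units_completed: int, rate_per_unit: float) -> float:
    """Workers are paid per accepted unit of work (e.g. per task)."""
    return units_completed * rate_per_unit

def winner_takes_all(scores: dict, prize: float) -> dict:
    """Only the best-scoring contributor receives the fixed prize."""
    winner = max(scores, key=scores.get)
    return {worker: (prize if worker == winner else 0.0) for worker in scores}
```

Note how winner-takes-all makes the social aspects above visible: every non-winner receives nothing, so free-riding and strategic behavior matter much more than under pay-per-unit.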
6. Examples (ii)
Mason & Watts: Financial incentives and the performance of the crowds, HCOMP 2009.
7. Amazon's Mechanical Turk
• Types of tasks: transcription, classification, content
generation, data collection, image tagging, website feedback,
and usability tests.*
• Increasingly used by academia.
• Vertical solutions built on top.
• Research on extensions for complex tasks.
* http://behind-the-enemy-lines.blogspot.com/2010/10/what-tasks-are-posted-on-mechanical.html
8. Tasks amenable to crowdsourcing
• Tasks that are decomposable into simpler
tasks that are easy to perform.
• Performance is measurable.
• No specific skills or expertise are required.
9. Patterns of tasks*
• Solving a task
– Generate answers
– Find additional information
– Improve, edit, fix
• Evaluating the results of a task
– Vote for accept/reject
– Vote up/down to rank potentially correct answers
– Vote best/top-n results
• Flow control
– Split the task
– Aggregate partial results
• Example: open-scale tasks in MTurk
– Generate, then vote.
– Introduce random noise to identify potential issues in the second step.
[Figure: workflow from generating answers, to voting (“image or not?”), to labeling correct answers]
* “Managing Crowdsourced Human Computation” @ WWW2011, Ipeirotis
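The generate-then-vote pattern above can be sketched as a two-step pipeline: one group of workers generates candidate answers, a second group votes on them, and a majority rule decides what to keep; the “random noise” of the second step is modeled here as gold items with known answers that flag unreliable voters. A minimal sketch under those assumptions:

```python
from collections import Counter

def majority_vote(votes):
    """Accept a candidate answer if a strict majority voted 'accept'."""
    counts = Counter(votes)
    return counts["accept"] > len(votes) / 2

def filter_answers(candidates, votes_per_candidate):
    """Second step of generate-then-vote: keep only candidates
    that pass the majority vote."""
    return [c for c in candidates
            if majority_vote(votes_per_candidate.get(c, []))]

def worker_passes_gold(worker_votes, gold_items):
    """Gold items with known answers act as injected noise:
    a voter who fails them is likely unreliable."""
    return all(worker_votes.get(item) == answer
               for item, answer in gold_items.items())
```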
11. What makes game mechanics
successful?*
• Accelerated feedback cycles.
– Annual performance appraisals vs immediate feedback to
maintain engagement.
• Clear goals and rules of play.
– Players feel empowered to achieve goals vs the fuzzy, complex
system of rules in the real world.
• Compelling narrative.
– Gamification builds a narrative that engages players to
participate and achieve the goals of the activity.
• But in the end it’s about what task users want to get
better at.
*http://www.gartner.com/it/page.jsp?id=1629214
Images from http://gapingvoid.com/2011/06/07/pixie-dust-the-mountain-of-mediocrity/ and http://www.hideandseek.net/wp-content/uploads/2010/10/gamification_badges.jpg
12. Guidelines
• Focus on the actual goal and incentivize related
actions.
– Write posts, create graphics, annotate pictures, reply
to customers in a given time…
• Build a community around the intended actions.
– Reward users for helping each other perform the task and
for interacting with each other.
– Reward recruiting new contributors.
• Reward repeated actions.
– Actions become part of the daily routine.
Image from http://t1.gstatic.com/images?q=tbn:ANd9GcSzWEQdtagJy6lxiR2focH2D01Wpz7dzAilDuPsWnL0i4GAHgnm_0hyw3upqw
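Rewarding repeated actions, as the last guideline suggests, is often implemented as a streak bonus: consecutive days of activity multiply the base reward until the action becomes routine. A hypothetical sketch (the cap and multiplier are our choices, not from the source):

```python
def streak_reward(base_points: int, streak_days: int, cap: int = 7) -> int:
    """Reward grows with the current streak of consecutive active days,
    capped so the bonus saturates once the action is part of the routine."""
    multiplier = min(streak_days, cap)
    return base_points * multiplier
```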
13. What tasks can be gamified?*
• Tasks that are decomposable into simpler
tasks, nested tasks.
• Performance is measurable.
• Obvious rewarding scheme.
• Skills can be arranged in a smooth learning
curve.
*http://www.lostgarden.com/2008/06/what-actitivies-that-can-be-turned-into.html
Image from http://www.powwownow.co.uk/blog/wp-content/uploads/2011/06/gamification.jpeg
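The “smooth learning curve” criterion above can be made concrete with a level scheme in which the experience required per level grows gradually, so early progress is fast and later progress slows. A hypothetical sketch; the curve shape and constants are our own, not from the source:

```python
def xp_for_level(level: int, base: int = 100, growth: float = 1.5) -> int:
    """Total experience required to reach a level; each level costs
    `growth` times the previous one, giving a smooth difficulty ramp."""
    return int(sum(base * growth ** i for i in range(level)))

def level_for_xp(xp: int) -> int:
    """Current level for a given amount of accumulated experience."""
    level = 0
    while xp >= xp_for_level(level + 1):
        level += 1
    return level
```

A gentler `growth` flattens the curve for casual players; a steeper one stretches out the late game, which is exactly the kind of tuning the rewarding scheme has to make obvious to players.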
14. What is different about semantic
systems?
• It's still about the context of the actual application.
• User engagement with semantic tasks is needed in order to
– Ensure knowledge is relevant and up-to-date.
– Make sure people accept the new solution and understand its benefits.
– Avoid cold-start problems.
– Optimize maintenance costs.
15. Tasks in knowledge engineering
• Definition of vocabulary
• Conceptualization
– Based on competency questions
– Identifying instances, classes, attributes,
relationships
• Documentation
– Labeling and definitions.
– Localization
• Evaluation and quality assurance
– Matching conceptualization to documentation
• Alignment
• Validating the results of automatic methods
17. OntoGame API
• API that provides several methods that are
shared by the OntoGame games, such as:
– Different agreement types (e.g. selection
agreement).
– Input matching (e.g., majority).
– Game modes (multi-player, single player).
– Player reliability evaluation.
– Player matching (e.g., finding the optimal
partner to play).
– Resource (i.e., data needed for games)
management.
– Creating semantic content.
• http://insemtives.svn.sourceforge.net/viewvc/insemtives/generic-gaming-toolkit
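The input-matching and reliability methods listed above can be sketched as follows. This is an illustrative sketch only; the function names are ours and not the actual OntoGame API:

```python
from collections import Counter

def selection_agreement(selection_a, selection_b):
    """Two players agree on a selection task if they picked the
    same set of items (order-insensitive)."""
    return set(selection_a) == set(selection_b)

def majority_input(inputs):
    """Majority-based input matching: return the answer given by
    most players, or None if there is a tie for first place."""
    counts = Counter(inputs).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

def player_reliability(agreements, total_rounds):
    """Fraction of rounds in which the player agreed with a partner,
    usable as a simple reliability score."""
    return agreements / total_rounds if total_rounds else 0.0
```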
19. Case studies
• Methods applied
– Mechanism design.
– Participatory design.
– Games with a purpose.
– Crowdsourcing via MTurk.
• Semantic content
authoring scenarios
– Extending and populating
an ontology.
– Aligning two ontologies.
– Annotation of text, media
and Web APIs.
20. Lessons learned
• Approach is feasible for mainstream domains, where a
(large-enough) knowledge corpus is available.
• Advertisement is important.
• Game design vs useful content.
– Reusing well-known game paradigms.
– Reusing game outcomes and integrating them into existing
workflows and tools.
• But, by design, the approach is less applicable to
– Knowledge-intensive tasks that are not easily nestable.
– Repetitive tasks (players' retention?).
• Cost-benefit analysis.
21. Using Mechanical Turk for
semantic content authoring
• Many design decisions similar to GWAPs.
– But with clearer incentive structures.
– How to reliably compare games and MTurk results?
• Automatic generation of HITs depending on the
types of tasks and inputs.
• Integration in productive environments.
– Protégé plug-in for managing and using crowdsourcing
results.
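Automatic generation of HITs from task types and inputs, as mentioned above, can be sketched as a template lookup: each task type maps to a question template that is filled with the concrete input. All templates and names here are illustrative assumptions, not the actual MTurk integration or the Protégé plug-in:

```python
# Hypothetical question templates per task type; a real system would
# map semantic-authoring tasks (e.g. ontology validation) to HIT layouts.
HIT_TEMPLATES = {
    "classification": "Does the concept '{input}' belong to class '{target}'?",
    "validation": "Is the statement '{input}' correct?",
    "tagging": "Provide up to five tags for: {input}",
}

def generate_hit(task_type, **fields):
    """Build a HIT question from the template for the given task type."""
    template = HIT_TEMPLATES.get(task_type)
    if template is None:
        raise ValueError(f"no template for task type: {task_type}")
    return template.format(**fields)
```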
22. Outline of the tutorial
Time            Presentation
14:00 – 14:45   Human contributions in semantic content authoring
14:45 – 15:30   Case study: motivating employees to annotate enterprise content semantically at Telefonica
15:30 – 16:00   Coffee break
16:00 – 16:45   Case study: Crowdsourcing the annotation of dynamic Web content at seekda
16:45 – 17:30   Case study: Content tagging at MoonZoo and MyTinyPlanets
17:30 – 18:00   Ten ways to make your semantic app addictive - revisited
23. Realizing the Semantic Web by
encouraging millions of end-users to
create semantic content.