1. The Role of Design in Crowdsourcing
Javed Khan
Assistant Professor
Industrial Design
v.j.khan@tue.nl
@v_j_khan
khan.gr
25 NOV 2016
2. • Following good design practices can have up to a 1:48 cost-benefit ratio
• Nielsen (1995). How to Conduct a Heuristic Evaluation.
At: https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
Good Design Matters
4. • Design for New Concepts
• Design for Evaluating Platforms
• Design for Experience
Role of Design in
Crowdsourcing
5. • Application to design has only scratched the surface of its
potential
Design for New Concepts
Luther, Tolentino, Wu, Pavel, Bailey, Agrawala, Hartmann & Dow (2015). Structuring, aggregating, and
evaluating crowdsourced design critique. ACM CSCW
6. • Application to design has only scratched the surface of its
potential
Design for New Concepts
Lasecki, Kim, Rafter, Sen, Bigham & Bernstein. (2015). Apparition: Crowdsourced User Interfaces
That Come To Life As You Sketch Them. ACM CHI
Real-Time Crowdsourcing
Real-time human computation has been explored in systems like VizWiz [2], which showed that the crowd could answer visual questions in less than 30 seconds, and Adrenaline [1], which formalized the retainer model for bringing a group of crowd workers together in two seconds. Systems like Legion [17] have explored how a synchronous crowd can work together effectively once recruited (in that case, to collectively control existing user interfaces). Apparition allows the end-user to work with the crowd to produce an artifact.
Chorus [18] engaged a group of workers in a conversational interaction with a user, using an incentive mechanism and memory space to encourage the crowd to act as a single consistent individual. Workers powering Apparition must also be consistent, because prototypes must reliably function in the same way over multiple interactions with end-users.
Crowdsourcing for Drawing and Rapid Evaluation
Crowdsourcing inherently provides access to individuals with diverse skills, which provides guidance to users during the design process [22] and improves specific skills like drawing. Limpaecher et al. collected sketch data from a crowd of players of their DrawAFriend game and used the resulting model to correct user-drawn lines in real-time [19].
Foundry [23] allows end-users to create “flash teams”, computer-mediated teams of expert crowd workers that were able to complete complex tasks, such as design prototyping and animation, that are produced by experts rather than microtask workers from the crowd.
Glance [15] uses the crowd to help users explore video datasets by quickly marking arbitrary user-specified events. This divide-and-conquer process is common in crowdsourcing because it allows the inherent parallelism of the crowd to help solve problems faster. Apparition uses a similar division of labor to make complex, parallel control tasks possible, but unlike prior work in crowdsourcing, it supports worker self-coordination and task-finding in real-time.
Figure 2. Designers roughly sketch their interface while describing it in natural language (top). As they do, crowd workers collaboratively update the interface into a low-fidelity prototype (middle). Over time, the crowd workers further improve its quality (bottom).
7. • How “good” is my crowdsourcing platform?
Design for Evaluating Platforms
Figure courtesy of Simon à Campo
17. • Design for New Concepts
• Design for Evaluating Platforms
• Design for Experience
• Khan, à Campo & Nekaj (2016). Design for Crowdsourcing: Mechanics, Semantics and Guidelines. CSW
• Khan, Dhillon, Piso & Schelle (2016). Crowdsourcing User and Design Research. Springer
Summary & Further Reading
v.j.khan@tue.nl
@v_j_khan
khan.gr
Editor's Notes
We intuitively understand a better designed product is more valuable and this is also evidenced by research.
Prior research shows that following good design practices …
In other words, for every 1 Euro you invest in the design of your product, you should expect up to 48 Euros in return.
With the growing number of platforms and the diverse roles they try to fulfil, one wonders what it means to have a better-designed platform and how we should go about designing one.
Although the number of crowdsourcing systems has been increasing, the application to design itself has only scratched the surface of its potential. In the near future, we foresee the development of new systems for design-related purposes.
One example from the academic domain is CrowdCrit (Luther et al., 2014). CrowdCrit organizes (“scaffolds”) along several dimensions the design feedback that workers can provide on a given design output. Workers can select from a list of both positive and negative critique statements. Those statements are based on well-known design principles. Furthermore, workers can visually annotate areas of the design that correspond to the provided feedback. This example demonstrates that, with the right tools, workers who are not necessarily design experts can provide relevant and useful design feedback.
A recent example of such a novel system, Apparition, comes from the academic domain (Lasecki et al., 2015). Apparition is a system that allows users to turn design sketches into actual working prototypes. As a user sketches the interface of an application, workers’ input, in combination with artificial intelligence algorithms, translates the sketches into actual elements, adds animations and makes those elements functional. In this example, design and implementation activities are blended into one platform.
Existing crowdsourcing systems face the challenge of self-evaluation. Once developers have their system up and running, they eventually face the question: how good a system have I actually built? Heuristic evaluation (Nielsen, 1994) is a quick and robust method for providing initial answers to that question. Heuristics are, in simple words, a checklist of salient items that a designer needs to take into account for their design.
At the Industrial Design Department of Eindhoven University of Technology, we have made the first step in this direction (Figure 1). Based on prior literature on building successful online communities (Gurzick & Lutters, 2009; Kim, 2000; Kraut et al., 2012), we have developed a tentative list of guidelines that would help crowdsourcing platforms evaluate themselves and, in that way, identify their strengths and weaknesses. We argue that such a list is crucial for the sustainability of these platforms.
Does the platform have a positive reputation?
Certain rewards or acknowledgements by other websites.
Does the platform showcase its achievements?
For example, the growth of the community, the amount of contributions, the money paid to workers, etc.
Going beyond usability aspects, experiential aspects such as empathy and playfulness could significantly improve existing systems.
For instance, empathy-induced altruism is likely to motivate people in a crowdsourcing environment to produce better-quality work. The hypothesis here is that if a worker is able to empathize with the requester and the requester’s cause, the worker will deliver higher-quality work. However, there has not been any considerable investigation of how empathy can be effectively conveyed through user interfaces.
Playfulness in technology may help create a more meaningful experience. There is already a large number of people working online, spending significant time on multiple crowdsourcing platforms. Several tasks on these platforms occur frequently, are repetitive, and can be perceived as tedious and boring.
Designers can address this issue by incorporating playful elements into crowdsourcing platforms. Those playful elements could benefit the overall user experience not just for tedious and boring tasks but on any crowdsourcing platform, by introducing an element of surprise and fun.