Using crowdsourcing techniques, researchers have explored adapting existing user interfaces and developing new interface models. CoScripter allows crowds to capture and share task traces, which can then be used to generate mobile applications through tools like Highlight. CoCo leverages crowdsourced scripts to enable conversational interfaces. Researchers also experimented with using crowds to build abstract models of interfaces, though this work remains preliminary. Harnessing crowds offers potential for engineering new interfaces and adapting existing ones, but significant challenges remain in areas like model construction.
3. Benefits of Model-Based Engineering
Develop for multiple devices simultaneously (e.g., web & mobile) (see the sketch after this list)
Create personalized interfaces
• Accessibility (physical, cognitive, etc.)
• Previous experience
Compose interfaces from different components never before used together
And many more…
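To make the multi-device benefit concrete, here is a minimal, hypothetical sketch of rendering one abstract interface model to both a web form and a set of voice prompts. The model structure and renderer names are illustrative assumptions, not from any system cited in this talk.

```python
# Hypothetical sketch: one abstract interface model, two renderings.
# The Input/Action model and renderer names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Input:
    label: str   # human-readable prompt
    kind: str    # "text", "date", ...

@dataclass
class Action:
    label: str

# One abstract model of a flight-status task (echoing slide 35's example).
MODEL = [Input("Flight number", "text"), Action("Check status")]

def render_html(model):
    """Web rendering: a form with labeled fields and buttons."""
    parts = []
    for el in model:
        if isinstance(el, Input):
            parts.append(f'<label>{el.label} <input type="{el.kind}"></label>')
        else:
            parts.append(f"<button>{el.label}</button>")
    return "\n".join(parts)

def render_voice(model):
    """Speech rendering: ask for each input in turn."""
    return [f"What is the {el.label.lower()}?" for el in model if isinstance(el, Input)]

print(render_html(MODEL))
print(render_voice(MODEL))
```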
4. “... the main shortcoming of the model-based approach is that building models is difficult. A rich model describing all the features of an interface is a complex object, and hence non-trivial to specify.”
Puerta and Szekely, CHI 1994
Vanderdonckt, i-com 2011
5. Why is building models difficult?
Requires humans (mostly)
• If no interface exists, must investigate existing practice to determine tasks, etc.
• Interpreting and understanding human behavior is hard for computers
• Automation may be possible if an interface already exists, e.g., [Gimblett & Thimbleby, EICS 2010]
Requires experts
• Knowledge of abstract concepts and formalisms
• Knowledge of specific languages and conventions
• Tools may decrease the amount of expertise needed
6. Questions
If we use a crowd…
• Do we need models?
• Do we need experts to create models?
7. Outline
Brief overview of relevant crowdsourcing research
Examples using the crowd to adapt & understand interfaces
• CoScripter
• Highlight & CoCo
• Crowd-Created Models
Conclusions
9. Origin of the term
“Taking [...] a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.”
Jeff Howe, WIRED, 2006
10. Is this concept new?
• In 1760, the British Nautical Almanac distributed the work of creating navigational charts through postal mail.
• Work was computed by two independent workers and verified by a third
• Process repeated since for other large calculations, e.g., the Mathematical Tables Project
16. Paid Crowdsourcing
Pay small amounts of money for short tasks
Example site: Amazon Mechanical Turk
• Roughly five million tasks completed per year at 1-5¢ each [Ipeirotis 2010]
• Population: 40% U.S., 40% India, 20% elsewhere
• Gender, education, and income closely mirror overall population distributions [Ross 2010]
17. Major Topics of Research
Crowd algorithms
[Little et al., HCOMP 2009]
Incentives and Quality
[Mason and Watts, HCOMP 2009]
[Dow et al., CSCW 2012]
Crowd-powered systems
[Bernstein et al., UIST 2010]
[Bigham et al., UIST 2010]
AI for HCOMP
[Dai, Mausam & Weld, AAAI 2010]
Complex Work
[Kittur et al., UIST 2011]
19. Crowdsourcing Algorithms
Essentially a workflow where each step may be performed by a different worker
Iterative algorithms [Little et al. 2009] (a toy sketch follows)
Digital assembly lines, e.g., CrowdFlower
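As a concrete illustration of an iterative crowd algorithm in the spirit of Little et al. (2009), here is a toy improve-and-vote loop. The worker-facing functions are stubs standing in for posting tasks on a platform such as Mechanical Turk.

```python
import random

# Toy sketch of an iterative crowd algorithm in the spirit of
# Little et al. (2009): one worker improves an artifact, several others
# vote on whether to keep the change. The two ask_* functions are stubs
# simulating workers; a real system would post platform tasks instead.

def ask_worker_to_improve(text):
    return text + " (edited)"                         # one simulated edit

def ask_workers_to_vote(old, new, n=3):
    return [random.random() < 0.7 for _ in range(n)]  # n simulated votes

def iterative_improve(artifact, rounds=3):
    for _ in range(rounds):
        candidate = ask_worker_to_improve(artifact)
        votes = ask_workers_to_vote(artifact, candidate)
        if sum(votes) > len(votes) / 2:  # keep the edit only on a majority vote
            artifact = candidate
    return artifact

print(iterative_improve("a rough image caption"))
```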
23. Real-time Crowdsourcing
Using recent techniques, it is now possible to harness crowd
workers to solve tasks in near real-time
[Bernstein et al. UIST ’11, Lasecki et al. UIST ’11 and UIST ’12]
Example: Real-time captioning using shotgun gene sequencing
techniques [Lasecki et al. UIST ’12]
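The core idea is that each worker types only fragments of what they hear, and overlapping fragments are stitched into one caption. The sketch below is a greatly simplified version using exact word overlap; the actual system in Lasecki et al. (UIST 2012) uses a much more robust alignment over noisy input.

```python
# Greatly simplified sketch of stitching partial captions by word overlap.
# The real system (Lasecki et al., UIST 2012) aligns noisy, out-of-order
# partial inputs far more robustly; this toy version assumes clean overlaps.

def merge(a, b, min_overlap=2):
    """Append b to a, collapsing the longest shared word sequence."""
    aw, bw = a.split(), b.split()
    for k in range(min(len(aw), len(bw)), min_overlap - 1, -1):
        if aw[-k:] == bw[:k]:
            return " ".join(aw + bw[k:])
    return " ".join(aw + bw)  # no overlap found; concatenate

partials = [
    "the quick brown fox",
    "brown fox jumps over",
    "jumps over the lazy dog",
]
caption = partials[0]
for p in partials[1:]:
    caption = merge(caption, p)
print(caption)  # -> the quick brown fox jumps over the lazy dog
```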
24. Legion [Lasecki et al. UIST ’11]
http://www.youtube.com/watch?v=P_Tqn-3BF_I
25. Crowdsourcing wrap-up
There is a lot of power available in the crowd…
How can we harness it to help engineer new or improved interfaces?
27. CoScripter: Capture & Reuse Web Tasks
• Employees in large enterprises need to share “how-to” knowledge
• This knowledge is typically kept by a few knowledge hubs
• The CoScripter approach:
• Capture web tasks by watching people do them
• Automate repetitive tasks to save time
• Use a natural-language scripting language for understandability
[Leshed et al., CoScripter: Automating & Sharing How-To Knowledge in the Enterprise, CHI 2008]
29. CoScripter Key Features
• Browser extension for recording and playback
• Wiki for storing, sharing, and collaboratively improving scripts
• Personal database allows use of sensitive information within scripts without sharing that information (see the sketch after this list)
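To illustrate the personal-database idea, here is a toy sketch of playback-time substitution. The `your "name"` reference syntax is an assumption for illustration, loosely modeled on the script quoted on slide 50; it is not CoScripter's actual grammar.

```python
import re

# Toy sketch of CoScripter-style playback with a "personal database".
# The script references private values by name; each user's values are
# filled in locally at playback time, so the shared script on the wiki
# never contains the sensitive data itself. The your "name" syntax is
# an illustrative assumption, not CoScripter's actual grammar.

PERSONAL_DB = {"password": "alice00"}  # stays on the user's machine

SCRIPT = [
    "go to timecard.com/cocompany",
    'enter your "password" into the textbox',
    "click Go",
]

def instantiate(step, db):
    """Replace your "name" references with the user's private value."""
    return re.sub(r'your "([^"]+)"', lambda m: db[m.group(1)], step)

for step in SCRIPT:
    print(instantiate(step, PERSONAL_DB))  # executed by the browser extension
```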
30. CoScripter Adoption
Deployed inside IBM since Oct 2006
– ~4,200 users, ~3,500 scripts
Deployed on the public internet 2007-2012
– ~13,300 users, ~16,000 scripts
Interviews and analysis show it addresses pain points
– IBMers use it to automate repetitive tasks and share process knowledge with each other
(Chart: number of scripts over time)
31. Conclusions
• CoScripter provides an inexpensive and lightweight method to adapt existing interfaces
• Allows a crowd of users to produce a knowledge base of important interaction traces
• Resulting knowledge base could be used for many purposes
• Re-authoring user interfaces for different user groups
• Generating models?
35. AA.com Flight Tracking: Speechified
User: “What is the status for my American Airlines flight?”
System: “What is the flight number?”
User: “144”
System: “Flight Status – Arrived”
[CoCo, Lau et al., UIST 2010]
37. Goals
Allow end users to create their own mobile “applications” for particular tasks
• No programming required
• Possible for any existing site
• All design decisions made by users
Allow programmers to extend capabilities of mobile applications
38. Highlight Architecture
(Diagram: the mobile user’s device connects to a proxy server, which runs a proxy browser that talks to web servers; a mobile app designer creates apps through a browser extension.)
39. How do end users create applications?
Highlight Designer
• Built using the Firefox web browser
• Allows the user to demonstrate a “trace” of interaction
• Direct manipulation tools
• Generalization allows creation of mobile apps with complex structure (see the sketch after this list)
Nichols & Lau, IUI 2008
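A toy sketch of the demonstrate-then-generalize idea: a recorded trace of concrete browser events becomes a reusable template by replacing demonstrated values with named parameters. The event structure, selectors, and placeholder syntax are illustrative assumptions, not Highlight's actual representation.

```python
from dataclasses import dataclass

# Toy sketch of demonstrate-then-generalize: a concrete recorded trace
# becomes a reusable template by swapping demonstrated literals for
# named parameters. Event structure and selectors are assumptions.

@dataclass
class Event:
    action: str    # "goto", "type", or "click"
    target: str    # URL or element selector
    value: str = ""

trace = [  # one demonstrated run, echoing slide 35's flight-status task
    Event("goto", "http://aa.com/flight-status"),
    Event("type", "#flightNumber", "144"),
    Event("click", "#checkStatus"),
]

def generalize(trace, params):
    """Replace demonstrated values with placeholders, e.g. '144' -> '{flight_number}'."""
    return [Event(e.action, e.target, params.get(e.value, e.value)) for e in trace]

for e in generalize(trace, {"144": "{flight_number}"}):
    print(e)
```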
46. Remote Control Metaphor Discussion
Benefits
• No need to understand underlying code or describe the application with complex models
• Working at the interactive level lets authors work with what they can “see”
• Possible for end users, extensible by programmers
• If easy enough, allows users to create user interfaces that reflect their own needs and abilities
Drawbacks
• Always running the original interface in the background
• Constrained by the original design
• How to communicate those constraints to the author?
48. The CoCo research vision
Explore the use of conversational user interfaces to web tasks
Design and build intelligent agents that:
• Interact with the web on a user’s behalf
• Converse with the user to clarify meaning
• Learn new knowledge over time
• Are personalized for a user’s needs
Goal: improve user productivity and increase access to information technology through simpler interfaces
49. Not shown in the talk, but instructive (ACM DL access required):
http://dl.acm.org/ft_gateway.cfm?id=1866067&type=swf
50. Alice: punch out 17 30
CoCo: Extracted this script from your logs: Go to timecard.com/cocompany, enter your password into the textbox, click Go... Run it?
Alice: yes
CoCo: I don’t know what “password” to use
Alice: punch out 17 30 using alice00 as password
CoCo: I will run your script using params password=alice00
CoCo: 17:30 Exit
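The clarification behavior in this dialog can be sketched as a simple loop: before running a stored script, the agent prompts for any parameter the user's command did not supply. This is a toy illustration, not CoCo's actual dialog manager; the data structures are assumptions.

```python
# Toy sketch of the clarification loop in the dialog above: before
# running a stored script, prompt for any parameter the user's command
# did not supply. Illustrative only, not CoCo's actual dialog manager.

def run_script(script, supplied):
    for param in script["params"]:
        if param not in supplied:
            supplied[param] = input(f'I don\'t know what "{param}" to use: ')
    print(f"Running '{script['name']}' with {supplied}")

script = {"name": "punch out", "params": ["time", "password"]}
run_script(script, {"time": "17:30"})  # prompts only for the missing password
```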
52. Two paths to determining process:
Automatic
• System finds an existing script in the database or infers a script from web history (a toy matching sketch follows this list)
• Content is clipped based on heuristics matching the original command
Manual (“re-authoring”)
• User creates a script in CoScripter
• Specifies parameters as “personal database” values
• Specifies “clip” commands to return information
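For the automatic path, here is a toy sketch of retrieving a stored script by word overlap with the user's command. Real matching, and the content-clipping heuristics, would be considerably richer; this is only an assumption-laden illustration.

```python
# Toy sketch of the automatic path: pick the stored script whose name
# best overlaps the words of the user's command. Real matching and the
# content-clipping heuristics would be considerably richer than this.

def find_script(command, scripts):
    words = set(command.lower().split())
    return max(scripts, key=lambda s: len(words & set(s["name"].lower().split())))

scripts = [
    {"name": "punch out"},
    {"name": "check flight status"},
    {"name": "reserve conference room"},
]
print(find_script("punch out 17 30", scripts))  # -> {'name': 'punch out'}
```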
53. Conclusions
• Relatively simple understanding is used to facilitate substantial changes to the UI
• CoCo leverages crowd-generated scripts; Highlight could have leveraged the crowd similarly to CoScripter
• Models are used to facilitate re-authoring, but not using a typical approach
• More robust underlying models could lead to more robust results
55. Background
• Our initial re-authoring work relied on interactions being the same on each platform (Highlight), or on a priori known transformations (CoCo)
• To make deeper changes, we knew that we would need deeper models
• Could the crowd help us build the models we need?
Disclaimer: This work is initial and incomplete. If you would like to continue it, please let me know!
56. Process
1. Build a domain model (a sketch of one possible representation follows this list)
• What are the objects that are viewed and manipulated?
• What functions and parameters do they have?
2. Build a “task” model linked to the domain model
• Primarily based on object functions
3. Collect traces for carrying out tasks
• Integrated the CoScripter variant PlayByPlay [Wiltse et al., CHI 2010] into Mechanical Turk
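One possible shape for these models, sketched in code. The representation below is an assumption for illustration; as the disclaimer above notes, this work never settled on a final form.

```python
from dataclasses import dataclass, field

# One possible shape for the domain and task models. This representation
# is an assumption for illustration; the project never settled on a
# final form (see the disclaimer on slide 55).

@dataclass
class Function:
    name: str                                    # e.g., "make reservation"
    params: list = field(default_factory=list)   # e.g., ["date", "party size"]

@dataclass
class DomainObject:
    name: str                                    # e.g., "restaurant"
    functions: list = field(default_factory=list)

restaurant = DomainObject("restaurant", [
    Function("make reservation", ["date", "party size"]),
    Function("see reviews"),
])

# A task-model entry links a user task to an object function; the
# crowd-collected traces show how that task is carried out in the UI.
task = {"task": "book a table", "object": "restaurant",
        "function": "make reservation", "traces": []}
print(restaurant, task, sep="\n")
```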
57. Building a Domain Model
Task
• Give a link to a web UI
• Ask a question
• Possibly use taboo words as in the ESP game
Goal
• Questions designed to elicit nouns or verbs that would correspond to object or function names
• Previously collected terms used in questions
• Iterate between noun & verb questions (a toy version of this loop follows the examples)
Example questions:
Q: What can you search for? (taboo: restaurants, hairdressers)
Q: What things can you manipulate? (taboo: restaurants, hairdressers)
Q: What can you do with a restaurant? (taboo: make reservation, see reviews)
Q: What can you make a reservation for? (taboo: restaurants, hairdressers)
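A toy version of this elicitation loop, with a stub in place of real workers. The canned answers echo the example questions above; everything else (function names, question templates) is an illustrative assumption.

```python
# Toy version of the noun/verb elicitation loop. Previously collected
# answers are fed back as taboo words (as in the ESP game) to push
# workers toward new terms. ask_crowd is a stub with canned answers
# echoing the example questions above.

CANNED = {
    "What can you search for?": ["restaurants", "hairdressers"],
    "What can you do with a restaurant?": ["make reservation", "see reviews"],
}

def ask_crowd(question, taboo):
    print(f"Q: {question}  taboo: {sorted(taboo)}")
    return [a for a in CANNED.get(question, []) if a not in taboo]

def elicit():
    nouns, verbs = set(), set()
    nouns |= set(ask_crowd("What can you search for?", nouns))            # nouns
    for noun in sorted(nouns):
        verbs |= set(ask_crowd(f"What can you do with a {noun}?", verbs)) # verbs
    for verb in sorted(verbs):
        nouns |= set(ask_crowd(f"What can you {verb} for?", nouns))       # back to nouns
    return nouns, verbs

print(elicit())
```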
58. Results
Crowd workers don’t know abstract terms
• E.g., what is an “object”? What are “things you can manipulate”?
• Phrasing in terms of concrete activities helps
• What can you search for?
• What can you do with…?
Lack of a clear widget model on the web makes interpreting demonstrations hard
• When is a user using a custom component vs. a standard one?
59. Take-aways
• With some cleverness, it should be possible to use novice crowd workers to construct useful models
• It will be a significant undertaking; probably at the level of a Ph.D. thesis
• Interns do not have time to complete a second thesis during the summer (even good ones!)
61. Harnessing the crowd offers tremendous potential for engineering interactive systems…
…to provide new functionality
…to adapt UIs to specific use cases or platforms
…to understand UIs
62. Crowd Papers at EICS 2013
CrowdStudy: Extensible Toolkit for Crowdsourced Evaluation of
Web Interfaces
Michael Nebeling, Maximilian Speicher, Moira Norrie
CrowdAdapt: Enabling Crowdsourced Adaptation of Web Sites
for Individual Viewing Conditions and Preferences
Michael Nebeling, Maximilian Speicher, Moira Norrie
Echo: The Editor’s Wisdom with the Elegance of a Magazine
Joshua Hailpern, Bernardo Huberman
Crowdsourcing User Interface Adaptations for Minimizing the
Bloat in Enterprise Applications (poster)
Pierre Akiki, Arosha Bandara, Yijun Yu
66. Crowd Resources
• WWW 2011 Tutorial by Panos Ipeirotis & Praveen Paritosh
http://www.behind-the-enemy-lines.com/2011/04/tutorial-on-crowdsourcing-and-human.html
• Michael Bernstein CS 376 Lecture
http://hci.stanford.edu/courses/cs376/2013/lectures/2013-05-08-crowdsourcing/CS376-2013-crowdsourcing.pdf
• Crowd Research blog
http://crowdresearch.org/blog/
Speaker notes
Very happy to be here. My thesis research and much of the work that I’ve done since my Ph.D. has been in this area, more or less. I have followed this conference since the beginning, and had the honor of being one of the papers chairs in 2010.
I recognize that not all engineering of interactive systems requires models, but from looking through the proceedings of EICS, I think that’s most of what this community does.
Charles Babbage was involved with the British Nautical Almanac. The Mathematical Tables Project (a WPA project), begun in 1938, calculated tables of mathematical functions and employed 450 human computers, the origin of the term “computer.” What is new is the easy and affordable access to on-demand labor for tasks like these.
Thousands, if not millions, of people collecting and categorizing knowledge – could a similar approach be useful for
Launched in July 2007. 1M galaxies imaged; 50M classifications in the first year, from 150,000 visitors. Focuses on classification as opposed to manipulation. Has discussion and other features to help new classifiers get up to speed and improve accuracy.
FoldIt: protein-folding game. Beyond classification, here crowd workers are playing with and manipulating models to solve real problems. Amateur scientists have found protein configurations that eluded scientists for years.
Motivation can certainly be a problem with crowdsourcing tasks. One solution is to simply pay people.
JAC: I upped the “Interact” bullet. I think the original ordering is the “systems ordering” of how things happen, but this new ordering is more natural for understanding what we’re doing. The most important fact, that it executes tasks, is on top, and the clarification is then, of course, about which task, etc. I felt it was a bit awkward to point to the clarification before stating what CoCo would need clarification for. TL: That’s great. My first version had it at the top too.
Given a mostly complete understanding of what was possible in the existing interface, we should be able to create a very different new interface that does the same thing, and translate it to the old interface