The document discusses using crowdsourcing and human computation to summarize and edit documents. It proposes a "Find-Fix-Verify" pattern that separates tasks into identifying areas for improvement, editing the content, and verifying the edits. When tested on sample paragraphs, this approach shortened them to roughly 78-90% of their original length while maintaining overall meaning. However, crowd workers performed best when removing unnecessary text and struggled with edits that required domain knowledge.
1. Data by the people, for the people: Powering Interactions via the Social Web. Michael Bernstein | MIT CSAIL | User Interface Design Group | Haystack Group | MIT Human-Computer Interaction
2. Computer Science “In the most basic sense, a network is any collection of objects in which some pairs of these objects are connected by links.” - Easley and Kleinberg, page 2 [Zachary ‘77, via Easley and Kleinberg ‘10]
3. With the abstraction, we can: - Reason at high levels - Make predictions - Interact online - Model data http://www.flickr.com/marc_smith
4. Social Science “The analysis of patterns of social relationship in the group is then conducted on the graph, which is merely a shorthand representation of the ethnographic data.” - Zachary ‘77 [Zachary ‘77, via Easley and Kleinberg ‘10]
5. Methodological mismatch Many of you are sitting on terabytes of data about human interactions. The opportunities to scrape data – or more politely, leverage APIs – are also unprecedented. And folks are buzzing around wondering what they can do with all of the data they've got their hands on. But in our obsession with Big Data, we've forgotten to ask some of the hard critical questions about what all this data means and how we should be engaging with it. - danah boyd, WWW ‘10
18. Blog – 83%: Print publishers are in a tizzy over Apple’s new iPad because they hope to finally be able to charge for their digital editions. But in order to get people to pay for their magazine and newspaper apps, they are going to have to offer something different that readers cannot get at the newsstand or on the open Web. Classic UIST – 87%: The metaDESK effort is part of the larger Tangible Bits project. The Tangible Bits vision paper introduced the metaDESK and two companion platforms, the transBOARD and ambientROOM. Draft UIST – 90%: In this paper we argue that it is possible and desirable to combine the easy input affordances of text with the powerful retrieval and visualization capabilities of graphical applications. We present WenSo, a tool that uses lightweight text input to capture richly structured information for later retrieval and navigation in a graphical environment. Rambling E-mail – 78%: A previous board member, Steve Burleigh, created our web site last year and gave me alot of ideas. For this year, I found a web site called eTeamZ that hosts web sites for sports groups. Check out our new page: […] Technical Computer Science – 82%: Figure 3 shows the pseudocode that implements this design for Lookup. FAWN-DS extracts two fields from the 160-bit key: the i low order bits of the key (the index bits) and the next 15 low order bits (the key fragment).
19. Crowdproof: Human Proofreading Finds errors that AIs miss, explains the reason behind the problem in plain English, and suggests fixes
20. The Human Macro Macro scripting without programming ‘‘Please change text in document from past tense to present tense.’’ I gave one final glance around before descending from the barrow. As I did so, my eye caught something […] I give one final glance around before descending from the barrow. As I do so, my eye catches something […]
21. The Human Macro Macro scripting without programming ‘‘Pick out keywords from the paragraph like Yosemite, rock, half dome, park. Go to a site which has CC licensed images […]’’ When I first visited Yosemite State Park in California, I was a boy. I was amazed by how big everything was […] http://commons.wikimedia.org/wiki/File:03_yosemite_half_dome.jpg
22. The Human Macro Macro scripting without programming ‘‘Hi, please find the bibtex references for the 3 papers in brackets. You can locate these by Google Scholar searches and clicking on bibtex.” Duncan and Watts [Duncan and watts HCOMP 09 anchoring] found that Turkers will do more work when you pay more, but that the quality is no higher. @conference { title={{Financial incentives […]}}, author={Mason, W. and Watts, D.J.}, booktitle={HCOMP ‘09}, […] }
23. Programming Crowd Workers Rule of Thumb: 30% of worker effort on open-ended tasks will have an error in it Two useful personas: The Lazy Turker and The Eager Beaver
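An illustrative sketch (my own, not from the talk): if roughly 30% of open-ended responses contain an error and workers fail independently, redundancy makes the chance that every one of k workers errs on the same item shrink geometrically.

```python
# Rule of thumb from the slide: ~30% of open-ended worker responses
# contain an error. Under an (assumed) independence model, the
# probability that all k redundant workers err on the same item is 0.3^k.
ERROR_RATE = 0.30

for k in range(1, 6):
    p_all_wrong = ERROR_RATE ** k
    print(f"{k} independent worker(s): P(no usable result) = {p_all_wrong:.5f}")
```

With five workers the chance of getting no usable answer at all is under 0.3%, which is one intuition behind patterns like Find-Fix-Verify that spread a task across several independent workers.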
24. The Lazy Turker Does as little work as necessary to be paid The theme of loneliness features throughout many scenes in Of Mice and Men and is often the dominant theme of sections during this story. This theme occurs during many circumstances but is not present from start to finish. In my mind for a theme to be pervasive is must be present during every element of the story. There are many themes that are present most of the way through such as sacrifice, friendship and comradship. But in my opinion there is only one theme that is present from beginning to end, this theme is pursuit of dreams.
25. The Lazy Turker Does as little work as necessary to be paid The theme of loneliness features throughout many scenes in Of Mice and Men and is often the dominant theme of sections during this story. This theme occurs during many circumstances but is not present from start to finish. In my mind for a theme to be pervasive is must be present during every element of the story. There are many themes that are present most of the way through such as sacrifice, friendship and comradeship. But in my opinion there is only one theme that is present from beginning to end, this theme is pursuit of dreams.
27. The Eager Beaver Go beyond task requirements to be helpful, but introduce errors in the process The theme of loneliness features throughout many scenes in Of Mice and Men and is often the dominant theme of sections during this story. This theme occurs during many circumstances but is not present from start to finish. In my mind for a theme to be pervasive is must be present during every element of the story. There are many themes that are present most of the way through such as sacrifice, friendship and comradship. But in my opinion there is only one theme that is present from beginning to end, this theme is pursuit of dreams.
28. The theme of loneliness features throughout many scenes in Of Mice and Men and is often the dominant theme of sections during this story. This theme occurs during many circumstances but is not present from start to finish. In my mind for a theme to be pervasive is must be present during every element of the story. There are many themes that are present most of the way through such as sacrifice, friendship and comradeship. But in my opinion there is only one theme that is present from beginning to end, this theme is pursuit of dreams. The Eager Beaver Go beyond task requirements to be helpful, but introduce errors in the process
29. Find-Fix-Verify A design pattern that controls the efforts of the Lazy Turker and the Eager Beaver. Separates open-ended tasks into three stages where each worker makes a clear contribution
30. Find: “Identify at least one area that can be shortened without changing the meaning of the paragraph.” Independent voting to identify patches. Fix: “Edit the highlighted section to shorten its length without changing the meaning of the paragraph.” Randomize order of suggestions. Verify: “Choose at least one rewrite that has significant style errors in it. Choose at least one rewrite that significantly changes the meaning of the sentence.”
31. Why Find-Fix-Verify? Why split Find and Fix? Forces Lazy Turkers to work on a problem of our choice. Allows us to merge work completed in parallel. Why add Verify? Quality rises when we put Turkers at odds with each other. Trades off lag time for quality
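The three stages above can be sketched as a small pipeline. This is a minimal illustration of the pattern, not Soylent's actual implementation: the function `ask_workers`, the worker counts, and the agreement thresholds are all assumptions standing in for posting real tasks to a crowd platform.

```python
import collections
import random

def find_fix_verify(paragraph, ask_workers,
                    n_find=10, n_fix=5, n_verify=5, min_agreement=2):
    """Sketch of Find-Fix-Verify. `ask_workers(stage, payload, n)` stands in
    for posting n crowd tasks and collecting the answers."""
    # Find: workers independently mark patches that could be shortened;
    # keep only patches that at least `min_agreement` workers agreed on.
    finds = ask_workers("find", paragraph, n_find)
    counts = collections.Counter(p for answer in finds for p in answer)
    patches = [p for p, c in counts.items() if c >= min_agreement]

    accepted = {}
    for patch in patches:
        # Fix: a separate set of workers rewrites each agreed-upon patch.
        rewrites = ask_workers("fix", patch, n_fix)
        random.shuffle(rewrites)  # randomize the order shown to verifiers
        # Verify: a third set flags rewrites with style errors or meaning
        # changes; keep rewrites flagged by fewer than half the verifiers.
        flags = ask_workers("verify", (patch, rewrites), n_verify)
        flag_counts = collections.Counter(f for worker in flags for f in worker)
        accepted[patch] = [r for r in rewrites if flag_counts[r] < n_verify / 2]
    return accepted
```

The key design point survives even in this toy form: no single worker's output flows through unchecked, because Find votes constrain where the Lazy Turker may work and Verify votes filter what the Eager Beaver produced.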
32. Data is made of people, Data is made by people, Data is made for people.
33. Collaborators Rob Miller, David Karger, Greg Little, Katrina Panovich, David Crowell Mark Ackerman Björn Hartmann …and about 9000 Turkers. I am generously kept off the streets by an NSF GRFP and NSF award IIS-0712793.
34. Blog: Print publishers are in a tizzy over Apple’s new iPad because they hope to finally be able to charge for their digital editions. But in order to get people to pay for their magazine and newspaper apps, they are going to have to offer something different that readers cannot get at the newsstand or on the open Web. Classic UIST: The metaDESK effort is part of the larger Tangible Bits project. The Tangible Bits vision paper introduced the metaDESK along with two companion platforms, the transBOARD and ambientROOM. Draft UIST: In this paper we argue that it is possible and desirable to combine the easy input affordances of text with the powerful retrieval and visualization capabilities of graphical applications. We present WenSo, a tool that uses lightweight text input to capture richly structured information for later retrieval and navigation in a graphical environment. Rambling E-mail: A previous board member, Steve Burleigh, created our web site last year and gave me alot of ideas. For this year, I found a web site called eTeamZ that hosts web sites for sports groups. Check out our new page: […] Highly Technical Writing: Figure 3 shows the pseudocode that implements this design for Lookup. FAWN-DS extracts two fields from the 160-bit key: the i low order bits of the key (the index bits) and the next 15 low order bits (the key fragment).
36. Average Performance Cost: $1.41 per paragraph. $0.55 to Find an average of two patches, $0.48 to Fix each patch, $0.38 to Verify the results. Time: Wait: median 18.5 minutes (Q1 = 8.3 min, Q3 = 41.6 min). Work: median 2.0 minutes (Q1 = 60 sec, Q3 = 3.6 min)
37. Qualitative Observations Works best with unnecessary text: […] they are going to have to offer something different […] Lack of domain knowledge: […] In this paper we argue that tangible interfaces […] Parallel edits can be inconsistent: FAWN-DS extracts two fields from the 160-bit key: the i low order bits of the key (the index bits) and the next 15 low order bits (the key fragment).
Editor's notes
When we're talking about social networks in computer science education, we have two methodological traditions to fuse. One is computer science, which we can see here through the lens of network science. It puts the network primary. Here is the first figure in the Easley and Kleinberg textbook, of a 34-person karate club.
This is an appealing definition and approach, because it provides a mathematical formalism that enables us to derive proofs, reason about groups at high levels and write interactive systems like Facebook. It doesn’t matter that friendship is a fuzzy concept: so long as both parties have agreed that it’s an undirected edge, we can do friend recommendation, build a news feed, and compute tie strengths (or as Facebook calls it, EdgeRank). It’s a very top-down approach, because computer scientists are good at dealing with lots of data.
The other strong tradition in this space is characterized by social science: social psychology, sociology, cultural anthropology, and the broad spectrum of ideas and methodologies encompassed by conferences like CSCW. Where the computer science approach may put the network primary, social science puts the person primary. The goal of this approach is to understand why those links form, what they mean, and how they are utilized. This can be very bottom-up: social psychology, for instance, tends to take the individual as the unit of analysis. It asks questions like, “Why do groups form and split?”
When cultures collide, if we naively follow our methodological training, expectations get mismanaged. In her keynote at WWW, danah boyd critiqued the approach that many computer scientists take when they consider network problems: [quote] danah is referencing ethical and privacy questions largely, but there is an even bigger implication for computer science in my mind: we cannot write crowd programs without really knowing what it is that the crowd is doing.
danah would talk about the de-anonymization of the Netflix dataset. I have another angle on the situation: understanding humans was what ultimately won the million dollars. Basic collaborative filtering techniques can get you so far. But one of the techniques that BellKor’s Pragmatic Chaos used was temporality: it turns out that when people rate a bunch of movies at a time, they tend to be movies that they saw a long time ago. And those kinds of movies exhibit a specific kind of rating pattern. The authors speculate, but I think this has to do with cognitive psychology: that we are much more likely to remember events with high emotional arousal than those without, and more likely to remember positive events than negative events.
So it is when we program systems involving networks and crowds. We have a lot of data, and even more interest in that data, as demonstrated by the number of influential and award-winning papers that have been written by the amazing people sitting in front of me right now. When we talk about data, we are fundamentally bridging the attractive networks abstraction and the equally attractive social science abstraction. When we’re successful like BellKor’s Pragmatic Chaos was, it takes us farther than either process in isolation.
I’m a social computing systems builder: I build interfaces that are powered by social data and interfaces that encourage social interaction. To do this well, I have to get this balance right. I want to share with you a few ways in which I’ve been using the social web to develop new tools, and the ways in which we have wrestled with humans and algorithms simultaneously to make them work.
For years, human-computer interaction researchers have used Wizard of Oz techniques to prototype interactive systems. This technique typically meant having one of the design team members behind a curtain simulating parts of an artificial intelligence that hadn’t been built yet. But, we now have artificial intelligence for hire via services like Amazon Mechanical Turk, where you can pay cents for workers largely in the U.S. and India to perform tasks for you. The Soylent project asks: what happens when you embed those workers inside of an interface -- when you have a Wizard of Turk? Can we help end users when interfaces aren’t necessarily bound by AI-hard problems any more, but by humans? Here are a few preliminary thoughts, which will show up at the ACM UIST conference this year.
We are focused on writing. We’ve learned to write since grade school; it’s the stock-in-trade of how most of us exchange ideas today. I think we can all agree that writing is hard. Even seasoned experts will make mistakes: non-parallel constructions, typos, or just plain being unclear. If we make a high level decision like changing a story from past tense to present tense or shifting references from ACM format to MLA format, we have to execute a daunting number of tasks. And of course, when we have that 10-page limit and our paper is 11 pages, we spend hours whittling our writing down to size.
Let me shift to the data aspects of this. To make these interfaces, we need algorithms with human callouts in them. But, we don’t really know how to do this yet. Turkers are people, and using an extrinsic motivation like payment can lead to weird effects. We’ve created two useful personas that guide our work: