Leveraging the Crowd

Paper

May 7, 2012 @ 11:30, Room: Ballroom F

Chair: Andrea Forte, Drexel University, USA
Human Computation Tasks with Global Constraints - Paper
Contribution & Benefit: Describes a system for crowdsourcing itinerary planning called Mobi. Illustrates a novel crowdware concept for tackling complex tasks with global constraints by using a shared, collaborative workspace.
Abstract » An important but underexplored class of tasks in current human computation systems is complex tasks with global constraints. One example of such a task is itinerary planning, where solutions consist of a sequence of activities that meet requirements specified by the requester. In this paper, we focus on the crowdsourcing of such plans as a case study of constraint-based human computation tasks and introduce a collaborative planning system called Mobi that illustrates a novel crowdware paradigm. Mobi presents a single interface that enables crowd participants to view the current solution context and make appropriate contributions based on current needs. We conduct experiments that show how Mobi enables a crowd to effectively and collaboratively resolve global constraints, and discuss how the design principles behind Mobi can more generally help a crowd tackle problems involving global constraints.
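To make the idea of a global constraint concrete: a minimal sketch in Python, assuming a simple budget-and-time model. The Activity and violations names are invented for this illustration and are not Mobi's implementation.

    from dataclasses import dataclass

    # Hypothetical model of an itinerary with global constraints. A global
    # constraint depends on the whole plan rather than any single activity,
    # which is why contributors need to see the full solution context.
    @dataclass
    class Activity:
        name: str
        cost: float
        duration_hours: float

    def violations(plan, budget_limit, time_limit):
        """Describe which global constraints the current plan violates."""
        problems = []
        if sum(a.cost for a in plan) > budget_limit:
            problems.append("over budget")
        if sum(a.duration_hours for a in plan) > time_limit:
            problems.append("over the available time")
        return problems

    plan = [Activity("museum", 20.0, 2.0), Activity("dinner", 60.0, 1.5)]
    print(violations(plan, budget_limit=50.0, time_limit=8.0))  # ['over budget']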
Strategies for Crowdsourcing Social Data Analysis - Paper
Community: management
Contribution & Benefit: Introduces a workflow in which data analysts enlist crowds to help explore data visualizations and generate hypotheses, and demonstrates seven strategies for eliciting high-quality explanations of data at scale.
Abstract » Web-based social data analysis tools that rely on public discussion to produce hypotheses or explanations of the patterns and trends in data rarely yield high-quality results in practice. Crowdsourcing offers an alternative approach in which an analyst pays workers to generate such explanations. Yet asking workers with varying skills, backgrounds, and motivations to simply "Explain why a chart is interesting" can result in irrelevant, unclear, or speculative explanations of variable quality. To address these problems, we contribute seven strategies for improving the quality and diversity of worker-generated explanations. Our experiments show that using (S1) feature-oriented prompts, providing (S2) good examples, and including (S3) reference gathering, (S4) chart reading, and (S5) annotation subtasks increases the quality of responses by 28% for US workers and 196% for non-US workers. Feature-oriented prompts improve explanation quality by 69% to 236%, depending on the prompt. We also show that (S6) pre-annotating charts can focus workers' attention on relevant details, and demonstrate that (S7) generating explanations iteratively increases explanation diversity without increasing worker attrition. We used our techniques to generate 910 explanations for 16 datasets, and found that 63% were of high quality. These results demonstrate that paid crowd workers can reliably generate diverse, high-quality explanations that support the analysis of specific datasets.
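As a rough sketch of how several of these strategies could be combined into one worker task, the Python below builds a prompt applying S1 with subtasks for S3, S4, and S5. The wording and structure are assumptions for illustration, not the authors' actual task design.

    # Hypothetical task builder combining a feature-oriented prompt (S1)
    # with reference-gathering (S3), chart-reading (S4), and annotation (S5)
    # subtasks. The phrasing is invented, not taken from the paper.
    def build_task(chart_title, feature):
        prompt = (f"Look at the chart '{chart_title}'. "
                  f"Explain what might cause the {feature} you see.")  # S1
        subtasks = [
            "List one outside source that supports your explanation.",   # S3
            "State the highest and lowest values shown in the chart.",   # S4
            "Mark the region of the chart your explanation refers to.",  # S5
        ]
        return {"prompt": prompt, "subtasks": subtasks}

    task = build_task("US unemployment, 2000-2010", "sharp rise after 2008")
    print(task["prompt"])
    for step in task["subtasks"]:
        print("-", step)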
Direct Answers for Search Queries in the Long Tail - Paper
Contribution & Benefit: We introduce Tail Answers: a large collection of crowdsourced direct answers that are unpopular individually but together address a large proportion of search traffic.
Abstract » Web search engines now offer more than ranked results. Queries on topics like weather, definitions, and movies may return inline results called answers that can resolve a searcher's information need without any additional interaction. Despite the usefulness of answers, they are limited to popular needs because each answer type is manually authored. To extend the reach of answers to thousands of new information needs, we introduce Tail Answers: a large collection of direct answers that are unpopular individually, but together address a large proportion of search traffic. These answers cover long-tail needs such as the average body temperature for a dog, substitutes for molasses, and the keyboard shortcut for a right-click. We introduce a combination of search log mining and paid crowdsourcing techniques to create Tail Answers. A user study with 361 participants suggests that Tail Answers significantly improved users' subjective ratings of search quality and their ability to solve needs without clicking through to a result. Our findings suggest that search engines can be extended to directly respond to a large new class of queries.
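One way to picture serving such answers is a lookup from normalized queries to stored answer text, as in the Python sketch below. The normalization, stopword list, and stored entries are assumptions for illustration and do not describe the paper's actual pipeline.

    import re

    # Hypothetical answer store keyed by normalized query strings; the
    # entries echo examples from the abstract, and the lookup is invented.
    TAIL_ANSWERS = {
        "average body temperature dog": "About 101-102.5 F (38.3-39.2 C).",
        "keyboard shortcut right click": "Shift+F10 on most Windows keyboards.",
    }

    def normalize(query):
        """Lowercase, strip punctuation, and drop common stopwords."""
        words = re.findall(r"[a-z0-9]+", query.lower())
        stopwords = {"the", "a", "for", "of", "is", "what"}
        return " ".join(w for w in words if w not in stopwords)

    def answer(query):
        return TAIL_ANSWERS.get(normalize(query))

    print(answer("What is the average body temperature for a dog?"))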
Distributed Sensemaking: Improving Sensemaking by Leveraging the Efforts of Previous Users - Paper
Contribution & Benefit: We show that 'distributed sensemaking' (sensemaking that leverages the efforts of previous users) enables schema transfer between users, leading to improved sensemaking quality and helpfulness.
Abstract » We examine the possibility of distributed sensemaking: improving a user's sensemaking by leveraging previous users' work without those users directly collaborating or even knowing one another. We asked users to engage in sensemaking by organizing and annotating web search results into "knowledge maps," either with or without previous users' maps to work from. We also recorded gaze patterns as users examined others' knowledge maps. Our findings show the conditions under which distributed sensemaking can improve sensemaking quality; that a user's sensemaking process is readily apparent to a subsequent user via a knowledge map; and that the organization of content was more useful to subsequent users than the content itself, especially when those users had differing goals. We discuss the role distributed sensemaking can play in schema induction by helping users build a mental model of an information space, and we make recommendations for new tool and system development.
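The finding that organization transfers better than content can be pictured with a minimal knowledge-map sketch in Python; this representation is an assumption made for illustration, not the study's actual tool.

    # A knowledge map sketched as named clusters of search results. Copying
    # only the cluster labels models schema transfer: a later user inherits
    # the organization but adds content for their own goal.
    def transfer_schema(previous_map):
        """Start a new map from a previous user's clusters, content removed."""
        return {cluster: [] for cluster in previous_map}

    alice_map = {
        "Symptoms": ["result A", "result B"],
        "Treatments": ["result C"],
        "Costs": ["result D", "result E"],
    }

    bob_map = transfer_schema(alice_map)
    bob_map["Treatments"].append("result F")  # new content under inherited schema
    print(bob_map)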