CrowdEval: A Cost-Efficient Strategy to Evaluate Crowdsourced Worker's Reliability
Conference Paper
abstract
© 2018 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved. Crowdsourcing platforms depend on the quality of work provided by a distributed workforce. Yet it is challenging to dependably measure the reliability of these workers, particularly in the face of strategic or malicious behavior. In this paper, we present a dynamic and efficient solution for continuously tracking workers' reliability. Specifically, we use both gold-standard evaluation and peer-consistency evaluation to measure each worker's performance, and we adjust the proportion of the two types of evaluation according to the estimated distribution of worker behavior (e.g., reliable or malicious). Through experiments on real Amazon Mechanical Turk traces, we find that our approach achieves significant gains in both accuracy and cost compared to state-of-the-art algorithms.
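The abstract describes mixing two kinds of checks, so a brief sketch may help illustrate the idea. The snippet below is not the paper's algorithm; it is a minimal Python illustration under assumed names (`Worker`, `gold_proportion`, `evaluate`) and an assumed reliability update rule (an exponential moving average), showing how the share of costly gold-standard checks could be raised as the estimated fraction of malicious workers grows, with peer-consistency checks used otherwise.

```python
# Illustrative sketch only: mix gold-standard and peer-consistency checks,
# shifting toward gold checks when more workers appear malicious.
# The update rule and parameters are assumptions, not taken from the paper.
import random
from dataclasses import dataclass


@dataclass
class Worker:
    worker_id: str
    reliability: float = 0.5   # running reliability estimate in [0, 1]
    n_evaluated: int = 0


def update_reliability(worker: Worker, passed: bool, lr: float = 0.1) -> None:
    """Exponential moving average of pass/fail outcomes (illustrative rule)."""
    worker.reliability = (1 - lr) * worker.reliability + lr * (1.0 if passed else 0.0)
    worker.n_evaluated += 1


def gold_proportion(estimated_malicious_rate: float,
                    p_min: float = 0.1, p_max: float = 0.9) -> float:
    """Use more (expensive) gold checks when more workers seem malicious."""
    return p_min + (p_max - p_min) * estimated_malicious_rate


def evaluate(worker: Worker, answer, gold_answer, peer_answers,
             estimated_malicious_rate: float, rng: random.Random) -> None:
    if rng.random() < gold_proportion(estimated_malicious_rate):
        # Gold-standard evaluation: compare against a known correct answer.
        passed = (answer == gold_answer)
    else:
        # Peer-consistency evaluation: compare against the peer majority.
        majority = max(set(peer_answers), key=peer_answers.count)
        passed = (answer == majority)
    update_reliability(worker, passed)


# Usage example: a worker who answers correctly about 80% of the time.
rng = random.Random(0)
w = Worker("w1")
for _ in range(200):
    ans = "A" if rng.random() < 0.8 else "B"
    evaluate(w, ans, gold_answer="A", peer_answers=["A", "A", "B"],
             estimated_malicious_rate=0.3, rng=rng)
print(round(w.reliability, 2))  # estimate drifts toward the worker's true accuracy
```

The design choice illustrated here is the trade-off the abstract highlights: gold-standard checks are accurate but costly, while peer-consistency checks are cheap but easier to game, so the mixing proportion is tied to how trustworthy the worker population currently appears.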