CrowdEval: A Cost-Efficient Strategy to Evaluate Crowdsourced Worker's Reliability (Conference Paper)

abstract

  • © 2018 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved. Crowdsourcing platforms depend on the quality of work provided by a distributed workforce. Yet, it is challenging to dependably measure the reliability of these workers, particularly in the face of strategic or malicious behavior. In this paper, we present a dynamic and efficient solution for continuously tracking workers' reliability. In particular, we use both gold standard evaluation and peer consistency evaluation to measure each worker's performance, and we adjust the proportion of the two types of evaluation according to the estimated distribution of workers' behavior (e.g., being reliable or malicious). Through experiments on real Amazon Mechanical Turk traces, we find that our approach achieves significant gains in both accuracy and cost compared to state-of-the-art algorithms.
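  • The abstract only sketches the evaluation strategy. As a rough illustration of the general idea, and not the paper's actual CrowdEval algorithm, the toy Python below mixes costly gold-standard checks with cheap peer-consistency checks, with the gold fraction tied to the estimated share of malicious workers; all function names, constants, and update rules here are hypothetical.

    ```python
    import random

    def choose_evaluation(p_malicious, max_gold_fraction=1.0):
        """Pick an evaluation type for the next task.

        Hypothetical heuristic (not the paper's rule): rely more on
        expensive gold-standard questions when workers are more likely
        to be malicious, and on cheap peer-consistency checks otherwise.
        """
        gold_fraction = min(max_gold_fraction, 2.0 * p_malicious)
        return "gold" if random.random() < gold_fraction else "peer"

    def update_reliability(reliability, passed, weight=0.1):
        """Exponentially weighted update of a worker's reliability score."""
        return (1.0 - weight) * reliability + weight * (1.0 if passed else 0.0)

    if __name__ == "__main__":
        # Simulate tracking one worker over a stream of tasks.
        reliability, p_malicious = 0.5, 0.3
        for task in range(10):
            mode = choose_evaluation(p_malicious)
            passed = random.random() < 0.8  # simulated answer quality
            reliability = update_reliability(reliability, passed)
            print(f"task {task}: {mode:4s} check, reliability={reliability:.2f}")
    ```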

published proceedings

  • Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '18)

author list (cited authors)

  • Qiu, C., Squicciarini, A., Khare, D. R., Carminati, B., & Caverlee, J.

complete list of authors

  • Qiu, Chenxi||Squicciarini, Anna||Khare, Dev Rishi||Carminati, Barbara||Caverlee, James

editor list (cited editors)

  • André, E., Koenig, S., Dastani, M., & Sukthankar, G.

publication date

  • January 2018