An online approach for joint task assignment and worker evaluation in crowd-sourcing

2018, Pervasive and Mobile Computing

The paper tackles the problem of finding the correct solutions to a set of multiple-choice questions or labeling tasks by adaptively assigning them to workers in a crowdsourcing system. When nothing is initially known about the workers and questions involved (beyond common a-priori statistics), the problem becomes quite challenging: it requires jointly learning workers' abilities and questions' difficulties while adaptively assigning questions to the most appropriate workers, so as to maximize the chances of identifying the correct answers. To address this problem, we first cast it into a suitably constructed Bayesian framework that yields an analytically tractable (closed-form) single-question inference step, and we then handle the more general setting via the Expectation Propagation algorithm, an approximate iterative message-passing technique. We then exploit the (time-varying) information gathered by the inference framework as adaptive weights for a maximum-weight-matching task assignment policy, proposing a computationally efficient algorithm that maximizes the entropy reduction for the questions assigned at each step. Experimental results both on […]
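The paper's exact Bayesian model and Expectation Propagation updates are not reproduced in this abstract, but the overall loop can be illustrated with a deliberately simplified sketch. The snippet below assumes a "one-coin" worker model (each worker answers correctly with some probability `p_w`, and errs uniformly over the remaining choices) in place of the paper's joint ability/difficulty model; the functions, variable names, and parameter values are all hypothetical. It shows the two ingredients the abstract describes: a closed-form single-question posterior update, and a task assignment step that uses expected entropy reduction as the weight in a maximum-weight matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import entropy

K = 4  # number of choices per question (assumed)

def update_answer_posterior(post, choice, p_w, K=K):
    """Closed-form Bayesian update of the posterior over a question's
    correct answer after a worker with estimated accuracy p_w picks
    `choice`. One-coin likelihood: the correct answer is given with
    probability p_w, any wrong choice uniformly with (1 - p_w)/(K - 1).
    (An illustrative stand-in for the paper's ability/difficulty model.)"""
    lik = np.full(K, (1.0 - p_w) / (K - 1))
    lik[choice] = p_w
    post = post * lik
    return post / post.sum()

def expected_entropy_reduction(post, p_w, K=K):
    """Expected drop in posterior entropy if this worker answers:
    H(post) - E_response[H(post | response)]."""
    h_now = entropy(post)
    h_after = 0.0
    for c in range(K):
        lik = np.full(K, (1.0 - p_w) / (K - 1))
        lik[c] = p_w
        joint = post * lik
        p_resp = joint.sum()  # marginal probability of observing choice c
        h_after += p_resp * entropy(joint / p_resp)
    return h_now - h_after

# Toy instance: 3 open questions, 3 available workers.
posteriors = [np.full(K, 1.0 / K) for _ in range(3)]  # uniform priors
abilities = np.array([0.9, 0.7, 0.55])  # current worker-ability estimates

# Weight matrix: rows = questions, cols = workers.
W = np.array([[expected_entropy_reduction(post, p) for p in abilities]
              for post in posteriors])

# Maximum-weight matching: each worker gets at most one question per round.
rows, cols = linear_sum_assignment(W, maximize=True)
for q, w in zip(rows, cols):
    print(f"assign question {q} to worker {w} "
          f"(expected entropy reduction {W[q, w]:.3f})")
```

Here `linear_sum_assignment` solves the maximum-weight bipartite matching exactly; in the paper, the weights would instead come from the EP-based inference and be refreshed online as answers arrive and the ability/difficulty estimates sharpen.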

