Improving crowdsourced label quality using noise correction

Jing Zhang, Victor S. Sheng, Tao Li, Xindong Wu

Research output: Contribution to journal › Article › peer-review

43 Scopus citations


Crowdsourcing systems provide a cost-effective and convenient way to collect labels, but they often fail to guarantee label quality. This paper proposes a novel framework that introduces noise correction techniques to further improve the quality of integrated labels inferred from the multiple noisy labels of objects. In the proposed general framework, the labeler qualities estimated by a front-end ground truth inference algorithm are used to supervise subsequent label noise filtering and correction. The framework uses a novel algorithm, termed adaptive voting noise correction (AVNC), to precisely identify and correct potentially noisy labels. After the instances with noisy labels are filtered out, the remaining cleansed data set is used to train multiple weak classifiers, from which a powerful ensemble classifier is induced to correct these noisy labels. Experimental results on eight simulated data sets with different kinds of features and two real-world crowdsourcing data sets in different domains consistently show that: 1) the proposed framework improves label quality regardless of the inference algorithm, especially when each instance has only a few repeated labels, and 2) because the proposed AVNC algorithm considers both the number and the probability of potential label noises, it outperforms state-of-the-art noise correction algorithms.
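The filter-and-correct pipeline the abstract describes can be sketched in a minimal form. The following is an illustrative toy, not the paper's AVNC implementation: it uses plain (then quality-weighted) majority voting for inference, agreement with the integrated labels as the labeler-quality estimate, a low weighted-vote margin as the noise filter, and a simple 1-nearest-neighbor correction as a stand-in for the ensemble of weak classifiers. All data, accuracies, and the 0.2 margin threshold are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy crowdsourced data (hypothetical, for illustration only) ---
# 200 instances with 2 features; true binary labels from a simple rule.
X = rng.normal(size=(200, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Five simulated labelers with different (unknown to the method) accuracies.
labeler_acc = np.array([0.9, 0.8, 0.7, 0.6, 0.55])
flip = rng.random((200, 5)) >= labeler_acc
noisy = np.where(flip, 1 - y_true[:, None], y_true[:, None])

# --- Step 1: front-end ground truth inference (plain majority voting) ---
y_hat = (noisy.mean(axis=1) > 0.5).astype(int)

# --- Step 2: estimate each labeler's quality as agreement with the
#     integrated labels ---
quality = (noisy == y_hat[:, None]).mean(axis=0)

# --- Step 3: quality-weighted re-vote; instances with a small vote margin
#     are flagged as potential label noise ---
w = quality / quality.sum()
wvotes = (noisy * w).sum(axis=1)
suspect = np.abs(wvotes - 0.5) < 0.2   # margin threshold is an assumption
y_hat = (wvotes > 0.5).astype(int)

# --- Step 4: correct the flagged instances using a classifier trained on
#     the cleansed set (1-NN here as a stand-in for the ensemble) ---
clean = ~suspect
for i in np.where(suspect)[0]:
    d = np.linalg.norm(X[clean] - X[i], axis=1)
    y_hat[i] = y_hat[clean][np.argmin(d)]

print("label accuracy after correction:", (y_hat == y_true).mean())
```

The key design point mirrored from the framework is that the quality estimates produced by the inference stage supervise both the filtering (via the weighted margin) and the correction (by restricting the corrector's training data to the cleansed instances).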

Original language: English
Pages (from-to): 1675-1688
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 5
State: Published - May 2018


Keywords

  • Crowdsourcing
  • Ground truth inference
  • Label integration
  • Label noise correction
  • Label quality


