A study on multi-label classification

Clifford A. Tawiah, Victor S. Sheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Scopus citations


Multi-label classification arises in many real-world applications. This paper empirically studies the performance of a variety of multi-label classification algorithms: some are based on problem transformation, others on algorithm adaptation. Our experimental results show that the adaptive Multi-Label K-Nearest Neighbor performs best, followed by Random k-Label Set, Classifier Chain, and Binary Relevance. AdaBoost.MH performs worst, followed by Pruned Problem Transformation. Our results also give us confidence that correlations exist among labels. These insights shed light on future research directions for multi-label classification.
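To make the problem-transformation idea concrete, here is a minimal sketch of Binary Relevance, one of the transformation methods the abstract compares. It decomposes a multi-label task into one independent binary problem per label. The tiny 1-NN base learner, class names, and toy data below are illustrative assumptions, not taken from the paper.

```python
from math import dist  # Euclidean distance (Python 3.8+)

class OneNN:
    """Minimal 1-nearest-neighbour binary classifier (illustrative base learner)."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, x):
        # return the label of the closest training point
        i = min(range(len(self.X)), key=lambda j: dist(x, self.X[j]))
        return self.y[i]

class BinaryRelevance:
    """Problem transformation: train one binary classifier per label,
    treating each label independently (so label correlations are ignored)."""
    def __init__(self, base=OneNN):
        self.base = base

    def fit(self, X, Y):
        # Y[i] is a tuple of 0/1 indicators, one entry per label
        n_labels = len(Y[0])
        self.models = [
            self.base().fit(X, [row[k] for row in Y])
            for k in range(n_labels)
        ]
        return self

    def predict(self, x):
        # combine the per-label binary predictions into one label set
        return tuple(m.predict(x) for m in self.models)

# toy data: 2-D points with 2 labels each
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [(1, 0), (1, 1), (0, 0), (0, 1)]
br = BinaryRelevance().fit(X, Y)
print(br.predict((0.1, 0.9)))  # nearest training point is (0, 1) -> (1, 1)
```

Classifier Chain, also studied in the paper, extends this scheme by feeding each binary classifier the predictions of the previous ones, which is one way to capture the label correlations that Binary Relevance ignores.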

Original language: English
Title of host publication: Advances in Data Mining
Subtitle of host publication: Applications and Theoretical Aspects - 13th Industrial Conference, ICDM 2013, Proceedings
Number of pages: 14
State: Published - 2013
Event: 13th Industrial Conference on Advances in Data Mining, ICDM 2013 - New York, NY, United States
Duration: Jul 16 2013 - Jul 21 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7987 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 13th Industrial Conference on Advances in Data Mining, ICDM 2013
Country/Territory: United States
City: New York, NY


Keywords

  • Adaboost.MH
  • Binary Relevance
  • Classifier Chain
  • Multi-Label K-Nearest Neighbor
  • Pruned Problem Transformation
  • Random k-Label Set
  • multi-label classification


