Empirical comparison of Multi-Label classification algorithms

Clifford A. Tawiah, Victor S. Sheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Scopus citations

Abstract

Multi-label classification problems arise in many real-world applications. This paper empirically studies the performance of a variety of multi-label classification algorithms. Some are based on problem transformation, while others are based on algorithm adaptation. Our experimental results show that the adaptive Multi-Label K-Nearest Neighbor performs best, followed by Random k-Label Set, Classifier Chain, and Binary Relevance. AdaBoost.MH performs worst, followed by Pruned Problem Transformation. Our experimental results also strengthen our confidence in the correlations among labels. These insights shed light on future research directions for multi-label classification.
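The abstract contrasts problem-transformation methods such as Binary Relevance (one independent binary classifier per label) and Classifier Chain (each classifier also conditions on earlier labels, so it can exploit label correlations). A minimal sketch of that contrast, assuming scikit-learn and a synthetic dataset (not the paper's actual data or code):

```python
# Hypothetical illustration of two problem-transformation methods from the
# abstract: Binary Relevance vs. Classifier Chain, on synthetic data.
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.metrics import hamming_loss

X, Y = make_multilabel_classification(n_samples=500, n_classes=5,
                                      random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Binary Relevance: one independent binary classifier per label,
# ignoring any correlations between labels.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)

# Classifier Chain: each classifier in the chain also sees the previous
# labels as features, modeling the label correlations the abstract notes.
cc = ClassifierChain(LogisticRegression(max_iter=1000),
                     random_state=0).fit(X_tr, Y_tr)

for name, model in [("Binary Relevance", br), ("Classifier Chain", cc)]:
    loss = hamming_loss(Y_te, model.predict(X_te))
    print(f"{name}: Hamming loss = {loss:.3f}")
```

Hamming loss (the fraction of misclassified label slots) is one common metric for comparing multi-label classifiers; which method wins depends on how strongly the labels are correlated in the data.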

Original language: English
Title of host publication: Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013
Pages: 1645-1646
Number of pages: 2
State: Published - 2013
Event: 27th AAAI Conference on Artificial Intelligence, AAAI 2013 - Bellevue, WA, United States
Duration: Jul 14 2013 - Jul 18 2013

Publication series

Name: Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013

Conference

Conference: 27th AAAI Conference on Artificial Intelligence, AAAI 2013
Country: United States
City: Bellevue, WA
Period: 07/14/13 - 07/18/13

