Multi-label classification arises in many real-world applications. This paper empirically studies the performance of a variety of multi-label classification algorithms, some based on problem transformation and some based on algorithm adaptation. Our experimental results show that the adaptive Multi-Label k-Nearest Neighbor (ML-kNN) performs best, followed by Random k-Label Sets (RAkEL), then Classifier Chains and Binary Relevance. AdaBoost.MH performs the worst, followed by Pruned Problem Transformation. Our experimental results also give us confidence that correlations exist among labels. These insights shed light on future research directions for multi-label classification.