Does one-against-all or one-against-one improve the performance of multiclass classifications?

R. Kyle Eichelberger, Victor S. Sheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

12 Scopus citations

Abstract

One-against-all and one-against-one are two popular methodologies for reducing multiclass classification problems into a set of binary classifications. In this paper, we are interested in the performance of both one-against-all and one-against-one for classification algorithms such as decision trees, naïve Bayes, support vector machines, and logistic regression. Since both one-against-all and one-against-one effectively create a committee of classifiers, they are expected to improve the performance of these algorithms. However, our experimental results surprisingly show that one-against-all worsens the performance of the algorithms on most datasets. One-against-one helps, but performs worse than bagging these algorithms with the same number of iterations. Thus, we conclude that neither one-against-all nor one-against-one should be used for algorithms that can perform multiclass classification directly. Bagging is a better approach for improving their performance.
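The comparison described in the abstract can be reproduced in outline with standard library wrappers. Below is a minimal sketch (not the authors' experimental code) using scikit-learn's OneVsRestClassifier (one-against-all), OneVsOneClassifier (one-against-one), and BaggingClassifier around a decision tree; the dataset, estimator settings, and fold count are illustrative assumptions.

```python
# Hypothetical sketch of the comparison: direct multiclass vs. one-against-all,
# one-against-one, and bagging, all built on the same base learner.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # assumed example dataset, not from the paper

candidates = {
    "direct multiclass": DecisionTreeClassifier(random_state=0),
    "one-against-all": OneVsRestClassifier(DecisionTreeClassifier(random_state=0)),
    "one-against-one": OneVsOneClassifier(DecisionTreeClassifier(random_state=0)),
    "bagging": BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                 n_estimators=10, random_state=0),
}

for name, clf in candidates.items():
    # 10-fold cross-validated accuracy for each strategy.
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:>17}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The same base learner is reused in every wrapper so that any accuracy difference comes from the decomposition or ensembling strategy rather than from the underlying classifier.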

Original language: English
Title of host publication: Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013
Pages: 1609-1610
Number of pages: 2
State: Published - 2013
Event: 27th AAAI Conference on Artificial Intelligence, AAAI 2013 - Bellevue, WA, United States
Duration: Jul 14 2013 - Jul 18 2013

Publication series

Name: Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013

Conference

Conference: 27th AAAI Conference on Artificial Intelligence, AAAI 2013
Country/Territory: United States
City: Bellevue, WA
Period: 07/14/13 - 07/18/13
