Learning weighted naive Bayes with accurate ranking

Harry Zhang, Shengli Sheng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

123 Scopus citations

Abstract

Naive Bayes is one of the most effective classification algorithms. In many applications, however, a ranking of examples is more desirable than a simple classification. How to extend naive Bayes to improve its ranking performance is an interesting and practically useful question. Weighted naive Bayes is an extension of naive Bayes in which attributes carry different weights. This paper investigates how to learn a weighted naive Bayes with accurate ranking from data, or more precisely, how to learn the weights of a weighted naive Bayes that produce accurate ranking. We explore various methods: the gain ratio method, the hill-climbing method, the Markov Chain Monte Carlo method, the hill-climbing method combined with the gain ratio method, and the Markov Chain Monte Carlo method combined with the gain ratio method. Our experiments show that a weighted naive Bayes trained to produce accurate ranking outperforms standard naive Bayes.
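The weighted model scores a class as P(c) · Π_i P(x_i | c)^{w_i}, with standard naive Bayes recovered at w_i = 1. The sketch below is a minimal illustration of that model plus a hill-climbing search over the weights guided by AUC (a common ranking metric); it is not the authors' implementation, and the function names, step size, and toy data are invented for the example.

```python
import math
import random

def train_counts(X, y, n_vals, n_classes):
    # Laplace-smoothed class priors and per-attribute conditionals P(x_i=v | c)
    n = len(y)
    priors = [(sum(1 for c in y if c == k) + 1) / (n + n_classes)
              for k in range(n_classes)]
    cond = [[[1] * n_vals for _ in range(len(X[0]))] for _ in range(n_classes)]
    for x, c in zip(X, y):
        for i, v in enumerate(x):
            cond[c][i][v] += 1
    for c in range(n_classes):
        for i in range(len(X[0])):
            total = sum(cond[c][i])
            cond[c][i] = [cnt / total for cnt in cond[c][i]]
    return priors, cond

def log_posterior(priors, cond, w, x, c):
    # log P(c) + sum_i w_i * log P(x_i | c): weights exponentiate each factor
    return math.log(priors[c]) + sum(w[i] * math.log(cond[c][i][v])
                                     for i, v in enumerate(x))

def rank_score(priors, cond, w, x):
    # ranking score: weighted log-odds of the positive class
    return (log_posterior(priors, cond, w, x, 1)
            - log_posterior(priors, cond, w, x, 0))

def auc(scores, y):
    # probability that a random positive outranks a random negative
    pos = [s for s, c in zip(scores, y) if c == 1]
    neg = [s for s, c in zip(scores, y) if c == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def hill_climb(X, y, priors, cond, steps=200, delta=0.1, seed=0):
    # greedy local search on the weights, keeping only moves that do not hurt AUC
    rng = random.Random(seed)
    w = [1.0] * len(X[0])  # start from standard naive Bayes
    best = auc([rank_score(priors, cond, w, x) for x in X], y)
    for _ in range(steps):
        i = rng.randrange(len(w))
        d = rng.choice([-delta, delta])
        w[i] += d
        cur = auc([rank_score(priors, cond, w, x) for x in X], y)
        if cur >= best:
            best = cur
        else:
            w[i] -= d  # revert a move that lowered AUC
    return w, best

# toy binary dataset (invented for illustration): two binary attributes
X = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 1), (0, 0)]
y = [0, 0, 1, 1, 1, 0]
priors, cond = train_counts(X, y, n_vals=2, n_classes=2)
base = auc([rank_score(priors, cond, [1.0, 1.0], x) for x in X], y)
w, best = hill_climb(X, y, priors, cond)
```

Because only non-worsening moves are kept, the AUC after hill climbing can never fall below the unweighted starting point on the training data; the paper's combined variants instead seed the search from gain-ratio weights rather than all ones.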

Original language: English
Title of host publication: Proceedings - Fourth IEEE International Conference on Data Mining, ICDM 2004
Editors: R. Rastogi, K. Morik, M. Bramer, X. Wu
Pages: 567-570
Number of pages: 4
DOIs
State: Published - 2004
Event: Fourth IEEE International Conference on Data Mining, ICDM 2004 - Brighton, United Kingdom
Duration: Nov 1 2004 - Nov 4 2004

Publication series

Name: Proceedings - Fourth IEEE International Conference on Data Mining, ICDM 2004

Conference

Conference: Fourth IEEE International Conference on Data Mining, ICDM 2004
Country: United Kingdom
City: Brighton
Period: 11/1/04 - 11/4/04
