Estimating continuous distributions in Bayesian classifiers. John, G. H. & Langley, P. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, UAI'95, pages 338–345, San Francisco, CA, USA, August 1995. Morgan Kaufmann Publishers Inc.
When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.
@inproceedings{john_estimating_1995,
	address = {San Francisco, CA, USA},
	series = {{UAI}'95},
	title = {Estimating continuous distributions in {Bayesian} classifiers},
	isbn = {978-1-55860-385-1},
	abstract = {When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.},
	urldate = {2022-03-16},
	booktitle = {Proceedings of the {Eleventh} conference on {Uncertainty} in artificial intelligence},
	publisher = {Morgan Kaufmann Publishers Inc.},
	author = {John, George H. and Langley, Pat},
	month = aug,
	year = {1995},
	pages = {338--345},
}
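The abstract contrasts two ways of estimating each class-conditional density in a naive Bayesian classifier: fitting a single Gaussian per feature and class, or replacing that normality assumption with a nonparametric kernel density estimate. The sketch below illustrates the idea, not the paper's exact implementation: the `NaiveBayes` class, the fixed Gaussian-kernel bandwidth, and all variable names are assumptions for illustration.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Single-Gaussian density: the "normality assumption" baseline.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def kde_pdf(x, samples, h):
    # Nonparametric estimate: average of Gaussian kernels (fixed
    # bandwidth h) centered on the training samples.
    return gaussian_pdf(np.asarray(x)[..., None], samples, h).mean(axis=-1)

class NaiveBayes:
    """Naive Bayes over continuous features; density is 'gaussian' or 'kde'."""

    def __init__(self, density="gaussian", bandwidth=0.5):
        self.density = density
        self.h = bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        self.data_ = {c: X[y == c] for c in self.classes_}
        return self

    def _log_lik(self, x, c):
        # Naive-Bayes factorization: sum of per-feature log densities.
        Xc = self.data_[c]
        if self.density == "gaussian":
            p = gaussian_pdf(x, Xc.mean(axis=0), Xc.std(axis=0) + 1e-9)
        else:
            p = np.array([kde_pdf(x[j], Xc[:, j], self.h)
                          for j in range(len(x))])
        return np.sum(np.log(p + 1e-300))

    def predict(self, X):
        preds = []
        for x in X:
            scores = {c: np.log(self.priors_[c]) + self._log_lik(x, c)
                      for c in self.classes_}
            preds.append(max(scores, key=scores.get))
        return np.array(preds)
```

On a class whose feature is bimodal, the single Gaussian smears its mass over both modes while the kernel estimate tracks them, which is the kind of case where the paper reports large error reductions.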
