Learning Approximately Objective Priors. Nalisnick, E. & Smyth, P. arXiv:1704.01168 [stat], August 2017.
Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors. However, objective priors such as the Jeffreys and reference priors are not tractable to derive for many models of interest. We address this issue by proposing techniques for learning reference prior approximations: we select a parametric family and optimize a black-box lower bound on the reference prior objective to find the member of the family that serves as a good approximation. We experimentally demonstrate the method’s effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder’s reference prior.
@article{nalisnick_learning_2017,
	title = {Learning {Approximately} {Objective} {Priors}},
	url = {http://arxiv.org/abs/1704.01168},
	abstract = {Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors. However, objective priors such as the Jeffreys and reference priors are not tractable to derive for many models of interest. We address this issue by proposing techniques for learning reference prior approximations: we select a parametric family and optimize a black-box lower bound on the reference prior objective to find the member of the family that serves as a good approximation. We experimentally demonstrate the method’s effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder’s reference prior.},
	language = {en},
	urldate = {2022-01-19},
	journal = {arXiv:1704.01168 [stat]},
	author = {Nalisnick, Eric and Smyth, Padhraic},
	month = aug,
	year = {2017},
	note = {arXiv: 1704.01168},
	keywords = {Statistics - Computation, Statistics - Machine Learning},
}

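For context, the reference-prior objective the abstract refers to is the mutual information I(theta; X) between the parameter and the data, and the paper maximizes a black-box lower bound on it over a parametric prior family. The sketch below is an illustrative simplification, not the authors' method: it scores members of a Beta(a, b) family for a Binomial likelihood with a plain Monte Carlo estimate of the mutual information and picks the best member by grid search, whereas the paper uses gradient-based optimization of a differentiable lower bound. The model, family, sample sizes, and grid are assumptions chosen for illustration; for Bernoulli/Binomial data the known Jeffreys (and reference) prior is Beta(1/2, 1/2), so the search should land near a = b = 0.5.

# Minimal, illustrative sketch (not the authors' code): approximate the
# reference prior for a Binomial likelihood by searching a Beta(a, b) family
# for the member that maximizes a Monte Carlo estimate of the mutual
# information I(theta; X), i.e. the reference-prior objective.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 20          # Binomial sample size per draw
n_outer = 2000         # Monte Carlo draws of (theta, x)
n_inner = 200          # prior draws used to estimate the marginal p(x)

def mi_estimate(a, b):
    """Monte Carlo estimate of I(theta; X) under a Beta(a, b) prior."""
    theta = rng.beta(a, b, size=n_outer)
    x = rng.binomial(n_trials, theta)
    # log p(x | theta) for the sampled (theta, x) pairs
    log_lik = stats.binom.logpmf(x, n_trials, theta)
    # log p(x) via a simple plug-in average over fresh prior samples
    theta_marg = rng.beta(a, b, size=n_inner)
    lik_matrix = stats.binom.pmf(x[:, None], n_trials, theta_marg[None, :])
    log_marg = np.log(lik_matrix.mean(axis=1) + 1e-300)
    return np.mean(log_lik - log_marg)

# Coarse grid search over the Beta family; the paper instead optimizes a
# differentiable lower bound on this objective with gradients.
grid = [0.25, 0.5, 1.0, 2.0, 5.0]
best = max(((a, b) for a in grid for b in grid), key=lambda ab: mi_estimate(*ab))
print("best (a, b):", best)   # expected to be near (0.5, 0.5), the Jeffreys prior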