Estimating Predictive Variances with Kernel Ridge Regression. Cawley, G.&nbsp;C., Talbot, N.&nbsp;L.&nbsp;C., & Chapelle, O. In Quiñonero-Candela, J., Dagan, I., Magnini, B., & d’Alché-Buc, F., editors, Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, Lecture Notes in Computer Science, pages 56--77. Springer Berlin Heidelberg, 2006.
@incollection{cawley_estimating_2006,
  series = {Lecture {Notes} in {Computer} {Science}},
  title = {Estimating {Predictive} {Variances} with {Kernel} {Ridge} {Regression}},
  copyright = {©2006 Springer Berlin Heidelberg},
  isbn = {978-3-540-33427-9, 978-3-540-33428-6},
  url = {http://link.springer.com/chapter/10.1007/11736790_5},
  abstract = {In many regression tasks, in addition to an accurate estimate of the conditional mean of the target distribution, an indication of the predictive uncertainty is also required. There are two principal sources of this uncertainty: the noise process contaminating the data and the uncertainty in estimating the model parameters based on a limited sample of training data. Both of them can be summarised in the predictive variance which can then be used to give confidence intervals. In this paper, we present various schemes for providing predictive variances for kernel ridge regression, especially in the case of a heteroscedastic regression, where the variance of the noise process contaminating the data is a smooth function of the explanatory variables. The use of leave-one-out cross-validation is shown to eliminate the bias inherent in estimates of the predictive variance. Results obtained on all three regression tasks comprising the predictive uncertainty challenge demonstrate the value of this approach.},
  language = {en},
  number = {3944},
  urldate = {2015-05-10},
  booktitle = {Machine {Learning} {Challenges}. {Evaluating} {Predictive} {Uncertainty}, {Visual} {Object} {Classification}, and {Recognising} {Textual} {Entailment}},
  publisher = {Springer Berlin Heidelberg},
  author = {Cawley, Gavin C. and Talbot, Nicola L. C. and Chapelle, Olivier},
  editor = {Quiñonero-Candela, Joaquin and Dagan, Ido and Magnini, Bernardo and d’Alché-Buc, Florence},
  year = {2006},
  pages = {56--77}
}
