List Learning with Attribute Noise. Cheraghchi, M., Grigorescu, E., Juba, B., Wimmer, K., & Xie, N. In Banerjee, A. & Fukumizu, K., editors, Proceedings of The 24th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 130 of Proceedings of Machine Learning Research, pages 2215–2223, 13–15 Apr 2021. PMLR.
@INPROCEEDINGS{ref:CGJWX21,
  author    = {Mahdi Cheraghchi and Elena Grigorescu and Brendan Juba and
               Karl Wimmer and Ning Xie},
  title     = {List Learning with Attribute Noise},
  booktitle = {Proceedings of The 24th International Conference on
               Artificial Intelligence and Statistics (AISTATS)},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  pages     = {2215--2223},
  month     = {13--15 Apr},
  publisher = {PMLR},
  url_Link  = {http://proceedings.mlr.press/v130/cheraghchi21a.html},
  url_Paper = {https://arxiv.org/abs/2006.06850},
  abstract  = {We introduce and study the model of list learning with
               attribute noise. Learning with attribute noise was
               introduced by Shackelford and Volper (COLT 1988) as a
               variant of PAC learning, in which the algorithm has access
               to noisy examples and uncorrupted labels, and the goal is
               to recover an accurate hypothesis. Sloan (COLT 1988) and
               Goldman and Sloan (Algorithmica 1995) discovered
               information-theoretic limits to learning in this model,
               which have impeded further progress. In this article we
               extend the model to that of list learning, drawing
               inspiration from the list-decoding model in coding theory,
               and its recent variant studied in the context of learning.
               On the positive side, we show that sparse conjunctions can
               be efficiently list learned under some assumptions on the
               underlying ground-truth distribution. On the negative
               side, our results show that even in the list-learning
               model, efficient learning of parities and majorities is
               not possible regardless of the representation used.}
}
