List Learning with Attribute Noise. Cheraghchi, M., Grigorescu, E., Juba, B., Wimmer, K., & Xie, N. In Banerjee, A. & Fukumizu, K., editors, Proceedings of The 24th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 130 of Proceedings of Machine Learning Research, pages 2215–2223, 13–15 Apr 2021. PMLR.
Abstract: We introduce and study the model of list learning with attribute noise. Learning with attribute noise was introduced by Shackelford and Volper (COLT 1988) as a variant of PAC learning, in which the algorithm has access to noisy examples and uncorrupted labels, and the goal is to recover an accurate hypothesis. Sloan (COLT 1988) and Goldman and Sloan (Algorithmica 1995) discovered information-theoretic limits to learning in this model, which have impeded further progress. In this article we extend the model to that of list learning, drawing inspiration from the list-decoding model in coding theory, and its recent variant studied in the context of learning. On the positive side, we show that sparse conjunctions can be efficiently list learned under some assumptions on the underlying ground-truth distribution. On the negative side, our results show that even in the list-learning model, efficient learning of parities and majorities is not possible regardless of the representation used.
@INPROCEEDINGS{ref:CGJWX21,
  author    = {Mahdi Cheraghchi and Elena Grigorescu and Brendan Juba and
               Karl Wimmer and Ning Xie},
  title     = {List Learning with Attribute Noise},
  year      = 2021,
  booktitle = {Proceedings of The 24th International Conference on Artificial
               Intelligence and Statistics (AISTATS)},
  pages     = {2215--2223},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  url_Link  = {http://proceedings.mlr.press/v130/cheraghchi21a.html},
  url_Paper = {https://arxiv.org/abs/2006.06850},
  abstract  = {We introduce and study the model of list learning with
               attribute noise. Learning with attribute noise was introduced
               by Shackelford and Volper (COLT 1988) as a variant of PAC
               learning, in which the algorithm has access to noisy examples
               and uncorrupted labels, and the goal is to recover an accurate
               hypothesis. Sloan (COLT 1988) and Goldman and Sloan
               (Algorithmica 1995) discovered information-theoretic limits to
               learning in this model, which have impeded further progress.
               In this article we extend the model to that of list learning,
               drawing inspiration from the list-decoding model in coding
               theory, and its recent variant studied in the context of
               learning. On the positive side, we show that sparse
               conjunctions can be efficiently list learned under some
               assumptions on the underlying ground-truth distribution. On
               the negative side, our results show that even in the
               list-learning model, efficient learning of parities and
               majorities is not possible regardless of the representation
               used.}
}