  2020 (5)
Deep Double Descent: Where Bigger Models and More Data Hurt. Preetum Nakkiran; Gal Kaplun; Yamini Bansal; Tristan Yang; Boaz Barak; and Ilya Sutskever. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020, 2020. OpenReview.net
Optimal Regularization Can Mitigate Double Descent. Preetum Nakkiran; Prayaag Venkat; Sham M. Kakade; and Tengyu Ma. CoRR, abs/2003.01897. 2020.
Learning Rate Annealing Can Provably Help Generalization, Even for Convex Problems. Preetum Nakkiran. OPT 2020 Workshop. 2020.
Distributional Generalization: A New Kind of Generalization. Preetum Nakkiran; and Yamini Bansal. CoRR, abs/2009.08092. 2020.
The Deep Bootstrap: Good Online Learners are Good Offline Generalizers. Preetum Nakkiran; Behnam Neyshabur; and Hanie Sedghi. CoRR, abs/2010.08127. 2020.
  2019 (6)
Tracking the ℓ2 Norm with Constant Update Time. Chi-Ning Chou; Zhixian Lei; and Preetum Nakkiran. In APPROX/RANDOM 2019, September 20-22, 2019, volume 145 of LIPIcs, pages 2:1–2:15, 2019. Schloss Dagstuhl - Leibniz-Zentrum für Informatik
Computational Limitations in Robust Classification and Win-Win Results. Akshay Degwekar; Preetum Nakkiran; and Vinod Vaikuntanathan. In Alina Beygelzimer and Daniel Hsu, editors, Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA, volume 99 of Proceedings of Machine Learning Research, pages 994–1028, 2019. PMLR
Algorithmic Polarization for Hidden Markov Models. Venkatesan Guruswami; Preetum Nakkiran; and Madhu Sudan. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, volume 124 of LIPIcs, pages 39:1–39:19, 2019. Schloss Dagstuhl - Leibniz-Zentrum für Informatik
SGD on Neural Networks Learns Functions of Increasing Complexity. Preetum Nakkiran; Gal Kaplun; Dimitris Kalimeris; Tristan Yang; Benjamin L. Edelman; Fred Zhang; and Boaz Barak. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 3491–3501, 2019.
Adversarial Robustness May Be at Odds With Simplicity. Preetum Nakkiran. CoRR, abs/1901.00532. 2019.
More Data Can Hurt for Linear Regression: Sample-wise Double Descent. Preetum Nakkiran. CoRR, abs/1912.07242. 2019.
  2018 (3)
General strong polarization. Jaroslaw Blasiok; Venkatesan Guruswami; Preetum Nakkiran; Atri Rudra; and Madhu Sudan. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 485–492, 2018. ACM
The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science. Preetum Nakkiran; and Jaroslaw Blasiok. CoRR, abs/1809.05596. 2018.
Predicting Positive and Negative Links with Noisy Queries: Theory & Practice. Charalampos E. Tsourakakis; Michael Mitzenmacher; Jaroslaw Blasiok; Ben Lawson; Preetum Nakkiran; and Vasileios Nakos. Allerton 2018. 2018.
  2016 (2)
Near-Optimal UGC-hardness of Approximating Max k-CSP_R. Pasin Manurangsi; Preetum Nakkiran; and Luca Trevisan. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2016, September 7-9, 2016, Paris, France, volume 60 of LIPIcs, pages 15:1–15:28, 2016. Schloss Dagstuhl - Leibniz-Zentrum für Informatik
Optimal systematic distributed storage codes with fast encoding. Preetum Nakkiran; K. V. Rashmi; and Kannan Ramchandran. In IEEE International Symposium on Information Theory, ISIT 2016, Barcelona, Spain, July 10-15, 2016, pages 430–434, 2016. IEEE
  2015 (3)
Having Your Cake and Eating It Too: Jointly Optimal Erasure Codes for I/O, Storage, and Network-bandwidth. K. V. Rashmi; Preetum Nakkiran; Jingyan Wang; Nihar B. Shah; and Kannan Ramchandran. In Jiri Schindler and Erez Zadok, editors, Proceedings of the 13th USENIX Conference on File and Storage Technologies, FAST 2015, Santa Clara, CA, USA, February 16-19, 2015, pages 81–94, 2015. USENIX Association
Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks. Rohit Prabhavalkar; Raziel Alvarez; Carolina Parada; Preetum Nakkiran; and Tara N. Sainath. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, pages 4704–4708, 2015. IEEE
Compressing deep neural networks using a rank-constrained topology. Preetum Nakkiran; Raziel Alvarez; Rohit Prabhavalkar; and Carolina Parada. In INTERSPEECH 2015, 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, September 6-10, 2015, pages 1473–1477, 2015. ISCA
  2014 (1)
Fundamental limits on communication for oblivious updates in storage networks. Preetum Nakkiran; Nihar B. Shah; and K. V. Rashmi. In IEEE Global Communications Conference, GLOBECOM 2014, Austin, TX, USA, December 8-12, 2014, pages 2363–2368, 2014. IEEE