  2024 (1)
Pandora's White-Box: Increased Training Data Leakage in Open LLMs. Wang, J. G.; Wang, J.; Li, M.; and Neel, S. CoRR, abs/2402.17012. 2024.
  2023 (8)
On the Privacy Risks of Algorithmic Recourse. Pawelczyk, M.; Lakkaraju, H.; and Neel, S. In Ruiz, F. J. R.; Dy, J. G.; and van de Meent, J., editor(s), International Conference on Artificial Intelligence and Statistics, 25-27 April 2023, Palau de Congressos, Valencia, Spain, volume 206 of Proceedings of Machine Learning Research, pages 9680–9696, 2023. PMLR
MoPe: Model Perturbation based Privacy Attacks on Language Models. Li, M.; Wang, J.; Wang, J. G.; and Neel, S. In Bouamor, H.; Pino, J.; and Bali, K., editor(s), Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 13647–13660, 2023. Association for Computational Linguistics
Model Explanation Disparities as a Fairness Diagnostic. Chang, P. W.; Fishman, L.; and Neel, S. CoRR, abs/2303.01704. 2023.
PRIMO: Private Regression in Multiple Outcomes. Neel, S. CoRR, abs/2303.04195. 2023.
In-Context Unlearning: Language Models as Few Shot Unlearners. Pawelczyk, M.; Neel, S.; and Lakkaraju, H. CoRR, abs/2310.07579. 2023.
Black-Box Training Data Identification in GANs via Detector Networks. Olagoke, L.; Vadhan, S.; and Neel, S. CoRR, abs/2310.12063. 2023.
MoPe: Model Perturbation-based Privacy Attacks on Language Models. Li, M.; Wang, J.; Wang, J. G.; and Neel, S. CoRR, abs/2310.14369. 2023.
Privacy Issues in Large Language Models: A Survey. Neel, S.; and Chang, P. W. CoRR, abs/2312.06717. 2023.
  2022 (1)
On the Privacy Risks of Algorithmic Recourse. Pawelczyk, M.; Lakkaraju, H.; and Neel, S. CoRR, abs/2211.05427. 2022.
  2021 (5)
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning. Neel, S.; Roth, A.; and Sharifi-Malvajerdi, S. In Feldman, V.; Ligett, K.; and Sabato, S., editor(s), Algorithmic Learning Theory, 16-19 March 2021, Virtual Conference, Worldwide, volume 132 of Proceedings of Machine Learning Research, pages 931–962, 2021. PMLR
An Algorithmic Framework for Fairness Elicitation. Jung, C.; Kearns, M.; Neel, S.; Roth, A.; Stapleton, L.; and Wu, Z. S. In Ligett, K.; and Gupta, S., editor(s), 2nd Symposium on Foundations of Responsible Computing, FORC 2021, June 9-11, 2021, Virtual Conference, volume 192 of LIPIcs, pages 2:1–2:19, 2021. Schloss Dagstuhl - Leibniz-Zentrum für Informatik
Adaptive Machine Unlearning. Gupta, V.; Jung, C.; Neel, S.; Roth, A.; Sharifi-Malvajerdi, S.; and Waites, C. In Ranzato, M.; Beygelzimer, A.; Dauphin, Y. N.; Liang, P.; and Vaughan, J. W., editor(s), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 16319–16330, 2021.
A new analysis of differential privacy's generalization guarantees (invited paper). Jung, C.; Ligett, K.; Neel, S.; Roth, A.; Sharifi-Malvajerdi, S.; and Shenfeld, M. In Khuller, S.; and Williams, V. V., editor(s), STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, page 9, 2021. ACM
Adaptive Machine Unlearning. Gupta, V.; Jung, C.; Neel, S.; Roth, A.; Sharifi-Malvajerdi, S.; and Waites, C. CoRR, abs/2106.04378. 2021.
  2020 (4)
Optimal, truthful, and private securities lending. Diana, E.; Kearns, M.; Neel, S.; and Roth, A. In Balch, T., editor(s), ICAIF '20: The First ACM International Conference on AI in Finance, New York, NY, USA, October 15-16, 2020, pages 48:1–48:8, 2020. ACM
Oracle Efficient Private Non-Convex Optimization. Neel, S.; Roth, A.; Vietri, G.; and Wu, Z. S. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 7243–7252, 2020. PMLR
A New Analysis of Differential Privacy's Generalization Guarantees. Jung, C.; Ligett, K.; Neel, S.; Roth, A.; Sharifi-Malvajerdi, S.; and Shenfeld, M. In Vidick, T., editor(s), 11th Innovations in Theoretical Computer Science Conference, ITCS 2020, January 12-14, 2020, Seattle, Washington, USA, volume 151 of LIPIcs, pages 31:1–31:17, 2020. Schloss Dagstuhl - Leibniz-Zentrum für Informatik
Descent-to-Delete: Gradient-Based Methods for Machine Unlearning. Neel, S.; Roth, A.; and Sharifi-Malvajerdi, S. CoRR, abs/2007.02923. 2020.
  2019 (10)
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM. Wu, Z. S.; Roth, A.; Ligett, K.; Waggoner, B.; and Neel, S. Journal of Privacy and Confidentiality, 9(2). 2019.
An Empirical Study of Rich Subgroup Fairness for Machine Learning. Kearns, M. J.; Neel, S.; Roth, A.; and Wu, Z. S. In danah boyd; and Morgenstern, J. H., editor(s), Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, Atlanta, GA, USA, January 29-31, 2019, pages 100–109, 2019. ACM
Fair Algorithms for Learning in Allocation Problems. Elzayn, H.; Jabbari, S.; Jung, C.; Kearns, M. J.; Neel, S.; Roth, A.; and Schutzman, Z. In danah boyd; and Morgenstern, J. H., editor(s), Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, Atlanta, GA, USA, January 29-31, 2019, pages 170–179, 2019. ACM
How to Use Heuristics for Differential Privacy. Neel, S.; Roth, A.; and Wu, Z. S. In Zuckerman, D., editor(s), 60th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2019, Baltimore, Maryland, USA, November 9-12, 2019, pages 72–93, 2019. IEEE Computer Society
The Role of Interactivity in Local Differential Privacy. Joseph, M.; Mao, J.; Neel, S.; and Roth, A. In Zuckerman, D., editor(s), 60th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2019, Baltimore, Maryland, USA, November 9-12, 2019, pages 94–105, 2019. IEEE Computer Society
The Role of Interactivity in Local Differential Privacy. Joseph, M.; Mao, J.; Neel, S.; and Roth, A. CoRR, abs/1904.03564. 2019.
Eliciting and Enforcing Subjective Individual Fairness. Jung, C.; Kearns, M. J.; Neel, S.; Roth, A.; Stapleton, L.; and Wu, Z. S. CoRR, abs/1905.10660. 2019.
Differentially Private Objective Perturbation: Beyond Smoothness and Convexity. Neel, S.; Roth, A.; Vietri, G.; and Wu, Z. S. CoRR, abs/1909.01783. 2019.
A New Analysis of Differential Privacy's Generalization Guarantees. Jung, C.; Ligett, K.; Neel, S.; Roth, A.; Sharifi-Malvajerdi, S.; and Shenfeld, M. CoRR, abs/1909.03577. 2019.
Optimal, Truthful, and Private Securities Lending. Diana, E.; Kearns, M. J.; Neel, S.; and Roth, A. CoRR, abs/1912.06202. 2019.
  2018 (7)
Meritocratic Fairness for Infinite and Contextual Bandits. Joseph, M.; Kearns, M. J.; Morgenstern, J.; Neel, S.; and Roth, A. In Furman, J.; Marchant, G. E.; Price, H.; and Rossi, F., editor(s), Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018, New Orleans, LA, USA, February 02-03, 2018, pages 158–163, 2018. ACM
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. Kearns, M. J.; Neel, S.; Roth, A.; and Wu, Z. S. In Dy, J. G.; and Krause, A., editor(s), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2569–2577, 2018. PMLR
Mitigating Bias in Adaptive Data Gathering via Differential Privacy. Neel, S.; and Roth, A. In Dy, J. G.; and Krause, A., editor(s), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3717–3726, 2018. PMLR
Mitigating Bias in Adaptive Data Gathering via Differential Privacy. Neel, S.; and Roth, A. CoRR, abs/1806.02329. 2018.
An Empirical Study of Rich Subgroup Fairness for Machine Learning. Kearns, M. J.; Neel, S.; Roth, A.; and Wu, Z. S. CoRR, abs/1808.08166. 2018.
Fair Algorithms for Learning in Allocation Problems. Elzayn, H.; Jabbari, S.; Jung, C.; Kearns, M. J.; Neel, S.; Roth, A.; and Schutzman, Z. CoRR, abs/1808.10549. 2018.
How to Use Heuristics for Differential Privacy. Neel, S.; Roth, A.; and Wu, Z. S. CoRR, abs/1811.07765. 2018.
  2017 (4)
Accuracy First: Selecting a Differential Privacy Level for Accuracy Constrained ERM. Ligett, K.; Neel, S.; Roth, A.; Waggoner, B.; and Wu, Z. S. In Guyon, I.; von Luxburg, U.; Bengio, S.; Wallach, H. M.; Fergus, R.; Vishwanathan, S. V. N.; and Garnett, R., editor(s), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 2566–2576, 2017.
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM. Ligett, K.; Neel, S.; Roth, A.; Waggoner, B.; and Wu, Z. S. CoRR, abs/1705.10829. 2017.
A Convex Framework for Fair Regression. Berk, R.; Heidari, H.; Jabbari, S.; Joseph, M.; Kearns, M. J.; Morgenstern, J.; Neel, S.; and Roth, A. CoRR, abs/1706.02409. 2017.
Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. Kearns, M. J.; Neel, S.; Roth, A.; and Wu, Z. S. CoRR, abs/1711.05144. 2017.
  2016 (1)
Rawlsian Fairness for Machine Learning. Joseph, M.; Kearns, M. J.; Morgenstern, J.; Neel, S.; and Roth, A. CoRR, abs/1610.09559. 2016.