Statistical inference for individual fairness. Maity, S., Xue, S., Yurochkin, M., & Sun, Y. 2021. doi:10.48550/arxiv.2103.16714

Abstract: As we rely on machine learning (ML) models to make more consequential decisions, the issue of ML models perpetuating or even exacerbating undesirable historical biases (e.g., gender and racial biases) has come to the fore of the public's attention. In this paper, we focus on the problem of detecting violations of individual fairness in ML models. We formalize the problem as measuring the susceptibility of ML models against a form of adversarial attack and develop a suite of inference tools for the adversarial cost function. The tools allow auditors to assess the individual fairness of ML models in a statistically-principled way: form confidence intervals for the worst-case performance differential between similar individuals and test hypotheses of model fairness with (asymptotic) non-coverage/Type I error rate control. We demonstrate the utility of our tools in a real-world case study.
@article{maity_statistical_2021,
title = {Statistical inference for individual fairness},
doi = {10.48550/arxiv.2103.16714},
abstract = {As we rely on machine learning (ML) models to make more consequential
decisions, the issue of ML models perpetuating or even exacerbating undesirable
historical biases (e.g., gender and racial biases) has come to the fore of the
public's attention. In this paper, we focus on the problem of detecting
violations of individual fairness in ML models. We formalize the problem as
measuring the susceptibility of ML models against a form of adversarial attack
and develop a suite of inference tools for the adversarial cost function. The
tools allow auditors to assess the individual fairness of ML models in a
statistically-principled way: form confidence intervals for the worst-case
performance differential between similar individuals and test hypotheses of
model fairness with (asymptotic) non-coverage/Type I error rate control. We
demonstrate the utility of our tools in a real-world case study.},
author = {Maity, Subha and Xue, Songkai and Yurochkin, Mikhail and Sun, Yuekai},
year = {2021},
}
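The abstract outlines an auditing recipe: for each audit point, search for a "similar" point (small distance under a problem-specific fair metric) on which the model performs worse, then build confidence intervals and tests from the resulting worst-case loss differentials. The Python sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. The fair metric, the random-search attack, the logistic model, and all names (fair_metric, worst_case_differential, audit_model, epsilon) are hypothetical stand-ins, and the paper's gradient-flow attack and asymptotic theory are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def fair_metric(x, x_prime, sensitive_dir):
    # Squared distance that discounts movement along the sensitive direction,
    # so points differing mainly in that direction count as "similar".
    diff = x_prime - x
    parallel = (diff @ sensitive_dir) * sensitive_dir
    orthogonal = diff - parallel
    return np.sum(orthogonal ** 2) + 0.01 * np.sum(parallel ** 2)

def logistic_loss(w, x, y):
    # Binary cross-entropy of a linear-logistic model with weights w.
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def worst_case_differential(w, x, y, sensitive_dir, epsilon, n_trials=200):
    # Crude random search for the largest loss increase within a fair-metric
    # ball of radius epsilon (a stand-in for the paper's adversarial attack).
    base = logistic_loss(w, x, y)
    best = 0.0
    for _ in range(n_trials):
        x_prime = x + rng.normal(scale=0.5, size=x.shape)
        if fair_metric(x, x_prime, sensitive_dir) <= epsilon:
            best = max(best, logistic_loss(w, x_prime, y) - base)
    return best

def audit_model(w, X, y, sensitive_dir, epsilon=1.0):
    # Mean worst-case differential over the audit set, with a 95%
    # normal-approximation confidence interval.
    diffs = np.array([worst_case_differential(w, X[i], y[i], sensitive_dir, epsilon)
                      for i in range(len(X))])
    mean = diffs.mean()
    se = diffs.std(ddof=1) / np.sqrt(len(diffs))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Toy usage: the model's weights load on the "sensitive" coordinate even though
# the labels do not depend on it, so the audit should flag large differentials.
X = rng.normal(size=(200, 3))
sensitive_dir = np.array([1.0, 0.0, 0.0])
w = np.array([3.0, 0.5, 0.5])
y = (X @ np.array([0.0, 0.5, 0.5]) + rng.normal(scale=0.3, size=200) > 0).astype(float)
mean_diff, ci = audit_model(w, X, y, sensitive_dir)
print(f"mean worst-case loss differential: {mean_diff:.3f}, "
      f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")

In this toy setup the interval will typically exclude zero because the model depends heavily on the sensitive coordinate; a model that ignores that coordinate would yield differentials near zero, which is the qualitative behavior the paper's test formalizes with Type I error control.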
{"_id":"htYoWmaBLHZFjjmTi","bibbaseid":"maity-xue-yurochkin-sun-statisticalinferenceforindividualfairness-2021","author_short":["Maity, S.","Xue, S.","Yurochkin, M.","Sun, Y."],"bibdata":{"bibtype":"article","type":"article","title":"Statistical inference for individual fairness","doi":"10.48550/arxiv.2103.16714","abstract":"As we rely on machine learning (ML) models to make more consequential decisions, the issue of ML models perpetuating or even exacerbating undesirable historical biases (e.g., gender and racial biases) has come to the fore of the public's attention. In this paper, we focus on the problem of detecting violations of individual fairness in ML models. We formalize the problem as measuring the susceptibility of ML models against a form of adversarial attack and develop a suite of inference tools for the adversarial cost function. The tools allow auditors to assess the individual fairness of ML models in a statistically-principled way: form confidence intervals for the worst-case performance differential between similar individuals and test hypotheses of model fairness with (asymptotic) non-coverage/Type I error rate control. We demonstrate the utility of our tools in a real-world case study.","author":[{"propositions":[],"lastnames":["Maity"],"firstnames":["Subha"],"suffixes":[]},{"propositions":[],"lastnames":["Xue"],"firstnames":["Songkai"],"suffixes":[]},{"propositions":[],"lastnames":["Yurochkin"],"firstnames":["Mikhail"],"suffixes":[]},{"propositions":[],"lastnames":["Sun"],"firstnames":["Yuekai"],"suffixes":[]}],"year":"2021","bibtex":"@article{maity_statistical_2021,\n\ttitle = {Statistical inference for individual fairness},\n\tdoi = {10.48550/arxiv.2103.16714},\n\tabstract = {As we rely on machine learning (ML) models to make more consequential\ndecisions, the issue of ML models perpetuating or even exacerbating undesirable\nhistorical biases (e.g., gender and racial biases) has come to the fore of the\npublic's attention. In this paper, we focus on the problem of detecting\nviolations of individual fairness in ML models. We formalize the problem as\nmeasuring the susceptibility of ML models against a form of adversarial attack\nand develop a suite of inference tools for the adversarial cost function. The\ntools allow auditors to assess the individual fairness of ML models in a\nstatistically-principled way: form confidence intervals for the worst-case\nperformance differential between similar individuals and test hypotheses of\nmodel fairness with (asymptotic) non-coverage/Type I error rate control. We\ndemonstrate the utility of our tools in a real-world case study.},\n\tauthor = {Maity, Subha and Xue, Songkai and Yurochkin, Mikhail and Sun, Yuekai},\n\tyear = {2021},\n}\n\n","author_short":["Maity, S.","Xue, S.","Yurochkin, M.","Sun, Y."],"key":"maity_statistical_2021","id":"maity_statistical_2021","bibbaseid":"maity-xue-yurochkin-sun-statisticalinferenceforindividualfairness-2021","role":"author","urls":{},"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"https://bibbase.org/f/Ceciz2iNjTZgQNtDc/mypubs_mar_2024.bib","dataSources":["m8Y57GfgnRrMKZTQS","epk5yKhDyD37NAsSC"],"keywords":[],"search_terms":["statistical","inference","individual","fairness","maity","xue","yurochkin","sun"],"title":"Statistical inference for individual fairness","year":2021}