Training individually fair ML models with Sensitive Subspace Robustness. Yurochkin, M., Bower, A., & Sun, Y. 2019.
We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases.
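The core recipe in the abstract (minimize the worst-case loss under perturbations of the inputs along a sensitive direction) can be illustrated with a toy sketch. This is not the paper's actual SenSR algorithm, which solves a Wasserstein distributionally robust problem over a learned sensitive subspace; it is a simplified adversarial-training analogue, and the data, the sensitive direction `a`, and all hyperparameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on the first two features;
# the third feature plays the role of a sensitive attribute.
n, d = 200, 3
X = rng.normal(size=(n, d))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

# Assumed sensitive direction (hypothetical: last coordinate is sensitive).
a = np.zeros(d)
a[2] = 1.0

def grad(w, X, y):
    """Gradient of the mean logistic loss with respect to w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

w = np.zeros(d)
lr, eps = 0.5, 1.0  # step size and perturbation budget (both made up)
for step in range(200):
    # Inner maximization (approximate): per-example gradient of the loss
    # w.r.t. the input is (p_i - y_i) * w; shift each input by +/- eps
    # along the sensitive direction, whichever increases the loss.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(g_x @ a)[:, None] * a[None, :]
    # Outer minimization: descend on the worst-case (perturbed) loss.
    w -= lr * grad(w, X_adv, y)

# Robust training discourages any weight on the sensitive coordinate,
# so |w[2]| stays small relative to the informative weights.
print(w)
```

Because any weight placed on the sensitive coordinate is immediately exploited by the inner perturbation, the outer minimization drives that weight toward zero, which is the sense in which the trained model's performance becomes invariant to sensitive changes.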
@article{yurochkin_training_2019,
	title = {Training individually fair {ML} models with {Sensitive} {Subspace} {Robustness}},
	doi = {10.48550/arXiv.1907.00020},
	journal = {arXiv preprint arXiv:1907.00020},
	abstract = {We consider training machine learning models that are fair in the sense that
their performance is invariant under certain sensitive perturbations to the
inputs. For example, the performance of a resume screening system should be
invariant under changes to the gender and/or ethnicity of the applicant. We
formalize this notion of algorithmic fairness as a variant of individual
fairness and develop a distributionally robust optimization approach to enforce
it during training. We also demonstrate the effectiveness of the approach on
two ML tasks that are susceptible to gender and racial biases.},
	author = {Yurochkin, Mikhail and Bower, Amanda and Sun, Yuekai},
	year = {2019},
}
