Data decisions and theoretical implications when adversarially learning fair representations. Beutel, A., Chen, J., Zhao, Z., & Chi, E. H. July 2017. arXiv:1707.00075 [cs.LG]
How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
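The setup described in the abstract is a shared encoder whose latent representation feeds both a task classifier and an adversary that tries to predict the sensitive attribute. The following is a minimal sketch of that idea, assuming a PyTorch implementation with a gradient-reversal layer; it is not the authors' code, and the layer sizes, names, and the train_step helper are illustrative assumptions.

# Minimal sketch (assumed, not the paper's code) of adversarial fair
# representation learning: a shared encoder feeds a task head and an
# adversary head; a gradient-reversal layer pushes the encoder to discard
# information about the sensitive attribute.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class FairModel(nn.Module):
    def __init__(self, d_in, d_hidden=32, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.task_head = nn.Linear(d_hidden, 1)   # predicts the target label
        self.adv_head = nn.Linear(d_hidden, 1)    # predicts the sensitive attribute

    def forward(self, x):
        h = self.encoder(x)
        y_logit = self.task_head(h)
        a_logit = self.adv_head(GradReverse.apply(h, self.lambd))
        return y_logit, a_logit

def train_step(model, opt, x, y, x_sens, a_sens):
    # The task loss uses all examples; the adversary loss can be computed on
    # only the (possibly small) subset where the sensitive attribute is
    # observed -- the data-size question the paper studies empirically.
    bce = nn.BCEWithLogitsLoss()
    y_logit, _ = model(x)
    _, a_logit = model(x_sens)
    loss = bce(y_logit.squeeze(-1), y) + bce(a_logit.squeeze(-1), a_sens)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Because of the gradient reversal, minimizing the combined loss trains the adversary head to recover the sensitive attribute while pushing the encoder in the opposite direction, so the shared representation becomes less predictive of that attribute.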
@unpublished{beutel_data_2017,
	title = {Data decisions and theoretical implications when adversarially learning fair representations},
	url = {http://arxiv.org/abs/1707.00075},
	abstract = {How can we learn a classifier that is "fair" for a protected or sensitive
group, when we do not know if the input to the classifier belongs to the
protected group? How can we train such a classifier when data on the
protected group is difficult to attain? In many settings, finding out the
sensitive input attribute can be prohibitively expensive even during model
training, and sometimes impossible during model serving. For example, in
recommender systems, if we want to predict if a user will click on a given
recommendation, we often do not know many attributes of the user, e.g.,
race or age, and many attributes of the content are hard to determine,
e.g., the language or topic. Thus, it is not feasible to use a different
classifier calibrated based on knowledge of the sensitive attribute. Here,
we use an adversarial training procedure to remove information about the
sensitive attribute from the latent representation learned by a neural
network. In particular, we study how the choice of data for the
adversarial training affects the resulting fairness properties. We find
two interesting results: a small amount of data is needed to train these
adversarial models, and the data distribution empirically drives the
adversary's notion of fairness.},
	author = {Beutel, Alex and Chen, Jilin and Zhao, Zhe and Chi, Ed H},
	month = jul,
	year = {2017},
	note = {arXiv:1707.00075 [cs.LG]},
}
