Data decisions and theoretical implications when adversarially learning fair representations. Beutel, A., Chen, J., Zhao, Z., & Chi, E. H. July 2017. arXiv:1707.00075 [cs.LG].
Abstract: How can we learn a classifier that is "fair" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.
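The abstract describes the setup only at a high level: a neural network learns a latent representation, and an adversary trained to recover the sensitive attribute is used to push that information out of the representation. Below is a minimal Python/PyTorch sketch of that general pattern, assuming a gradient-reversal-style adversary attached to a shared encoder; the module names, layer sizes, and the adv_weight knob are illustrative and not taken from the paper.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; scales the gradient by -lam on the backward
    # pass, so the encoder is pushed to remove whatever the adversary can exploit.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class FairClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim=32, adv_weight=1.0):
        super().__init__()
        self.adv_weight = adv_weight
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.task_head = nn.Linear(hidden_dim, 1)   # predicts the target label y
        self.adv_head = nn.Linear(hidden_dim, 1)    # tries to recover the sensitive attribute a

    def forward(self, x):
        z = self.encoder(x)                         # shared latent representation
        y_logit = self.task_head(z)
        a_logit = self.adv_head(GradReverse.apply(z, self.adv_weight))
        return y_logit, a_logit

def train_step(model, opt, x, y, x_sens, a_sens):
    # x, y: ordinary training batch; x_sens, a_sens: the (possibly much smaller)
    # batch for which the sensitive attribute is known, used only by the adversary.
    bce = nn.BCEWithLogitsLoss()
    y_logit, _ = model(x)
    _, a_logit = model(x_sens)
    loss = bce(y_logit, y) + bce(a_logit, a_sens)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Two of the abstract's observations map onto this sketch directly: the adversarial branch only sees the batch that carries the sensitive label, which is why a small amount of such data can suffice, and whatever distribution that batch is drawn from determines what the adversary can detect, i.e. which notion of fairness is effectively enforced.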
@unpublished{beutel_data_2017,
title = {Data decisions and theoretical implications when adversarially learning fair representations},
url = {http://arxiv.org/abs/1707.00075},
abstract = {How can we learn a classifier that is "fair" for a protected or sensitive
group, when we do not know if the input to the classifier belongs to the
protected group? How can we train such a classifier when data on the
protected group is difficult to attain? In many settings, finding out the
sensitive input attribute can be prohibitively expensive even during model
training, and sometimes impossible during model serving. For example, in
recommender systems, if we want to predict if a user will click on a given
recommendation, we often do not know many attributes of the user, e.g.,
race or age, and many attributes of the content are hard to determine,
e.g., the language or topic. Thus, it is not feasible to use a different
classifier calibrated based on knowledge of the sensitive attribute. Here,
we use an adversarial training procedure to remove information about the
sensitive attribute from the latent representation learned by a neural
network. In particular, we study how the choice of data for the
adversarial training affects the resulting fairness properties. We find
two interesting results: a small amount of data is needed to train these
adversarial models, and the data distribution empirically drives the
adversary's notion of fairness.},
author = {Beutel, Alex and Chen, Jilin and Zhao, Zhe and Chi, Ed H},
month = jul,
year = {2017},
note = {arXiv:1707.00075 [cs.LG]},
}
{"_id":"aKr8QX3tkMtNDwwSK","bibbaseid":"beutel-chen-zhao-chi-datadecisionsandtheoreticalimplicationswhenadversariallylearningfairrepresentations-2017","author_short":["Beutel, A.","Chen, J.","Zhao, Z.","Chi, E. H"],"bibdata":{"bibtype":"unpublished","type":"unpublished","title":"Data decisions and theoretical implications when adversarially learning fair representations","url":"http://arxiv.org/abs/1707.00075","abstract":"How can we learn a classifier that is \"fair\" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training effects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.","author":[{"propositions":[],"lastnames":["Beutel"],"firstnames":["Alex"],"suffixes":[]},{"propositions":[],"lastnames":["Chen"],"firstnames":["Jilin"],"suffixes":[]},{"propositions":[],"lastnames":["Zhao"],"firstnames":["Zhe"],"suffixes":[]},{"propositions":[],"lastnames":["Chi"],"firstnames":["Ed","H"],"suffixes":[]}],"month":"July","year":"2017","note":"ISBN: 1707.00075 Publication Title: arXiv [cs.LG]","bibtex":"@unpublished{beutel_data_2017,\n\ttitle = {Data decisions and theoretical implications when adversarially learning fair representations},\n\turl = {http://arxiv.org/abs/1707.00075},\n\tabstract = {How can we learn a classifier that is \"fair\" for a protected or sensitive\ngroup, when we do not know if the input to the classifier belongs to the\nprotected group? How can we train such a classifier when data on the\nprotected group is difficult to attain? In many settings, finding out the\nsensitive input attribute can be prohibitively expensive even during model\ntraining, and sometimes impossible during model serving. For example, in\nrecommender systems, if we want to predict if a user will click on a given\nrecommendation, we often do not know many attributes of the user, e.g.,\nrace or age, and many attributes of the content are hard to determine,\ne.g., the language or topic. Thus, it is not feasible to use a different\nclassifier calibrated based on knowledge of the sensitive attribute. Here,\nwe use an adversarial training procedure to remove information about the\nsensitive attribute from the latent representation learned by a neural\nnetwork. In particular, we study how the choice of data for the\nadversarial training effects the resulting fairness properties. 
We find\ntwo interesting results: a small amount of data is needed to train these\nadversarial models, and the data distribution empirically drives the\nadversary's notion of fairness.},\n\tauthor = {Beutel, Alex and Chen, Jilin and Zhao, Zhe and Chi, Ed H},\n\tmonth = jul,\n\tyear = {2017},\n\tnote = {ISBN: 1707.00075\nPublication Title: arXiv [cs.LG]},\n}\n\n","author_short":["Beutel, A.","Chen, J.","Zhao, Z.","Chi, E. H"],"key":"beutel_data_2017","id":"beutel_data_2017","bibbaseid":"beutel-chen-zhao-chi-datadecisionsandtheoreticalimplicationswhenadversariallylearningfairrepresentations-2017","role":"author","urls":{"Paper":"http://arxiv.org/abs/1707.00075"},"metadata":{"authorlinks":{}}},"bibtype":"unpublished","biburl":"https://api.zotero.org/users/6655/collections/PXABRENG/items?key=f7dGmoR42f4Se5vp1eOrI4Kf&format=bibtex&limit=100","dataSources":["iXpvky2uc5Bb3jNYX"],"keywords":[],"search_terms":["data","decisions","theoretical","implications","adversarially","learning","fair","representations","beutel","chen","zhao","chi"],"title":"Data decisions and theoretical implications when adversarially learning fair representations","year":2017}