Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal. Gupta, U., Dhamala, J., Kumar, V., Verma, A., Pruksachatkun, Y., Krishna, S., Gupta, R., Chang, K., Ver Steeg, G., & Galstyan, A. In Muresan, S., Nakov, P., & Villavicencio, A., editors, Findings of the Association for Computational Linguistics: ACL 2022, pages 658–678, Dublin, Ireland, May 2022. Association for Computational Linguistics.

Abstract: Language models excel at generating coherent text, and model compression techniques such as knowledge distillation have enabled their use in resource-constrained settings. However, these models can be biased in multiple ways, including the unfounded association of male and female genders with gender-neutral professions. Knowledge distillation without any fairness constraints may therefore preserve or exaggerate the teacher model's biases in the distilled model. To this end, we present a novel approach to mitigating gender disparity in text generation by learning a fair model during knowledge distillation. We propose two modifications to base knowledge distillation, both based on counterfactual role reversal: modifying teacher probabilities and augmenting the training set. We evaluate gender polarity across professions in open-ended text generated by the resulting distilled and finetuned GPT-2 models and demonstrate a substantial reduction in gender disparity with only a minor compromise in utility. Finally, we observe that language models that reduce gender polarity in language generation do not improve embedding fairness or downstream classification fairness.
@inproceedings{gupta-etal-2022-mitigating,
title = "Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal",
author = "Gupta, Umang and
Dhamala, Jwala and
Kumar, Varun and
Verma, Apurv and
Pruksachatkun, Yada and
Krishna, Satyapriya and
Gupta, Rahul and
Chang, Kai-Wei and
Ver Steeg, Greg and
Galstyan, Aram",
editor = "Muresan, Smaranda and
Nakov, Preslav and
Villavicencio, Aline",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.55",
doi = "10.18653/v1/2022.findings-acl.55",
pages = "658--678",
abstract = "Language models excel at generating coherent text, and model compression techniques such as knowledge distillation have enabled their use in resource-constrained settings. However, these models can be biased in multiple ways, including the unfounded association of male and female genders with gender-neutral professions. Therefore, knowledge distillation without any fairness constraints may preserve or exaggerate the teacher model{'}s biases onto the distilled model. To this end, we present a novel approach to mitigate gender disparity in text generation by learning a fair model during knowledge distillation. We propose two modifications to the base knowledge distillation based on counterfactual role reversal{---}modifying teacher probabilities and augmenting the training set. We evaluate gender polarity across professions in open-ended text generated from the resulting distilled and finetuned GPT{--}2 models and demonstrate a substantial reduction in gender disparity with only a minor compromise in utility. Finally, we observe that language models that reduce gender polarity in language generation do not improve embedding fairness or downstream classification fairness.",
}