On Measuring Social Biases in Sentence Encoders. May, C., Wang, A., Bordia, S., Bowman, S. R., & Rudinger, R. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics.
The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017). Meanwhile, research on learning reusable text representations has begun to explore sentence-level texts, with some sentence encoders seeing enthusiastic adoption. Accordingly, we extend the Word Embedding Association Test to measure bias in sentence encoders. We then test several sentence encoders, including state-of-the-art methods such as ELMo and BERT, for the social biases studied in prior work and two important biases that are difficult or impossible to test at the word level. We observe mixed results including suspicious patterns of sensitivity that suggest the test's assumptions may not hold in general. We conclude by proposing directions for future work on measuring bias in sentence encoders.
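The Sentence Encoder Association Test (SEAT) introduced here keeps the test statistic of the Word Embedding Association Test (Caliskan et al., 2017) and applies it to sentence embeddings, obtained by slotting target and attribute terms into simple templates. As a reading aid, below is a minimal NumPy sketch of the WEAT/SEAT effect size; the function names are illustrative and this is not the authors' released code (e.g., the choice of sample vs. population standard deviation is an assumption here):

import numpy as np

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean cosine similarity of w with attribute set A,
    # minus its mean cosine similarity with attribute set B.
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    # Cohen's-d-style effect size from Caliskan et al. (2017).
    # X, Y: target embeddings; A, B: attribute embeddings.
    # SEAT feeds in sentence embeddings instead of word embeddings.
    assoc_X = [association(x, A, B) for x in X]
    assoc_Y = [association(y, A, B) for y in Y]
    pooled_std = np.std(assoc_X + assoc_Y, ddof=1)
    return (np.mean(assoc_X) - np.mean(assoc_Y)) / pooled_std

Significance in the paper is assessed with a permutation test over assignments of the targets to X and Y; the effect size above is the headline number reported per bias test.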
@inproceedings{May2019,
abstract = {The Word Embedding Association Test shows that GloVe and word2vec word embeddings exhibit human-like implicit biases based on gender, race, and other social constructs (Caliskan et al., 2017). Meanwhile, research on learning reusable text representations has begun to explore sentence-level texts, with some sentence encoders seeing enthusiastic adoption. Accordingly, we extend the Word Embedding Association Test to measure bias in sentence encoders. We then test several sentence encoders, including state-of-the-art methods such as ELMo and BERT, for the social biases studied in prior work and two important biases that are difficult or impossible to test at the word level. We observe mixed results including suspicious patterns of sensitivity that suggest the test's assumptions may not hold in general. We conclude by proposing directions for future work on measuring bias in sentence encoders.},
address = {Stroudsburg, PA, USA},
archivePrefix = {arXiv},
arxivId = {1903.10561},
author = {May, Chandler and Wang, Alex and Bordia, Shikha and Bowman, Samuel R. and Rudinger, Rachel},
booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
doi = {10.18653/v1/N19-1063},
eprint = {1903.10561},
keywords = {method: association test,phenomenon: social bias},
pages = {622--628},
publisher = {Association for Computational Linguistics},
title = {{On Measuring Social Biases in Sentence Encoders}},
url = {http://aclweb.org/anthology/N19-1063},
year = {2019}
}
