Classification and Its Consequences for Online Harassment: Design Insights from HeartMob. Blackwell, L., Dimond, J., Schoenebeck, S., & Lampe, C. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW):24:1–24:19, December, 2017.
Abstract: Online harassment is a pervasive and pernicious problem. Techniques like natural language processing and machine learning are promising approaches for identifying abusive language, but they fail to address structural power imbalances perpetuated by automated labeling and classification. Similarly, platform policies and reporting tools are designed for a seemingly homogenous user base and do not account for individual experiences and systems of social oppression. This paper describes the design and evaluation of HeartMob, a platform built by and for people who are disproportionately affected by the most severe forms of online harassment. We conducted interviews with 18 HeartMob users, both targets and supporters, about their harassment experiences and their use of the site. We examine systems of classification enacted by technical systems, platform policies, and users to demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2) labeling motivates bystanders to provide support; and 3) labeling content as harassment is critical for surfacing community norms around appropriate user behavior. We discuss these results through the lens of Bowker and Star's classification theories and describe implications for labeling and classifying online abuse. Finally, informed by intersectional feminist theory, we argue that fully addressing online harassment requires the ongoing integration of vulnerable users' needs into the design and moderation of online platforms.
@article{blackwell_classification_2017,
title = {Classification and {Its} {Consequences} for {Online} {Harassment}: {Design} {Insights} from {HeartMob}},
volume = {1},
shorttitle = {Classification and {Its} {Consequences} for {Online} {Harassment}},
url = {https://doi.org/10.1145/3134659},
doi = {10.1145/3134659},
abstract = {Online harassment is a pervasive and pernicious problem. Techniques like natural language processing and machine learning are promising approaches for identifying abusive language, but they fail to address structural power imbalances perpetuated by automated labeling and classification. Similarly, platform policies and reporting tools are designed for a seemingly homogenous user base and do not account for individual experiences and systems of social oppression. This paper describes the design and evaluation of HeartMob, a platform built by and for people who are disproportionately affected by the most severe forms of online harassment. We conducted interviews with 18 HeartMob users, both targets and supporters, about their harassment experiences and their use of the site. We examine systems of classification enacted by technical systems, platform policies, and users to demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2) labeling motivates bystanders to provide support; and 3) labeling content as harassment is critical for surfacing community norms around appropriate user behavior. We discuss these results through the lens of Bowker and Star's classification theories and describe implications for labeling and classifying online abuse. Finally, informed by intersectional feminist theory, we argue that fully addressing online harassment requires the ongoing integration of vulnerable users' needs into the design and moderation of online platforms.},
number = {CSCW},
urldate = {2022-03-07},
journal = {Proceedings of the ACM on Human-Computer Interaction},
author = {Blackwell, Lindsay and Dimond, Jill and Schoenebeck, Sarita and Lampe, Cliff},
month = dec,
year = {2017},
keywords = {bystanders, classification, intersectionality, labeling, moderation, online harassment, social norms, support},
pages = {24:1--24:19},
}