Kuhlman, C., VanValkenburg, M., & Rundensteiner, E. (2019). FARE: Diagnostics for Fair Ranking Using Pairwise Error Metrics. In The World Wide Web Conference (WWW '19), pages 2936–2942. ACM, San Francisco, CA, USA.
@inproceedings{kuhlman_fare:_2019,
	location = {New York, {NY}, {USA}},
	title = {{FARE}: Diagnostics for Fair Ranking Using Pairwise Error Metrics},
	isbn = {978-1-4503-6674-8},
	url = {https://doi.org/10.1145/3308558.3313443},
	doi = {10.1145/3308558.3313443},
	series = {{WWW} '19},
	shorttitle = {{FARE}},
	abstract = {Ranking, used extensively online and as a critical tool for decision making across many domains, may embed unfair bias. Tools to measure and correct for discriminatory bias are required to ensure that ranking models do not perpetuate unfair practices. Recently, a number of error-based criteria have been proposed to assess fairness with regard to the treatment of protected groups (as determined by sensitive data attributes, e.g., race, gender, or age). However this has largely been limited to classification tasks, and error metrics used in these approaches are not applicable for ranking. Therefore, in this work we propose to broaden the scope of fairness assessment to include error-based fairness criteria for rankings. Our approach supports three criteria: Rank Equality, Rank Calibration, and Rank Parity, which cover a broad spectrum of fairness considerations from proportional group representation to error rate similarity. The underlying error metrics are formulated to be rank-appropriate, using pairwise discordance to measure prediction error in a model-agnostic fashion. Based on this foundation, we then design a fair auditing mechanism which captures group treatment throughout the entire ranking, generating in-depth yet nuanced diagnostics. We demonstrate the efficacy of our error metrics using real-world scenarios, exposing trade-offs among fairness criteria and providing guidance in the selection of fair-ranking algorithms.},
	pages = {2936--2942},
	booktitle = {The World Wide Web Conference},
	publisher = {{ACM}},
	author = {Kuhlman, Caitlin and VanValkenburg, MaryAnn and Rundensteiner, Elke},
	urldate = {2019-07-10},
	date = {2019},
	venue = {San Francisco, {CA}, {USA}},
	keywords = {Fairness, Fair Ranking, Fairness Auditing, Pairwise Fairness}
}