Authors
Aditya Krishna Menon, Robert C Williamson
Publication date
2018/1/21
Conference
Conference on Fairness, Accountability and Transparency
Pages
107-118
Publisher
PMLR
Description
Binary classifiers are often required to possess fairness in the sense of not overly discriminating with respect to a feature deemed sensitive, e.g. race. We study the inherent tradeoffs in learning classifiers with a fairness constraint in the form of two questions: what is the best accuracy we can expect for a given level of fairness, and what is the nature of these optimal fairness-aware classifiers? To answer these questions, we provide three main contributions. First, we relate two existing fairness measures to cost-sensitive risks. Second, we show that for such cost-sensitive fairness measures, the optimal classifier is an instance-dependent thresholding of the class-probability function. Third, we relate the tradeoff between accuracy and fairness to the alignment between the target and sensitive features’ class-probabilities. A practical implication of our analysis is a simple approach to the fairness-aware problem which involves suitably thresholding class-probability estimates.
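To make the thresholding idea concrete, here is a minimal sketch (not the paper's exact procedure) of fairness-aware classification via group-dependent thresholds on class-probability estimates; the helper names fit_group_thresholds and predict_fair, the demographic-parity-style criterion, and the use of logistic regression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_group_thresholds(p_hat, s, base_threshold=0.5, grid=np.linspace(0.05, 0.95, 19)):
    """Pick, for each sensitive group, the threshold whose positive rate best
    matches the overall positive rate at base_threshold (a rough proxy for
    a demographic-parity-style fairness constraint)."""
    target_rate = np.mean(p_hat >= base_threshold)
    thresholds = {}
    for g in np.unique(s):
        rates = np.array([np.mean(p_hat[s == g] >= t) for t in grid])
        thresholds[g] = grid[int(np.argmin(np.abs(rates - target_rate)))]
    return thresholds

def predict_fair(p_hat, s, thresholds):
    """Threshold each instance's class-probability with its group's threshold."""
    t = np.array([thresholds[g] for g in s])
    return (p_hat >= t).astype(int)

# Hypothetical usage with features X, labels y, sensitive attribute s:
# clf = LogisticRegression().fit(X, y)
# p_hat = clf.predict_proba(X)[:, 1]          # class-probability estimates
# thresholds = fit_group_thresholds(p_hat, s)
# y_fair = predict_fair(p_hat, s, thresholds)
```

The sketch uses a threshold that depends only on the sensitive group; the paper's analysis concerns instance-dependent thresholds of the class-probability function more generally.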
Total citations
Per-year citation counts, 2018–2024 (shown as a chart on the source page)
Scholar articles
AK Menon, RC Williamson - Conference on Fairness, accountability and …, 2018