Authors
Robert Williamson, Aditya Menon
Publication date
2019/5/24
Conference
International Conference on Machine Learning
Pages
6786-6797
Publisher
PMLR
Description
Ensuring that classifiers are non-discriminatory or fair with respect to a sensitive feature (e.g., race or gender) is a topical problem. Progress in this task requires fixing a definition of fairness, and there have been several proposals in this regard over the past few years. Several of these, however, either assume binary sensitive features (thus precluding categorical or real-valued sensitive groups) or result in non-convex objectives (thus adversely affecting the optimisation landscape). In this paper, we propose a new definition of fairness that generalises some existing proposals, while allowing for generic sensitive features and resulting in a convex objective. The key idea is to enforce that the expected losses (or risks) across each subgroup induced by the sensitive feature are commensurate. We show how this relates to the rich literature on risk measures from mathematical finance. As a special case, this leads to a new convex fairness-aware objective based on minimising the conditional value at risk (CVaR).
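
The CVaR construction described in the abstract can be made concrete with a short sketch. The NumPy snippet below is an illustration consistent with the abstract, not the paper's exact objective: the names cvar and cvar_fairness_objective, the level alpha, and the unweighted subgroup averages are all assumptions. It evaluates the conditional value at risk of the subgroup risks induced by a sensitive feature, using the standard Rockafellar-Uryasev representation CVaR_alpha(L) = min_rho { rho + E[(L - rho)_+] / (1 - alpha) }.

import numpy as np

def cvar(losses, alpha=0.9):
    # Empirical CVaR at level alpha via the Rockafellar-Uryasev form:
    #   CVaR_alpha(L) = min_rho rho + E[(L - rho)_+] / (1 - alpha).
    # An alpha-quantile (approximately, given interpolation on small
    # samples) minimises the inner problem, so we plug it in directly.
    rho = np.quantile(losses, alpha)
    return rho + np.mean(np.maximum(losses - rho, 0.0)) / (1.0 - alpha)

def cvar_fairness_objective(per_example_losses, groups, alpha=0.9):
    # Average the loss (risk) within each subgroup of the sensitive
    # feature, then take the CVaR of those subgroup risks, so the
    # worst-off tail of subgroups dominates the objective.
    subgroup_risks = np.array([per_example_losses[groups == g].mean()
                               for g in np.unique(groups)])
    return cvar(subgroup_risks, alpha)

# Illustrative usage with hypothetical losses and a categorical feature:
losses = np.array([0.2, 0.9, 0.4, 1.3, 0.1, 0.7])
groups = np.array(["a", "b", "a", "c", "b", "c"])
print(cvar_fairness_objective(losses, groups, alpha=0.5))

Because (L - rho)_+ is convex and a minimum over rho preserves convexity, such an objective remains convex whenever the per-example losses are convex in the model parameters, which is the property the abstract highlights.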
Total citations
141 (2019: 3, 2020: 19, 2021: 28, 2022: 40, 2023: 31, 2024: 20)
Scholar articles
Fairness risk measures. R Williamson, A Menon - International Conference on Machine Learning, 2019