Delayed Impact of Fair Machine Learning
ICML 2018
Abstract
Fairness in machine learning has predominantly been studied in static
classification settings without concern for how decisions change the underlying
population over time. Conventional wisdom suggests that fairness criteria
promote the long-term well-being of those groups they aim to protect.
We study how static fairness criteria interact with temporal indicators of
well-being, such as long-term improvement, stagnation, and decline in a
variable of interest. We demonstrate that even in a one-step feedback model,
common fairness criteria in general do not promote improvement over time, and
may in fact cause harm in cases where an unconstrained objective would not.
We completely characterize the delayed impact of three standard criteria,
contrasting the regimes in which these criteria exhibit qualitatively different
behavior. In addition, we find that a natural form of measurement error
broadens the regime in which fairness criteria perform favorably.
Our results highlight the importance of measurement and temporal modeling in
the evaluation of fairness criteria, suggesting a range of new challenges and
trade-offs.
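
To make the one-step feedback setting concrete, the sketch below simulates a simplified lending version of it: a bank selects applicants from two groups by score threshold, and the delayed impact is the expected change in a group's mean score after repayment or default. All numeric inputs (score bins, group distributions, repayment curve, payoffs, and the helper names dist, stats, best_pair) are illustrative assumptions, not the paper's data or exact analysis; the printed numbers depend entirely on these assumed inputs.

```python
import numpy as np

# Illustrative one-step feedback model (assumed, simplified numbers throughout).
scores = np.arange(300, 851, 50).astype(float)           # discrete score bins
p_repay = np.clip((scores - 300.0) / 600.0, 0.05, 0.95)  # P(repay | score), assumed

def dist(center, spread):
    """Gaussian bump over score bins, normalized to a pmf (hypothetical)."""
    w = np.exp(-((scores - center) / spread) ** 2)
    return w / w.sum()

pi = {"A": dist(680.0, 120.0), "B": dist(520.0, 120.0)}  # B: disadvantaged group

U_PLUS, U_MINUS = 1.0, 4.0     # bank's gain on repayment / loss on default (assumed)
C_PLUS, C_MINUS = 75.0, 150.0  # score change on repayment / default (assumed)

bank_u = U_PLUS * p_repay - U_MINUS * (1.0 - p_repay)  # per-applicant bank utility
impact = C_PLUS * p_repay - C_MINUS * (1.0 - p_repay)  # per-applicant score change

def stats(g, t):
    """Selection rate, bank utility, TPR, and mean-score change for group g at threshold t."""
    sel = scores >= t
    beta = pi[g][sel].sum()
    util = (pi[g] * bank_u)[sel].sum()
    tpr = (pi[g] * p_repay)[sel].sum() / (pi[g] * p_repay).sum()
    dmu = (pi[g] * impact)[sel].sum()  # expected change in the group's mean score
    return beta, util, tpr, dmu

def best_pair(constraint=None, tol=0.05):
    """Utility-maximizing threshold pair, optionally under a fairness constraint."""
    best, best_u = None, -np.inf
    for ta in scores:
        ba, ua, ra, _ = stats("A", ta)
        for tb in scores:
            bb, ub, rb, _ = stats("B", tb)
            if constraint == "dem_parity" and abs(ba - bb) > tol:
                continue  # demographic parity: (approximately) equal selection rates
            if constraint == "eq_opp" and abs(ra - rb) > tol:
                continue  # equal opportunity: (approximately) equal true positive rates
            if ua + ub > best_u:
                best_u, best = ua + ub, (ta, tb)
    return best

for name, con in [("max utility", None),
                  ("demographic parity", "dem_parity"),
                  ("equal opportunity", "eq_opp")]:
    ta, tb = best_pair(con)
    dmu_b = stats("B", tb)[3]
    print(f"{name:>20}: thresholds (A={ta:.0f}, B={tb:.0f}), "
          f"delayed impact for group B = {dmu_b:+.1f}")
```

Comparing the delayed impact of group B across the three policies shows the abstract's point in miniature: depending on the assumed distributions and payoffs, a fairness-constrained policy can leave the group's mean score lower than the unconstrained one would.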