AI Heap

Cherry on the Cake: Fairness is NOT an Optimization Problem

arXiv:2406.16606
Authors
  • Marco Favier
  • Toon Calders
Affiliation
  • University of Antwerp
In the Fair AI literature, the practice of maliciously crafting unfair models that nevertheless satisfy fairness constraints is known as "cherry-picking". A cherry-picking model makes mistakes on purpose, selecting weaker individuals from a minority group instead of stronger candidates from the same group: it literally cherry-picks whom to select so as to superficially meet the fairness constraints while changing the unfair model as little as possible. This practice has been described as "blatantly unfair", and it harms already marginalized communities, undermining the very purpose of the fairness measures designed to protect them.

A common assumption is that cherry-picking arises only from malicious intent, and that models designed solely to optimize fairness metrics would avoid this behavior. We show that this is not the case: models optimized to minimize fairness metrics while maximizing performance are often forced to cherry-pick to some degree. In other words, cherry-picking can be an inevitable outcome of the optimization process itself.

To demonstrate this, we use tools from fair cake-cutting, a subfield of mathematics that studies how to fairly divide a resource, the "cake", among a number of participants. This concept connects to supervised multi-label classification: any dataset can be viewed as a cake to be distributed among the different labels, and the model is the function that divides the cake. We adapt these classical results to machine learning and show how this connection can be fruitfully applied to fairness and to classification in general.
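To make the cherry-picking mechanism concrete, here is a minimal toy sketch, not taken from the paper: the data, the top-k selection setting, and the demographic-parity criterion are all illustrative assumptions. It compares an unconstrained top-k selection with a "cherry-picked" selection that equalizes group selection rates by filling the minority quota with the lowest-scoring minority candidates, so the parity metric is satisfied while stronger minority candidates are deliberately passed over.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): `scores` is the model's estimate of candidate
# quality, `group` is a protected attribute (0 = majority, 1 = minority).
n = 200
group = rng.integers(0, 2, size=n)
scores = rng.uniform(size=n) + 0.1 * (group == 0)  # majority scores skew higher
k = 50  # number of candidates to select

def selection_rates(selected, group):
    # Demographic parity compares the selection rate of each group.
    return [float(selected[group == g].mean()) for g in (0, 1)]

# Unconstrained model: take the top-k scores; this typically violates parity.
top_k = np.zeros(n, dtype=bool)
top_k[np.argsort(scores)[-k:]] = True

# Cherry-picking "fair" model: meet the parity quota, but fill the minority
# slots with the *lowest*-scoring minority candidates. Selection rates are
# (approximately) equal, so the parity metric is satisfied, yet stronger
# minority candidates are deliberately passed over.
quota = int(round(k * (group == 1).mean()))  # minority slots under parity
maj_pick = np.argsort(np.where(group == 0, scores, -np.inf))[-(k - quota):]
min_pick = np.argsort(np.where(group == 1, scores, np.inf))[:quota]
cherry = np.zeros(n, dtype=bool)
cherry[maj_pick] = True
cherry[min_pick] = True

print("top-k selection rates: ", selection_rates(top_k, group))
print("cherry-picked rates:   ", selection_rates(cherry, group))
```

Both selections pass the same parity check; the abstract's point is that a model trained to jointly maximize performance and minimize such a metric can be pushed toward the second kind of selection even without a malicious designer.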
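The cake-cutting analogy can be sketched just as briefly. In this illustrative toy, again an assumption rather than the paper's construction, the "cake" is the unit interval discretized into crumbs, each label has a value density over it, and a classifier is a division of the cake: an assignment of every crumb to exactly one label. A proportional division gives each of the n labels a piece worth at least 1/n by its own measure.

```python
import numpy as np

# The "cake" is the unit interval, discretized into crumbs. Each label has a
# value density over the cake, analogous to how strongly each region of the
# data supports that label. (The densities here are hypothetical.)
crumbs = np.linspace(0.0, 1.0, 1000)
densities = np.stack([
    np.exp(-((crumbs - 0.3) ** 2) / 0.02),  # label 0 concentrates near 0.3
    np.exp(-((crumbs - 0.7) ** 2) / 0.02),  # label 1 concentrates near 0.7
])
densities /= densities.sum(axis=1, keepdims=True)  # each label values the whole cake at 1

# A classifier is a division of the cake: it assigns every crumb to one label.
assignment = np.argmax(densities, axis=0)

# In a proportional division, each of the 2 labels receives a piece worth
# at least 1/2 by its own measure.
for label in range(2):
    piece_value = densities[label, assignment == label].sum()
    print(f"label {label} values its piece at {piece_value:.3f}")
```

With these symmetric densities, the argmax classifier is a proportional division (each label values its own piece at well above 1/2); it is classical results about such divisions that the paper adapts to machine learning.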