Late last year, the Justice Department joined the growing list of agencies to discover that algorithms don’t heed good intentions. An algorithm known as Pattern placed tens of thousands of federal prisoners into risk categories that could make them eligible for early release. The rest is sadly predictable: Like so many other computerised gatekeepers making life-altering calls about presentencing, resume screening and even healthcare needs, Pattern seems to be unfair, in this case to Black, Asian and Latino inmates.
A common explanation for these misfires is that humans, not equations, are the root of the problem. Algorithms mimic the data they are given, and if that data reflects humanity’s sexism, racism and oppressive tendencies, those biases will be baked into the algorithm’s predictions.
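To see the mechanism in miniature, consider the following Python sketch. It has nothing to do with Pattern’s actual inputs or formula; the data are synthetic, every number is invented, and the model is an off-the-shelf scikit-learn logistic regression. The point is only this: train a model on labels shaped by biased enforcement, and it reproduces the disparity even when it is never shown anyone’s group at all.

```python
# A toy illustration, NOT Pattern: synthetic data in which the historical
# labels are biased against one group. All quantities here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)        # two groups, 0 and 1
risk = rng.normal(0.0, 1.0, n)       # latent "true risk": identical across groups

# A proxy feature (think prior arrests) inflated for group 1 by biased
# historical enforcement, not by higher underlying risk.
proxy = risk + 0.8 * group + rng.normal(0.0, 1.0, n)

# The historical labels carry the same skew: group 1 was flagged more
# often at the same underlying risk.
label = (risk + 0.8 * group + rng.normal(0.0, 1.0, n) > 0.5).astype(int)

# Train with the group variable withheld entirely ("fairness through
# unawareness"); the model sees only the proxy feature.
X = proxy.reshape(-1, 1)
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: flagged {pred[group == g].mean():.1%} of the time")
```

Because both the proxy and the labels encode the old bias, the model flags group 1 markedly more often despite identical true risk, and despite never seeing the group variable. Dropping the sensitive attribute does not drop the history.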