It’s not just private companies that rely heavily on algorithms. Governments are using algorithms to make, or help make, decisions that people used to make, including decisions about you as an individual. Increasingly, an algorithm determines whether you do or don’t get a government benefit. In some places, an algorithm helps determine the length of a prison sentence. In others, an algorithm decides which high school you go to.
There is no question that government use of algorithms is going to expand dramatically in the next few years.
Government algorithms can be an effective, efficient use of your tax dollars, and they sometimes outperform human decision-makers. But not always.
As a task force, we’re concerned that a lack of transparency, accountability, and oversight in government use of algorithms can cause real harm. We’re especially concerned about government use of algorithms that reinforces, or even accelerates, existing patterns of discrimination. There are many ways this can happen.
Here are just a few examples and some explanations:
- Machine Bias | Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner | ProPublica
- An Algorithm That Grants Freedom, or Takes It Away | Cade Metz and Adam Satariano | The New York Times
- Criminalizing the Unemployed | Ryan Felton | Detroit Metro Times
- Automating NYC and (En)coding Inequality? | Aki Younge, Deepra Yusuf, Elyse Voegeli, and Jon Truong
- Why Algorithms Can Be Racist and Sexist | Rebecca Heilweil | Vox
- The Myth of the Impartial Machine | Alice Feng and Shuyan Wu | Parametric Press
In other words, flawed design, bad data, poor implementation, and other missteps can easily produce biased algorithms even when a government actor has good intentions. Digital discrimination is real and pervasive. But it can be difficult to identify—or fix.
As one of our community meeting attendees asked:
“[I]n a world where algorithms are on the rise, how will government leaders...ensure equity?”
The Pittsburgh Task Force on Public Algorithms hopes our work will provide some answers.