So, Now What?

The Pittsburgh Task Force on Public Algorithms is focused on developing recommendations for effective public oversight. This includes listening to our communities and learning what you think algorithms should—and shouldn’t—be used for, and what meaningful transparency and accountability from our local governments could look like.

You don’t need a technical background to weigh in on what algorithms in our region should look like. In fact, we think that public deliberation and engagement are crucial in determining whether an algorithm is an appropriate tool in a particular context and whether the right conditions for its use have been met. You are an expert on your own needs and those of your family and your community. The Task Force is working to build a system for our region that will center the public in the development and approval of government algorithms.

Let’s walk through an example: imagine a city or a large school district using an algorithm to place students in its high schools. (To be clear, this is not a real example from Pittsburgh or Allegheny County.) Here are some questions you might want answered:

What is the system’s goal? It might be to minimize commuting time; it might aim for racial or economic diversity within each school; it might seek a roughly even gender split; or it might balance schools across students with varying achievement records. The possibilities are endless.
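To make that concrete, here is a minimal, purely hypothetical sketch in Python. Every name and number below is invented for illustration and is not drawn from any real district’s system. It shows one common pattern: each competing goal becomes a weighted term in a single score, and choosing the weights is a value judgment, not a technical inevitability.

```python
from dataclasses import dataclass


@dataclass
class Plan:
    """One candidate assignment of students to schools (invented metrics)."""
    avg_commute_minutes: float  # lower is better
    diversity_score: float      # 0 to 1; higher means more demographic balance
    achievement_balance: float  # 0 to 1; higher means a more even mix of records


def score(plan: Plan, weights: dict) -> float:
    """Collapse competing goals into one number. The weights ARE the policy."""
    return (
        -weights["commute"] * plan.avg_commute_minutes
        + weights["diversity"] * plan.diversity_score
        + weights["achievement"] * plan.achievement_balance
    )


short_commutes = Plan(avg_commute_minutes=18.0, diversity_score=0.45, achievement_balance=0.50)
more_diverse = Plan(avg_commute_minutes=31.0, diversity_score=0.80, achievement_balance=0.65)

# Two communities with different priorities rank the same two plans differently.
commute_first = {"commute": 1.0, "diversity": 10.0, "achievement": 5.0}
diversity_first = {"commute": 0.1, "diversity": 50.0, "achievement": 5.0}

for weights in (commute_first, diversity_first):
    best = max((short_commutes, more_diverse), key=lambda p: score(p, weights))
    print(best)
```

Run it, and the two sets of weights pick opposite winners from the very same plans, which is exactly why the choice of weights belongs in public view rather than buried in code.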

Maybe an algorithm is the fairest and most efficient way to achieve the policy goal. Or maybe it isn’t actually an improvement over the current system. Or maybe the risks of bias from an algorithm are too great to overcome.

Even with publicly agreed-upon goals for a government algorithm, there are still questions about the development of the system: Will an agency develop it on its own? Will it procure the system from a private vendor? Does the development team include diverse viewpoints? Is it seeking community input?

Data, in particular, can be plagued by issues such as non-representativeness, bias, sampling errors, and more. In fraught areas like education, which have legacies of inequity, these problems are especially prevalent.
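For instance, here is one simple way a reviewer might surface non-representativeness, sketched in Python with made-up numbers (no real data is referenced): compare each group’s share of the training data to its share of the population.

```python
# A toy illustration of one problem named above: non-representativeness.
# All figures are invented. If a group is under-sampled in the data an
# algorithm learns from, the system starts out skewed before it ever runs.
population_share = {"group_a": 0.30, "group_b": 0.70}  # e.g., census figures
training_share = {"group_a": 0.12, "group_b": 0.88}    # what the dataset holds

THRESHOLD = 0.05  # an arbitrary tolerance chosen for this sketch
for group, expected in population_share.items():
    actual = training_share[group]
    if abs(actual - expected) > THRESHOLD:
        print(f"{group} is {actual:.0%} of the data but {expected:.0%} of the population")
```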

Regardless of the answers here, the public and outside experts need access to the details of the system—its data, its training sets, and the like—to scrutinize it effectively. And this isn’t a one-time exercise; evaluation should continue throughout the system’s life.

The deployment of the system marks another moment for heightened public scrutiny. People should know if—and how—they can challenge an algorithmic system’s decisions. And communities should be able to learn about systems’ effectiveness and impact. As one of our community meeting attendees said, “we need transparency in use and in creation.”

These thorny questions—which can have a profound impact on people’s lives—require decisions by humans. An algorithm cannot make these value judgments on its own, and the humans who make such decisions should not point to the veneer of technical neutrality offered by an algorithm to gloss over the import of their actions.

Human judgments are encoded at nearly every stage of developing an algorithm. Weighing and resolving these competing values is precisely the kind of government action that demands public direction—and you should not need a background in artificial intelligence to make your voice heard in such a debate. With a framework in place that fosters meaningful public direction over government algorithms, as in the hypothetical high school example above, our region could have more democratic ownership and control over these systems and how they affect our lives.