Kleinberg, Ludwig, Mullainathan, and Sunstein note that regulators need access to an algorithm’s underlying records and data in order to scrutinize it: “at a minimum, these records and data should be stored for purposes of discovery.” In other words, they should be available to resolve legal questions, even if they’re not made public.
But for algorithms used in the public sector, transparency can go well beyond storing information for private inspection. The New York City Criminal Justice Agency, which administers the pretrial-release algorithm developed in part by the University of Chicago Crime Lab, maintains a website where the general public can see data about its performance, read about how it’s used in practice, and even use the tool themselves. The site also describes the agency’s plan for assessing and updating the algorithm over time.
Given that public-sector use of algorithms is an issue of not only technical and regulatory competence but also popular acceptance, this kind of visibility into the function, performance, and maintenance of algorithms could play a key role in making them palatable to a skeptical public. Another key could be greater public oversight of how algorithmic products are selected for use. Given these tools’ potentially high impact, susceptibility to negative unintended consequences, and variation in quality, a thorough and transparent process for deciding which algorithm to use, and how to use it, may be appropriate. “We should be wary about the government procuring algorithms the same way we procure phones for the police department,” Ludwig says. “Having a private company say, ‘We can’t tell you how the algorithm works—that’s our [intellectual property]’ is not an acceptable answer for algorithms.”
The 2016 joint statement issued by the ACLU and its 16 cosigners echoes this sentiment:
Vendors must provide transparency, and the police and other users of these systems must fully and publicly inform public officials, civil society, community stakeholders, and the broader public on each of these points. Vendors must be subject to in-depth, independent, and ongoing scrutiny of their techniques, goals, and performance. Today, instead, many departments are rolling out these tools with little if any public input, and often, little if any disclosure.
Ludwig says one solution may be to rely less on the private sector and more on in-house or nonprofit development of algorithms. Mullainathan agrees that there’s still too little oversight of how A.I.-driven tools, from facial recognition systems to pretrial-decision algorithms, are selected by public decision makers, and that there should be far greater transparency about the performance of algorithms purchased by police departments and other public agencies. “The biggest gains in public governance that we’ve had in any country come from transparency and accountability, and we simply do not have that” when it comes to public-sector use of A.I., Mullainathan says.
Algorithms aren’t everything
As machine learning and other algorithms become more pervasive, their presence in and influence on criminal justice will likely continue to grow. Of course, these tools are only one part of the increasingly complicated picture of law and order in the US. Embracing algorithms, or abolishing them, will not take the place of a broad and thoughtful reconsideration of how the police should function within a community, what sort of equipment and tactics they should use, and how they should be held accountable for their actions.
The question, then, is whether ML and other algorithms can be a constructive part of that future. If algorithms are to help improve American justice, the people adopting and using these tools must be fully aware of the potential dangers in order to avoid them.
Stanley concludes that while predictive policing and other examples of ML in criminal justice hold promise, it could take decades to work through the problems. Regulation is necessary, he says, but it is also a blunt tool, and legislators are rarely tech savvy.
He compares implementing ML to building the US transcontinental railroad system in the 19th century, which took many years and involved many train wrecks. “There’s no question there are ways that this could be socially useful and helpful, but it’s something that needs to be approached with great caution, great humility, and better transparency,” he says, adding that “a lot of the institutional patterns and incentives and cultures in law enforcement don’t lend themselves especially well to the kind of transparency that’s necessary. . . . It’s not foreordained that data and algorithms are going to bring some big social benefit compared to the nuts and bolts that need to be addressed to fix American policing.”
Kleinberg, Ludwig, Mullainathan, and Sunstein acknowledge that algorithms are fallible because the humans who build them are fallible. “The Achilles’ heel of all algorithms is the humans who build them and the choices they make about outcomes, candidate predictors for the algorithm to consider, and the training sample,” they write. “A critical element of regulating algorithms is regulating humans.”
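To make those human choices concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, with invented data and column names; it does not reflect any agency’s actual model) showing where each decision the authors flag—the outcome to predict, the candidate predictors, and the training sample—enters a typical modeling pipeline.

```python
# Hypothetical illustration of the three human choices the authors flag:
# the outcome label, the candidate predictors, and the training sample.
# Data and column names are invented for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented records: each row is a past case.
cases = pd.DataFrame({
    "age":              [23, 31, 45, 19, 52, 37, 28, 60],
    "prior_arrests":    [0, 2, 5, 1, 0, 3, 4, 1],
    "employed":         [1, 0, 1, 1, 0, 1, 0, 1],
    "failed_to_appear": [0, 1, 1, 0, 0, 1, 1, 0],
})

# Human choice 1: which outcome the algorithm is asked to predict.
outcome = "failed_to_appear"

# Human choice 2: which candidate predictors the algorithm may consider.
predictors = ["age", "prior_arrests", "employed"]

# Human choice 3: which historical cases make up the training sample.
train, test = train_test_split(cases, test_size=0.25, random_state=0)

model = LogisticRegression().fit(train[predictors], train[outcome])
print("Held-out accuracy:", model.score(test[predictors], test[outcome]))
```

Each of these lines encodes a judgment made by a person, not by the model—which is the sense in which regulating algorithms means regulating humans.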
Getting this regulation right could be the key to realizing the often striking performance benefits of algorithmic systems without aggravating existing inequalities—and perhaps even while reducing them. But it remains to be seen whether regulatory structures capable of meeting this goal will emerge. Such structures are, after all, maintained by humans.