Machine learning can improve police departments' early-intervention systems.
- March 01, 2021
Early-intervention systems, designed to identify officers likely to use excessive violence or engage in other behaviors harmful to others or themselves, have been popular with police departments across the country for decades. Both academic researchers and private-sector businesses have begun exploring machine learning’s potential to perform this task.
In 2017, a research team led by the Center for Data Science and Public Policy—formerly housed at the University of Chicago, now at Carnegie Mellon University—published the results of a collaboration with the Charlotte-Mecklenburg Police Department in North Carolina, in which it used an ML system to identify at-risk officers. The department’s existing early-intervention system was triggered when “behavioral thresholds” were met: if officers were involved in complaints or use-of-force events a certain number of times within a given time period, they would be flagged for possible intervention. The thresholds were chosen on the basis of expert intuition, and the decision to intervene was subject to supervisor approval.
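For intuition, a trigger of this kind can be written as a simple counting rule. The sketch below is illustrative only: the event categories, counts, and 12-month window are assumptions, not the department's actual thresholds.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds and window: these categories and counts are
# illustrative stand-ins, not the department's actual policy.
THRESHOLDS = {"citizen_complaint": 3, "use_of_force": 5}
WINDOW = timedelta(days=365)

def flag_officer(events, now=None):
    """Return True if any event category meets its threshold within the
    trailing window. `events` is a list of (timestamp, category) tuples."""
    now = now or datetime.now()
    counts = {}
    for timestamp, category in events:
        if now - timestamp <= WINDOW:
            counts[category] = counts.get(category, 0) + 1
    return any(counts.get(cat, 0) >= limit for cat, limit in THRESHOLDS.items())
```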
The researchers’ ML-based approach factored in events related to officer behavior—the discharge of a firearm, vehicle accidents, citizen complaints—but also drew upon the department’s millions of records related to its officers’ training, the traffic stops and arrests they made, the citations they issued, and their secondary employment. It factored in details about the neighborhoods in which the officers worked, as well as incidents that may have been particularly stressful, such as those involving young children or gang violence.
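In code, the shift is roughly from the hand-set counting rule above to a supervised model trained on many features at once and used to rank officers by risk. The sketch below fits a generic gradient-boosted classifier from scikit-learn to a made-up feature table; the column names are stand-ins for the kinds of records described here, not the study's actual features or model.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-officer features; column names are illustrative stand-ins
# for the record types described in the article, not the study's features.
officers = pd.DataFrame({
    "complaints_past_year":          [0, 3, 1, 5, 2, 0],
    "use_of_force_past_year":        [1, 4, 0, 6, 2, 1],
    "traffic_stops_past_year":       [120, 340, 80, 410, 150, 90],
    "hours_secondary_employment":    [0, 200, 50, 300, 120, 10],
    "stressful_incidents_past_year": [2, 9, 1, 12, 4, 1],
})
# Label: whether an adverse event followed in the subsequent year (made up).
had_adverse_event = pd.Series([0, 1, 0, 1, 0, 0])

# Fit a generic classifier and rank officers by predicted risk rather than
# applying a single hard count threshold.
model = GradientBoostingClassifier().fit(officers, had_adverse_event)
risk_scores = model.predict_proba(officers)[:, 1]
print(officers.assign(risk_score=risk_scores).sort_values("risk_score", ascending=False))
```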
Using historical data to analyze the results of both the existing approach and the ML model, the researchers find that the department’s existing system was only slightly better than chance at distinguishing high-risk officers from low-risk ones, but that the ML system could improve the true-positive rate (the rate at which it correctly identifies high-risk officers) by 75 percent while cutting the false-positive rate (the rate at which it incorrectly flags low-risk officers) by 22 percent.
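As a reminder of how those two rates are computed: a true positive is a genuinely high-risk officer the system flags, and a false positive is a low-risk officer flagged anyway. The numbers below are invented for illustration, not the study's data.

```python
def true_positive_rate(flagged, high_risk):
    """Share of truly high-risk officers that the system flags."""
    positives = [f for f, h in zip(flagged, high_risk) if h]
    return sum(positives) / len(positives)

def false_positive_rate(flagged, high_risk):
    """Share of low-risk officers that the system flags anyway."""
    negatives = [f for f, h in zip(flagged, high_risk) if not h]
    return sum(negatives) / len(negatives)

# Invented illustration: 1 = flagged / high-risk, 0 = not.
flagged   = [1, 0, 1, 1, 0, 0, 1, 0]
high_risk = [1, 1, 1, 0, 0, 0, 0, 0]
print(true_positive_rate(flagged, high_risk))   # ~0.67: two of three high-risk officers caught
print(false_positive_rate(flagged, high_risk))  # 0.40: two of five low-risk officers flagged
```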
The University of Chicago has licensed the technology developed in the study of the Charlotte-Mecklenburg Police Department to Benchmark Analytics, a data-science company of which the university is part owner, and which specializes in helping police forces collect and analyze data about their officers. Benchmark’s early-intervention system, First Sign, is being used or tested in cities including Albuquerque, New Mexico; Dallas, Texas; Nashville, Tennessee; and San Jose, California.
In September 2020, the City of Chicago launched its own early-intervention system, this one created in partnership with the police department and the University of Chicago Crime Lab. Like the Charlotte-Mecklenburg system, the Officer Support System also leverages ML to help flag high-risk officers—with the goal of then connecting them with resources and support to prevent future problems. The system is being implemented gradually across the city.
This is an area of focus where ML proponents and critics appear to agree. “Police could use predictive tools to anticipate which officers might engage in misconduct, but most departments have not done so,” write the ACLU and 16 other organizations in a 2016 joint statement.
The statement goes on to argue that early experiences from Chicago and elsewhere show that police misconduct follows consistent patterns, and that offering further training and support to officers who are at risk can help to avert problems. Police, the groups contend, should be at least as eager to pilot new, data-driven approaches in the search for misconduct as they are in the search for crime, particularly given that interventions designed to reduce the chances of misconduct do not themselves pose a risk to life and limb.
According to Jay Stanley, a senior policy analyst at the ACLU, the fact that some departments are experimenting with using algorithms to flag potential trouble spots is an encouraging development. He says that because of management practices, union contracts, and other factors, there haven’t been robust systems in place to identify problem officers. He cautions, however, that if algorithms are involved, “all the same fairness issues” the organization has raised about algorithms in predictive policing still apply, this time for the protection of the officers themselves. “Decisions should not be made [entirely] algorithmically but subject to human review.”
Jennifer Helsby, Samuel Carton, Kenneth Joseph, Ayesha Mahmud, Youngsoo Park, Andrea Navarrete, Klaus Ackermann, Joe Walsh, Lauren Haynes, Crystal Cody, Major Estella Patterson, and Rayid Ghani, "Early Intervention Systems: Predicting Adverse Interactions between Police and the Public," Criminal Justice Policy Review, March 2017.