Narrator: In the US, recidivism remains a formidable challenge, with over 70 percent of formerly incarcerated individuals re-offending within five years of their release. This cycle of crime and punishment strains both communities and the criminal justice system. Incarceration-diversion programs such as Adult Redeploy Illinois, or ARI, offer an alternative path focused on rehabilitation through treatment, therapy, and community supervision rather than imprisonment. But these programs face a dilemma: With limited capacity, how should decision-makers decide who gets in?
Zhiqiang Zhang: Today, many admission decisions for diversion programs are supported by machine-learning algorithms. These tools are designed to predict outcomes such as whether an applicant is likely to re-offend or to successfully complete the program and rehabilitate. However, no algorithm is perfect. There’s always some risk of error. This means that we cannot fully rely on the algorithm for perfect insight into who will benefit most from the program, which makes it challenging to decide whom to prioritize for admission.
Narrator: That’s Chicago Booth PhD candidate Zhiqiang Zhang. He and his coauthors identified which decisions the algorithm can make with confidence, and which it is uncertain about. Admission decisions for diversion programs must weigh immediate public safety against long-term reductions in recidivism. This means choosing between low-risk individuals who pose less short-term danger and high-risk individuals who might benefit more from rehabilitation.
Zhiqiang Zhang: Let me give an example of a case where the trade-off is straightforward. Imagine someone who has stable housing, steady employment, and supportive family members. A machine-learning algorithm might predict with high accuracy that this person is unlikely to re-offend during the program. At the same time, suppose this individual struggles with a substance-use disorder. A treatment-based diversion program could directly address this issue, leading to potential long-term benefits. So this person represents both low risk and high potential benefit. In such cases, we can place greater trust in the algorithm’s recommendation. However, most decisions are not this clear-cut. For example, someone assessed as higher risk may benefit more significantly from the program, especially if their challenges are treatable and their environment can be stabilized. In these more uncertain situations, we want to apply additional interventions, such as collecting more data or involving human officers in reviewing the case, to ensure these decisions are thoughtful and well-supported.
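The kind of triage rule described here can be sketched in a few lines of code. This is a minimal illustration, not the researchers' actual model: all function names, thresholds, and scores below are hypothetical, and a real system would estimate risk, benefit, and confidence from data rather than take them as inputs.

```python
# Hypothetical triage rule: trust the algorithm only on clear-cut cases.
# All thresholds and names are illustrative, not from the study.

def triage(predicted_risk, predicted_benefit, confidence,
           risk_cutoff=0.3, benefit_cutoff=0.6, confidence_cutoff=0.8):
    """Return an action for one applicant.

    predicted_risk    -- estimated probability of re-offending (0 to 1)
    predicted_benefit -- estimated gain from treatment (0 to 1)
    confidence        -- how certain the model is about its estimates (0 to 1)
    """
    if confidence < confidence_cutoff:
        # Estimates are too uncertain: gather more data or involve an officer.
        return "human_review"
    if predicted_risk <= risk_cutoff and predicted_benefit >= benefit_cutoff:
        # Clear-cut case: low risk and high expected benefit.
        return "admit"
    if predicted_risk > risk_cutoff and predicted_benefit < benefit_cutoff:
        # Clear-cut in the other direction.
        return "decline"
    # Risk and benefit point in opposite directions: a genuine trade-off.
    return "human_review"

# The stable-housing example from the interview: low risk, high benefit,
# confident model -- the algorithm's recommendation can be trusted.
print(triage(0.15, 0.75, 0.9))  # admit
```

Note that the rule never lets the algorithm decide a case where risk and benefit pull in opposite directions, which matches the trade-off described above.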
Narrator: The researchers emphasize that their goal is not to replace human decision-making, but to enable effective collaboration between humans and algorithms. While combining machine insights with human judgment can improve outcomes, they note that more data and oversight don’t always lead to better decisions. Data is only available for admitted individuals, creating selection bias. And limited human capacity means only some cases can be reviewed in depth. Improving decision quality requires targeted use of both resources and interventions.
Zhiqiang Zhang: Our research suggests that the most effective way to make admission decisions for diversion programs is to combine a machine-learning algorithm with human judgment. Algorithmic decisions can save time and reduce the burden on human officers, but only a portion of the algorithm’s recommendations can be trusted with high confidence. For more complex and uncertain cases, we need a human officer to step in. Their expertise and ability to gather and interpret additional information can potentially lead to better-informed decisions. By devoting sufficient human effort to reviewing the cases that need it most, we can improve overall decision quality while still benefiting from the efficiency of automation.
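The allocation problem described here — limited officer capacity, so only some cases get in-depth review — can be sketched as a simple ranking: send the most uncertain cases to officers and automate the rest. This is a hypothetical illustration under assumed inputs, not the study's method; the case IDs, uncertainty scores, and capacity below are made up.

```python
# Hypothetical allocation of a limited human-review budget:
# route the most uncertain cases to officers, automate the rest.
# All data and the budget size are illustrative, not from the study.

def allocate_reviews(cases, review_capacity):
    """cases: list of (case_id, model_decision, uncertainty) tuples.

    Returns (reviewed_ids, automated), where reviewed_ids are the cases
    sent to officers and automated maps case_id -> algorithm's decision.
    """
    ranked = sorted(cases, key=lambda c: c[2], reverse=True)  # most uncertain first
    to_review = ranked[:review_capacity]
    automated = {cid: decision for cid, decision, _ in ranked[review_capacity:]}
    return [cid for cid, _, _ in to_review], automated

cases = [("A", "admit", 0.05), ("B", "decline", 0.40),
         ("C", "admit", 0.25), ("D", "admit", 0.02)]
reviewed, automated = allocate_reviews(cases, review_capacity=2)
print(reviewed)   # ['B', 'C'] -- officers see the two most uncertain cases
print(automated)  # {'A': 'admit', 'D': 'admit'} -- clear cases are automated
```

The design choice matches the interview: human attention is spent where the algorithm is least certain, while confident recommendations go through automatically.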