As data science has developed in recent decades, algorithms have come to play a role in assisting decision-making in a wide variety of contexts, making predictions that in some cases have enormous human consequences. Algorithms may help decide who is admitted to an elite school, approved for a mortgage, or allowed to await trial from home rather than behind bars.

But there are well-publicized concerns that algorithms may perpetuate or systematize biases. And research by University of California at Berkeley’s Ziad Obermeyer, Brian Powers of Boston’s Brigham and Women’s Hospital, Christine Vogeli of Partners HealthCare, and Chicago Booth’s Sendhil Mullainathan finds that one algorithm, used to make an important health-care determination for millions of patients in the United States, produces racially biased results.

The algorithm in question is used to help identify candidates for enrollment in “high-risk care management” programs, which provide additional resources and attention to patients with complex health needs. Such programs, which can improve patient outcomes and reduce costs, are employed by many large US health systems, and therefore the decision of whom to enroll affects tens of millions of people. The algorithm assigns each patient a risk score that is used to guide enrollment decisions: a patient whose score is at or above the 97th percentile is automatically identified for enrollment, while one whose score falls between the 55th and 96th percentiles is flagged for possible enrollment, depending on input from the patient’s doctor.
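
In code, that cutoff logic amounts to a pair of percentile thresholds. Here is a minimal sketch in Python; the function name, labels, and rank-based percentile computation are illustrative assumptions, not the vendor’s actual implementation.

```python
import numpy as np

def flag_for_enrollment(risk_scores):
    """Illustrative sketch of the percentile cutoffs described above.

    Names and labels are hypothetical, not the vendor's interface.
    Scores at or above the 97th percentile are automatically
    identified; scores in the 55th-96th percentiles are referred
    to the patient's doctor.
    """
    ranks = np.argsort(np.argsort(risk_scores))      # rank of each score, 0..n-1
    pct = 100.0 * ranks / (len(risk_scores) - 1)     # convert ranks to percentiles
    return np.where(pct >= 97, "auto-identify",
           np.where(pct >= 55, "refer-to-doctor", "no-action"))
```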

Obermeyer, Powers, Vogeli, and Mullainathan find that black patients are on average far less healthy than white patients assigned the same score. For instance, among patients with risk scores in the 97th percentile of the researchers’ sample, black patients had on average 26 percent more chronic illnesses than white patients did. The result of this bias: black patients were significantly less likely to be identified for program enrollment than they would have been under an unbiased score. With the bias, 17.7 percent of patients automatically identified for enrollment were black; without it, the researchers calculate, 46.5 percent would have been.
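
An audit of this kind can be run by conditioning on the score: bin patients by risk-score percentile and compare average illness burden across races within each bin. The Python sketch below assumes a hypothetical data frame with risk_score, race, and n_chronic_conditions columns and race values "black" and "white"; it illustrates the comparison, not the researchers’ actual code.

```python
import pandas as pd

def audit_health_at_equal_score(df, n_bins=20):
    """Within each risk-score bin, compare the mean number of chronic
    conditions for black vs. white patients. A persistent positive gap
    means black patients are sicker than white patients who received
    the same score -- the signature of the bias described above.
    Column and category names are assumptions for illustration."""
    binned = df.assign(score_bin=pd.qcut(df["risk_score"], n_bins, labels=False))
    means = (binned.groupby(["score_bin", "race"])["n_chronic_conditions"]
                   .mean()
                   .unstack("race"))
    means["gap"] = means["black"] - means["white"]
    return means
```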

The bias stems from what the algorithm is being asked to predict. Health is a concept that can’t be narrowly defined or encapsulated in a single metric, and this makes it difficult to observe directly in data. The algorithm uses health-care costs as a proxy for health needs, and risk scores reflect the algorithm’s prediction of who will have the highest future health-care costs. On this dimension, the researchers say, the algorithm is well calibrated across races: for any given risk score, the future health-care costs of black patients were similar to those of white patients.
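
Calibration on the cost target can be checked the same way: within each score bin, realized future costs should be roughly equal across races. A minimal sketch, again assuming hypothetical column names:

```python
import pandas as pd

def check_cost_calibration(df, n_bins=20):
    """Mean realized future cost per risk-score bin, split by race.
    If the algorithm is well calibrated on its cost target, the race
    columns should track each other closely -- which is what the
    researchers found, even as health outcomes diverged.
    Column names ('risk_score', 'race', 'future_cost') are assumed."""
    binned = df.assign(score_bin=pd.qcut(df["risk_score"], n_bins, labels=False))
    return (binned.groupby(["score_bin", "race"])["future_cost"]
                  .mean()
                  .unstack("race"))
```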

The trouble is that black patients typically generate lower costs than white patients with similar health profiles, because black patients are less likely to receive treatment. “Whether it is communication, trust, or bias, something about the interactions of black patients with the health-care system itself leads to reduced use of health care,” the researchers write. A black patient who generates $5,000 in health-care costs is, on average, sicker than a white patient who generates the same costs.

The algorithm in question is not the only factor used to guide enrollment for care-management programs. However, its focus on cost as the outcome of interest is typical of other algorithms used in the same way, the researchers write. What’s more, they deem the choice to focus on cost reasonable on some level, and not just because most health-care systems want to minimize expenses: health-care costs and health needs do have a strong positive correlation. Sicker patients tend to spend more on health care. “The mechanism of bias is particularly pernicious because it can arise from reasonable choices” on the part of the algorithm’s designers, the researchers write.

Obermeyer, Powers, Vogeli, and Mullainathan report that they took their findings to the algorithm’s manufacturer, which confirmed the finding of bias using a national data set of more than 3.5 million patients. The manufacturer then worked with the researchers to identify variables that could stand in for cost as a proxy for health needs; using a variable that combined cost predictions with health predictions (about the number of active chronic conditions patients would need to manage), they were able to create an algorithm that reduced bias by 86 percent, according to one measure.
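
The mechanics of such a fix can be sketched as a change of training label: instead of predicting cost alone, predict a blend of cost and a direct health measure. The weighting, model choice, and standardization below are illustrative assumptions; the article does not specify how the manufacturer combined the two predictions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_blended_risk_model(X, future_cost, n_active_chronic, weight=0.5):
    """Fit a risk model against a label that mixes future cost with the
    number of active chronic conditions. Standardizing each target
    keeps the blend scale-free; `weight` controls how much the label
    leans on cost vs. direct health need. All of this is a hedged
    sketch, not the manufacturer's actual method."""
    z = lambda v: (np.asarray(v, dtype=float) - np.mean(v)) / np.std(v)
    label = weight * z(future_cost) + (1 - weight) * z(n_active_chronic)
    return Ridge().fit(X, label)
```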
