What are artificial intelligence’s biggest advantages over human intelligence in terms of promoting equitable decisions?

There’s a lot of press around algorithms being promoters of inequity or of bias. But we know from the behavioral-science literature that human beings are quite biased. We don’t just look at objective data; we also add our own internal biases. Study after study has demonstrated that when viewing a man and a woman doing a task at the same level of performance, people will make inferences about the woman they don’t make about the man. The mind just adds its own bias. The algorithms, while they may have other problems, tend not to add their own biases. They tend to reflect whatever is in the data.

What are the settings in which you think AI is most poised to improve equity?

The places where people are most worried about bias are actually where algorithms have the greatest potential to reduce bias. Take hiring—an issue where we’re worried that the underlying data may be biased, so the algorithm may be biased. And that’s fair. But hiring is also the place where humans add a tremendous amount of bias in terms of which résumés to look at, which person to hire conditional on the résumé, etc. And that bias is in addition to whatever is in the data.

The same is true of a setting such as criminal justice. It's reasonable that people worry algorithms in the criminal-justice system might add bias, and we should take that seriously and find ways to deal with it. But ironically, it's in the places where we worry algorithms will be biased that they have the most potential to remove a lot of the biases of humans.

Given humans’ role in the design and implementation of AI, do you think it’s likely to be an equity-promoting technology?

People want to anthropomorphize technologies—especially AI technologies, I think in part because the term includes the word intelligence. People are going to imagine these tools will have their own intelligence, or humanity almost. But ultimately, they’re just tools. So whether in any given context AI promotes equity is simply going to be a consequence of the intentions of the people building these algorithms as well as their knowledge. The science is moving forward, and that means we can make the builders of these tools knowledgeable enough about bias and how to fix it, so that in 10 years what we’ll be left with is intention. It’s not going to be a technological problem; it’s going to be a sociological problem.

Sendhil Mullainathan is the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth.

