As life grows increasingly digital, Americans rely on algorithms in their daily decision-making: to pick their music on Pandora, guide their program choices on Netflix, and even suggest people to date on Tinder. But these computerized step-by-step processes also determine how self-driving cars navigate, who qualifies for an insurance policy or a mortgage, and when, and even whether, someone will get parole. So where should we draw the line on automated decision-making?
For most people, it’s when the choices turn “morally relevant,” according to Chicago Booth’s Berkeley J. Dietvorst and Daniel Bartels. Decisions that “entail potential harm and/or the limitation of one or more persons’ resources, freedoms, or rights” often result in the need to make trade-offs, something that we think humans, not algorithms, are better equipped to do, their findings suggest.
Most people expect algorithms to make recommendations on the basis of maximizing some specific outcome, and many people are fine with that in amoral domains, according to the researchers. For example, more than 1 billion people trust Google Maps to get them where they’re going. But when the issue of moral relevance intrudes, people start to object, the researchers’ experiments demonstrate.
“I suspect people feel they don’t want to toss out human discretion in matters of right and wrong, in part because they likely feel they ‘know it when they see it,’” Bartels says. Moral questions are complicated, he says, especially because one person’s morals won’t always align with another’s.
To test this theory, Dietvorst and Bartels gave 700 study participants the scenario of a health-insurance company considering using an algorithm to make certain decisions about customers’ plans. After reading about which choices the company would outsource to an algorithm, participants were asked if they, as customers of the company, would switch to a different insurer because of the algorithm. The researchers find a strong correlation between how morally relevant participants found the choices to be and their intention to switch insurance companies.
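As a rough illustration of that kind of analysis, a Pearson correlation between moral-relevance ratings and switching intention might be computed as below. This is only a sketch: the 7-point ratings are invented for illustration and are not the authors' data or code.

```python
# Minimal sketch of correlating moral-relevance ratings with switching
# intention. All numbers are made up for illustration; each pair stands for
# one hypothetical participant's two 7-point ratings.
from scipy.stats import pearsonr

moral_relevance = [2, 3, 5, 6, 7, 1, 4, 6, 5, 7]  # hypothetical ratings
switch_intent = [1, 2, 4, 6, 7, 1, 3, 5, 5, 6]    # hypothetical ratings

r, p = pearsonr(moral_relevance, switch_intent)
print(f"correlation r = {r:.2f}, p = {p:.3f}")
```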
In another study, in which participants chose whether they'd want a human or an algorithm to make various decisions, the researchers find that people dislike algorithms in moral situations because algorithms focus only on outcomes, not on the moral trade-offs involved. People expect human decision makers to take into consideration values such as fairness and honesty and to make a judgment for each specific case.
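To make that contrast concrete, here is a hedged sketch of the two decision strategies the participants seem to be distinguishing. The coverage scenario, the net-benefit scores, and the fairness flag are all invented for illustration; the paper describes the strategies conceptually, not as code.

```python
# Hypothetical contrast between an outcome-maximizing (consequentialist)
# decision rule and one that applies case-by-case moral constraints first.
# Nothing here comes from the paper; names and numbers are invented.

cases = [
    {"patient": "A", "net_benefit": 9, "denies_lifesaving_drug": False},
    {"patient": "B", "net_benefit": 12, "denies_lifesaving_drug": True},
]

def consequentialist_choice(cases):
    """Pick whatever maximizes the measured outcome, which is how
    people expect an algorithm to decide."""
    return max(cases, key=lambda c: c["net_benefit"])

def tradeoff_sensitive_choice(cases):
    """Apply a case-by-case moral constraint first, then optimize,
    which is closer to what people expect of a human decision maker."""
    permissible = [c for c in cases if not c["denies_lifesaving_drug"]]
    return max(permissible or cases, key=lambda c: c["net_benefit"])

print(consequentialist_choice(cases)["patient"])    # B: best number wins
print(tradeoff_sensitive_choice(cases)["patient"])  # A: moral rule prevails
```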
Study participants preferred humans over algorithms for decisions they felt were morally relevant.
When weighing those trade-offs takes a back seat to achieving an optimal outcome, people become wary of the decision maker, be it human or machine, according to Dietvorst and Bartels. In another experiment involving health-insurance choices, participants disliked it when human decision makers chose which medications to cover purely by maximizing benefits against costs, although they still trusted humans over algorithms to make the choices.
As algorithm-based decision-making becomes more widespread in businesses and other organizations, as well as in our daily lives, it seems unlikely that people will fully trust algorithms to make decisions that require moral trade-offs anytime soon, the researchers conclude.
“People have a ton of moral rules that often conflict,” Dietvorst says. “They have complicated prioritizations, where sometimes this is the most important rule, but in other contexts, that’s the most important rule.”
For now, Dietvorst advises anyone considering using algorithms to make a set of choices to weigh the moral relevance involved. Navigating to a destination usually involves few moral implications, but prioritizing medical patients for treatment involves many, and plenty of decisions, such as picking a restaurant or a show to watch, occupy a gray area. If a decision has a clear moral impact, an organization should consider giving a human the final say. That way, the organization can still capitalize on algorithms' ability to make better predictions than humans often do, while the final decision will feel more acceptable.
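One way an organization might operationalize that advice is a simple gate: the algorithm recommends everywhere, but decisions scored above some moral-relevance threshold are routed to a person for the final call. The sketch below assumes invented moral-relevance scores and a placeholder 0.5 threshold; none of this comes from the research itself.

```python
# Hypothetical human-in-the-loop gate: the algorithm always recommends,
# but morally weighty decisions get a human's final say. The scores and
# the 0.5 threshold are invented placeholders.
from typing import Callable

MORAL_RELEVANCE = {        # 0 = amoral, 1 = highly morally relevant
    "route_navigation": 0.05,
    "restaurant_pick": 0.30,
    "treatment_triage": 0.95,
}
THRESHOLD = 0.5

def decide(task: str, algorithm_recommendation: str,
           human_review: Callable[[str], str]) -> str:
    """Return the final decision, escalating morally relevant tasks."""
    if MORAL_RELEVANCE.get(task, 1.0) >= THRESHOLD:  # unknown tasks escalate
        return human_review(algorithm_recommendation)
    return algorithm_recommendation

# Example: a human reviewer who may accept or override the suggestion;
# here the human simply accepts it.
final = decide("treatment_triage", "treat patient B first",
               human_review=lambda rec: rec)
print(final)
```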
Berkeley J. Dietvorst and Daniel Bartels, “Consumers Object to Algorithms Making Morally Relevant Tradeoffs Because of Algorithms’ Consequentialist Decision Strategies,” Journal of Consumer Psychology, July 2021.