Automated systems can help us decide what products to purchase, which mates are most suitable, and what the fastest route is from A to B. The algorithms behind these systems outperform human judgment in most forecasting domains—but people aren’t always willing to use them, and Chicago Booth’s Berkeley J. Dietvorst suggests a reason for this. His research finds that people essentially hold algorithms to a higher standard than they do humans, expecting algorithms to meet performance goals that humans themselves may not reach.

In a series of studies, Dietvorst had people complete a forecasting task with a monetary reward at stake. In one study, he had people predict how well high-school students would do on a math test. Participants had the option of relying on their own estimates or using those produced by a statistical model built using data from thousands of high-school seniors. Dietvorst told participants that the model, in general, produced an estimate that was off by an average of 17.5 percentage points.

Dietvorst also manipulated participants' performance goals by changing how much money they could earn for their predictions: payouts ranged from 40 cents for estimates that came within 5 percentage points of a student's actual score down to 10 cents for estimates within 35 points. Participants given a stronger incentive to make accurate estimates were more likely to rely on their own judgment than on the algorithm.
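A toy simulation helps show why a strict accuracy threshold can make a better-than-human algorithm unattractive. The error distributions below, and the human forecaster's average error, are illustrative assumptions, not figures from the study; only the algorithm's 17.5-point average error comes from the article.

```python
import random

random.seed(0)

def simulate_errors(mean_error, n=100_000):
    # Draw absolute forecast errors from an exponential distribution
    # with the given mean -- a stand-in for the real error process.
    return [random.expovariate(1 / mean_error) for _ in range(n)]

# Assumed: algorithm off by 17.5 points on average (as in the article),
# human off by 22 points on average (a made-up, slightly worse figure).
algo_errors = simulate_errors(17.5)
human_errors = simulate_errors(22.0)

def hit_rate(errors, threshold):
    # Fraction of forecasts accurate enough to clear the payoff threshold.
    return sum(e <= threshold for e in errors) / len(errors)

for threshold in (5, 35):
    print(f"within {threshold:>2} points: "
          f"algorithm {hit_rate(algo_errors, threshold):.2f}, "
          f"human {hit_rate(human_errors, threshold):.2f}")
```

Under these assumptions the algorithm clears either threshold more often than the human, yet both clear the strict 5-point threshold only about a quarter of the time. Someone judging the algorithm against the threshold, rather than against the human alternative, sees mostly failure.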

In another study, participants stuck with their own forecasts even when they believed the algorithm would outperform them, so long as they were also told that the algorithm didn't usually perform well enough to hit their performance goal.

Our tendency is to use human judgment as our default forecasting method, Dietvorst says. And when considering using an algorithm instead, we ask ourselves whether the algorithm will meet a specific performance target—when we should more reasonably ask whether it would produce better results than human judgment. “This leads people to use human judgment instead of algorithms, which usually outperform human judgment but often fail to meet our lofty performance goals,” says Dietvorst.

And we’re missing out by not using algorithms, the findings suggest. Consider self-driving cars, for example. “People may be hesitant to adopt self-driving cars that outperform human drivers but fail to meet their lofty goals for driving performance (i.e., perfection),” writes Dietvorst.
