Dietvorst and Bharti also explored how the level of uncertainty in a decision-making setting interacts with diminishing sensitivity to error. The researchers assigned participants to forecasting tasks with varying levels of uncertainty and had them choose between their own predictions and those made by an algorithm. Even though the algorithm always followed the best possible forecasting strategy, its odds of a perfect forecast went down as uncertainty went up. This would be the case for any forecaster, algorithmic or otherwise—and yet participants became more likely to choose their own predictions as the level of uncertainty increased.
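The logic is easy to simulate. In the Python sketch below (an illustration only, not the researchers' materials; the tolerance for what counts as a "perfect" forecast and the noise levels are assumed), the forecaster always predicts the mean of the outcome distribution, which is the best any point forecast can do, yet its probability of a perfect hit falls as irreducible noise grows.

```python
import numpy as np

# Illustrative sketch, not the study's materials. An optimal forecaster
# predicts the mean of the outcome distribution; as irreducible noise grows,
# the chance of landing within a "perfect" tolerance of the realized outcome
# shrinks, even though no forecasting strategy could do better.
rng = np.random.default_rng(0)
TOLERANCE = 1.0      # assumed: how close a forecast must be to count as "perfect"
N_TRIALS = 100_000

for noise_sd in (0.5, 1.0, 2.0, 4.0):
    outcomes = rng.normal(loc=50.0, scale=noise_sd, size=N_TRIALS)
    forecast = 50.0  # the best possible point forecast: the true mean
    perfect = np.abs(outcomes - forecast) <= TOLERANCE
    print(f"noise sd {noise_sd}: P(perfect forecast) ≈ {perfect.mean():.2f}")
```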
This held even when participants recognized that the algorithm performed better on average. What mattered more, the researchers find, was participants' belief that their own predictions, though less accurate on average, had a better chance of being perfect.
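One simple way to model diminishing sensitivity is with a concave disvalue function over error, so that each additional unit of error stings less than the one before it. The toy numbers below are assumptions for illustration, not figures from the study, but they show how a forecaster with worse average error can still feel better under such a function.

```python
import numpy as np

# Toy illustration (invented numbers, not the paper's): diminishing
# sensitivity is modeled as felt disvalue = error ** ALPHA with ALPHA < 1,
# so the disvalue curve is concave and big misses hurt less per unit.
ALPHA = 0.5  # assumed curvature; any value below 1 produces the same pattern

rng = np.random.default_rng(1)
n = 100_000

# "Human": occasionally perfect, otherwise far off (high-variance errors).
human_err = np.where(rng.random(n) < 0.3, 0.0, 10.0)
# "Algorithm": never perfect, but consistently closer (low-variance errors).
algo_err = np.full(n, 5.0)

for name, err in [("human", human_err), ("algorithm", algo_err)]:
    print(f"{name:>9}: mean abs error = {err.mean():.2f}, "
          f"mean felt disvalue = {(err ** ALPHA).mean():.2f}")
```

With these numbers, the algorithm wins on mean absolute error (5 vs. 7), yet the human's error pattern, a mix of perfect hits and big misses, carries slightly less felt disvalue (about 2.21 vs. 2.24). That is the preference for variance at work.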
The findings have implications for business decision makers and the general public, who often face what the researchers call “irreducible uncertainty”: situations in which complete certainty isn’t possible until the outcome is known. For example, there’s no way for managers to know next quarter’s product demand until the quarter ends. If they bypass a superior algorithmic forecast and trust a human instead, hoping for perfection, they’ll end up with lower average forecast accuracy in the long run, which could mean unnecessary inventory costs.
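That long-run cost is straightforward arithmetic. In the hedged sketch below (the demand distribution, error spreads, and per-unit cost are invented for illustration, and over- and understocking are assumed to cost the same), a manager who consistently prefers the noisier human forecast pays more in expected inventory misses each quarter.

```python
import numpy as np

# Illustrative only: with equal unit costs for over- and understocking,
# expected inventory cost is proportional to mean absolute forecast error.
rng = np.random.default_rng(2)
quarters = 10_000
demand = rng.normal(1000, 100, size=quarters)

algo_forecast = demand + rng.normal(0, 50, size=quarters)   # smaller errors
human_forecast = demand + rng.normal(0, 80, size=quarters)  # larger errors

COST_PER_UNIT_MISS = 2.0  # assumed cost of each unit over- or understocked
for name, fc in [("algorithm", algo_forecast), ("human", human_forecast)]:
    cost = COST_PER_UNIT_MISS * np.abs(fc - demand)
    print(f"{name:>9}: average quarterly miss cost ≈ {cost.mean():,.0f}")
```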
Similarly, beliefs that human drivers are more likely than an algorithm to make a perfect decision could cause us to delay adopting self-driving technology, even if such technology would dramatically improve traffic safety on average.
People’s diminishing sensitivity to error and preference for variance could penalize some algorithms less than others. Although he hasn’t studied whether or how these preferences vary across different types of algorithms, Dietvorst says that because machine-learning algorithms can improve their performance over time, people may be more likely to believe such algorithms are capable of perfect or near-perfect forecasts in the future.
When comparing an ML algorithm to one that’s constant, “you might believe that a machine-learning algorithm gives you a relatively higher chance of being near perfect,” Dietvorst says, “even if the past performance that you’ve seen from the two is identical.”