
Berkeley J. Dietvorst

Assistant Professor of Marketing

Phone: 1-773-834-8781
Address: 5807 South Woodlawn Avenue, Chicago, IL 60637

Berkeley Dietvorst’s research focuses on understanding how consumers and managers make judgments and decisions, and how to improve them. His main focus, thus far, has been on when and why forecasters fail to use algorithms that outperform human forecasters, and on prescriptions that increase consumers’ and managers’ willingness to use algorithms.

Dietvorst’s other research examines topics such as order effects on consumer choice, choice architecture, and consumers’ reactions to corporate experiments. His research has been published in the Journal of Experimental Psychology: General and Management Science, and has been covered by media outlets such as the Financial Times, Harvard Business Review, and The Boston Globe.

Dietvorst earned both a BS in economics and a PhD in decision processes from The Wharton School, University of Pennsylvania.

 

2016 - 2017 Course Schedule

Number  Name                 Quarter
37000   Marketing Strategy   2017 (Winter)
37601   Marketing Workshop   2017 (Spring)

REVISION: People Reject (Superior) Algorithms Because They Compare Them to Counter-Normative Reference Points
Date Posted: Dec 23, 2016
People often choose to use human forecasts instead of algorithmic forecasts that perform better on average; however, it is unclear what decision process leads people to rely on (inferior) human predictions instead of (superior) algorithmic predictions. In this paper, I propose that people choose between forecasting methods by (1) using their status quo forecasting method by default and (2) deciding whether or not to use the alternative forecasting method by comparing its performance to a counter-normative reference point that is often independent of the performance of the default. This process leads people to reject a superior algorithm when (1) the algorithm serves as their alternative forecasting method and (2) the algorithm performs better than their default forecasting method but fails to meet their reference point for forecasting performance. I present the results of five studies that are consistent with this decision process. In Studies 1 through 4, participants were less ...

REVISION: Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them
Date Posted: Aug 06, 2016
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts and those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not of a desire for greater control over the forecasting outcome, as participants’ preference for ...

REVISION: Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err
Date Posted: Jun 11, 2015
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was ...