In medicine, money, and many other domains, surrogate decision makers such as caregivers or financial planners are entrusted to make choices—sometimes vital ones—for other people. But these surrogates tend to make choices that the recipients wouldn’t make themselves.

Why are we so lousy at choosing for someone else what they would choose for themselves? We let our own preferences bias our judgments, according to research by Chicago Booth’s Stephanie Smith and Ohio State’s Ian Krajbich.

The researchers designed two experiments to resolve methodological inconsistencies in prior research on predictions. Much of the previous research on choices and preferences conflicts with itself, Smith says, and variations in study designs may be the culprit. Smith and Krajbich ensured that participants did not know each other; each participant knew only that they were making predictions and choices for a previous participant.

In the first experiment, the participants (whom the researchers call surrogates) were presented with a scenario that’s a basic version of what investment managers do for clients: choose between a sure $5 or a 50–50 shot at earning $10. Do you take a more conservative approach, or take a risk in hopes of a higher payout? The surrogates learned what six algorithms had chosen to do when presented with similar scenarios, then were asked to predict what those algorithms would do in this one.
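The sure $5 and the 50–50 shot at $10 have the same expected value, so what separates them is attitude toward risk. A minimal sketch of that idea (not the study's actual algorithms; the power-utility form and the `rho` parameter here are illustrative assumptions) shows how a risk-averse chooser ends up with the sure thing:

```python
# Illustrative sketch, not the study's algorithms: a chooser
# evaluates a sure $5 versus a 50-50 shot at $10 using power
# utility u(x) = x**rho. Both options have expected value $5,
# but rho < 1 (risk averse) favors the sure payout.

def utility(x, rho):
    """Power utility: rho < 1 is risk averse, rho = 1 is risk neutral."""
    return x ** rho

def choose(rho, sure=5.0, win=10.0, p=0.5):
    """Return 'sure' or 'gamble', whichever has higher expected utility."""
    eu_sure = utility(sure, rho)
    eu_gamble = p * utility(win, rho) + (1 - p) * utility(0.0, rho)
    return "sure" if eu_sure >= eu_gamble else "gamble"

# A risk-averse agent (rho = 0.5) takes the sure $5;
# a risk-seeking agent (rho = 1.5) takes the gamble.
print(choose(0.5))  # -> sure
print(choose(1.5))  # -> gamble
```

An algorithm built this way is perfectly consistent, which is why, as the study found, surrogates could learn its risk profile from a handful of past choices.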

Correct answers translated into points earned, and surrogates received an incentive to make accurate predictions. They also had to report what they themselves would choose.

The surrogates proved quite good at predicting what an algorithm would choose, doing so correctly 82 percent of the time. The algorithms had been crafted with different levels of risk aversion, and surrogates generally made more accurate predictions when their own risk tolerance (established by a standard test) matched that of the algorithm.

In the second experiment, another set of surrogates did much the same exercise, although this time they had to predict the behavior of people (the earlier surrogates) rather than algorithms. Even though humans are less predictable than an algorithm programmed to always deliver the same answer, this new set of surrogates still made correct predictions about 80 percent of the time. “Participants were just as accurate and well calibrated with other human participants as they were with computer algorithms,” the researchers write. “People are good at learning patterns, even when there’s some noise and inconsistencies,” Smith explains.

However, surrogates were better at predicting what people wanted than they were at acting on what they wanted, making choices consistent with what the recipient had expressed only 76 percent of the time.

This is in part because surrogates considered their own preferences early in the process and let those preferences bias their choices, the researchers argue. They also tended to choose what they thought the recipient should do to earn the most points, the study reports. Mouse tracking, a well-established process-tracing method in decision research, allowed the researchers to see whether surrogates changed their minds at any point by observing where the mouse cursor moved on the screen. Often, surrogates would move the mouse toward the option that they themselves wanted before correcting course and moving toward the choice their recipient would want.

“Often, it is best for the surrogate to choose what the recipient would choose for themselves,” the researchers write. “Despite being able to learn others’ preferences, surrogates do not always follow this path.” Smith advises that people who are making decisions for others, in whatever realm, should start by recognizing their own preferences and the influence they could have.

