And while political views may be partly responsible for people’s skepticism toward experts, research suggests people can trust politicians who disregard experts without necessarily disregarding those experts themselves. Booth’s Levine finds in a series of studies on prosocial lying, including with University of Pennsylvania’s Maurice E. Schweitzer, that people can trust someone while at the same time believing that person is a liar. “There are different kinds of trust,” Levine says. “We can trust in someone’s benevolence and goodwill while believing they are lying to us. Many people who support Trump might not believe what he’s saying while still believing he is fighting for them.”
Personal experience, moreover, can move people toward taking expert advice. In a study set in the US, University of Texas’s Olivier Coibion, University of California at Berkeley’s Yuriy Gorodnichenko, and Weber used the staggered introduction of coronavirus lockdowns to examine their impact on consumers’ spending habits, economic expectations, and trust in institutions. They find that, even controlling for how people voted in the 2016 election, being under lockdown increased a person’s esteem for the CDC. “I’d guess people were really scared of the situation, and looking for guidance and leadership,” says Weber. “They found it in the CDC.”
Recognizing knowledge and remaining humble
Science, however, evolves—as does expert opinion, which it informs. The CDC updated its guidance about COVID-19 as health professionals gained more information and evidence. But it takes time for such evidence to accumulate, and then for early research findings to be reviewed, tested, and ultimately accepted as bankable knowledge—or rejected and replaced by other hypotheses. Thus, it’s not necessarily antiscience to be skeptical of new information, argues one economics expert—just a recognition of the scientific process.
Booth’s Kevin M. Murphy says it’s important to distinguish between short-term news and long-term, established knowledge. The former is the “flow” of knowledge, which can be provocative but may soon enough be proven incorrect. This includes new ideas that may never have solid support, even if they come from people positioned as experts. The latter is the “stock” of knowledge that is valuable and has taken years to establish.
“Our best answers are likely to come when we can apply proven ideas and principles to new problems. Things often seem ‘unprecedented’ when we look at them in terms of the details but much less so when you think harder,” he says. “To me, the key in applying economics is to see the commonality with problems you have studied before.” What can we learn from past problems and existing ideas? What do they say about the current situation, including the limitations of new ideas? Murphy sees new theories as a last resort, and at convocations he has counseled new graduates to think “inside the box”—not just outside of it, however fashionable the latter may be.
Of course, when time is of the essence—such as in the midst of a global pandemic, when scientific understanding changes quickly—there may be value in new data and ideas. But that is when it is particularly important to depend on established knowledge, he maintains. A theory created to address the current challenge may be enticing but also shaky when time is most precious.
“Given all of this, I think it is important to be humble up front,” he says. “We need to be clear that we are still learning about the situation and that our ideas may change. To maintain confidence, we should avoid the most speculative predictions and earn people’s confidence through honesty and making clear the limits of our expertise. We know some things but not everything. We often forget that policy decisions must consider more than our expertise. Most importantly, we should view what we provide as input not as answers. We often say that people who disagree with us ‘are not listening,’ but they may be listening and just disagree.” Experts make mistakes, and to say or imply otherwise undermines their credibility.
University of Chicago’s and Booth’s Lars Peter Hansen, a Nobel laureate, explains in a Chicago Booth Review essay how to make sense of new information as it comes in. Data, he writes, though vital to scientific understanding, are also frequently open to interpretation. “While we want to embrace evidence, the evidence seldom speaks for itself; typically, it requires a modeling or conceptual framework for interpretation. Put another way, economists—and everyone else—need two things to draw a conclusion: data, and some way of making sense of the data.” He and others seek to model how to “incorporate, meaningfully acknowledge, and capture the limits to our understanding,” and to understand the implications of these limits. (For more, read “Purely Evidence-Based Policy Doesn’t Exist.”)
It can be extremely valuable, he writes in a different essay, for decision makers to be confronted with existing scientific evidence from multiple disciplines, while respecting the limits of this knowledge, in order to make informed decisions in the moment. In many settings, there is no single, agreed-upon model but rather a collection of alternative models with differing quantitative predictions. Take epidemiological models enriched to confront both health and economic considerations, for example, especially those used to inform policies such as lockdowns. “I worry when policy makers seemingly embrace one such model without a full appreciation of its underlying limitations. It can be harmful when decision makers embrace a specific model that delivers the findings that they prefer to see,” writes Hansen, expressing hope that we can stop pushing model uncertainties into the background of policy making. Hansen sees potential promise in efforts to integrate economics and epidemiology when building models aimed at guiding efforts to confront health and economic trade-offs in future pandemics. But there are knowledge limits in both fields that should be recognized when using such “integrated assessment models,” he says. (For more, read “How Quantitative Models Can Help Policy Makers Respond to COVID-19.”)
The gap between epidemiologists and economists, as between nonexperts and experts, can be wide—but much is to be gained by bridging it. Experts in any field can make mistakes by failing to recognize the reality in which others live, which affects how their advice will land.
Sapienza and Zingales conclude their paper on the divergent opinions of experts and nonexperts not by wagging fingers at ordinary Americans for not getting on board with expert opinion, but by urging experts to take off their professional blinders. “The context in which these [expert opinion poll] questions are asked induce economists to answer them in a literal sense,” they write. “Hopefully, the same economists, when they do policy advice, would answer the same questions very differently.”
Similarly, policy makers and the media can try to ask their questions from new angles, and with context in mind, much as the New York Times did in May. Rather than asking epidemiologists when they thought it would be safe to send children to school or step on an airplane, questions that alone might have yielded lots of “when there is a vaccine” responses, the newspaper asked epidemiologists when they planned to do these things themselves, forcing them to consider a wider range of factors than those contained in their area of expertise.
Still, as policy makers look not just to virologists but also to economists, sociologists, historians, and others for guidance on how to cope with this pandemic and its fallout, responses couched in caution and hedged for public consumption may feel inadequate. Here, Penn’s Tetlock offers some succor. His team’s efforts to identify superforecasters uncovered several factors that boost performance, and one was public scrutiny. Which is to say, today’s skeptical citizens, if truly paying attention, may be improving the very advice they are scrutinizing.