There is no single, perfect economic forecasting model—one method to always reach for when forecasting GDP or modeling a company’s earnings. A forecasting model that outperforms others in good times might prove wildly inaccurate during a financial crisis or sudden pandemic. Moreover, new regulations, changing interest rates, and stimulus packages can all alter economic conditions.
But when do you use one model rather than another? It’s difficult to pinpoint and test for the inflection points that indicate when it makes sense to switch. University of Heidelberg postdoctoral scholar Stefan Richter and Chicago Booth’s Ekaterina Smetanina put forward a framework for analyzing how forecasting models perform as conditions change.
“Current methodology doesn’t allow for that,” says Smetanina. “The testing that is done assumes there is one model that always wins. But for this to be true, there would basically have to be no changes. Our tests allow for the possibility that the best models will change over time.”
Consider two race cars, each representing a forecasting model, set loose on a track with minimal room to accelerate before they begin to encounter obstacles in their lanes. Depending on how much room they have to get up to speed and what obstacles they encounter, one car might seem faster than the other at any given point. But how do you determine which car is fastest overall?
A researcher can’t rely on the typical method, which is to use most of the available historical data to estimate the model’s parameters and then use a smaller, more recent set of data to evaluate it. The point at which researchers draw the line between the set used for estimation and the set used for evaluation is known as the “splitting point.” Researchers try to pick a splitting point that leaves enough data in the first set to build the model and enough in the second to evaluate it.
The process, says Smetanina, “is relatively ad hoc. There’s a lot of judgment and no methodology in the literature. But the choice is crucial.”
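To see why that choice matters, here is a minimal sketch in Python. Everything in it is illustrative: the data are simulated with a mid-sample regime change, and the two “models” are hypothetical stand-ins (a rolling mean and a last-value rule), not the models the researchers study. The point is only that the out-of-sample verdict can hinge on where the split falls.

```python
# A toy version of fixed-split forecast evaluation with simulated data.
import numpy as np

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=200))    # early, trending random-walk regime
noise = walk[-1] + rng.normal(size=200)   # later, mean-reverting regime
y = np.concatenate([walk, noise])         # conditions change mid-sample

def rolling_mean_forecast(history, window=10):
    return np.mean(history[-window:])

def last_value_forecast(history):
    return history[-1]

def out_of_sample_mse(series, split, forecaster):
    """Estimate on series[:split]; evaluate one-step forecasts on the rest."""
    errors = [(forecaster(series[:t]) - series[t]) ** 2
              for t in range(split, len(series))]
    return np.mean(errors)

# Which "model" looks better can depend on the ad hoc choice of split.
for split in (50, 150, 300):
    mse_a = out_of_sample_mse(y, split, rolling_mean_forecast)
    mse_b = out_of_sample_mse(y, split, last_value_forecast)
    print(f"split={split}: rolling mean {mse_a:.2f} vs last value {mse_b:.2f}")
```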
The proposed method avoids the splitting-point problem and instead examines each model’s performance against the other’s as a function of time—in effect, says Smetanina, comparing the performance of the two race cars over “all terrain.”
The result is a more robust model-selection process. Rather than relying on historical data chopped into two epochs, Richter and Smetanina developed a technique for estimating how well models perform across changing conditions.
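One way to picture an “all terrain” comparison, as a simplified illustration rather than the authors’ actual estimator, is to smooth the period-by-period loss differences so that relative performance becomes a curve over time instead of a single number. The Gaussian kernel and bandwidth below are assumptions made for this sketch.

```python
# Kernel-smoothed loss differential: relative performance as a function of time.
import numpy as np

def local_loss_differential(loss_a, loss_b, bandwidth=30):
    """Return a smoothed series of loss_a[t] - loss_b[t].
    Values below zero suggest model A is winning around date t; above zero, model B."""
    d = np.asarray(loss_a) - np.asarray(loss_b)
    t = np.arange(len(d))
    smoothed = np.empty(len(d))
    for i in t:
        w = np.exp(-0.5 * ((t - i) / bandwidth) ** 2)  # Gaussian kernel weights
        smoothed[i] = np.sum(w * d) / np.sum(w)
    return smoothed
```

Plotting such a curve would show the lead changing hands over the sample, which a single aggregate MSE comparison hides.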
Ultimately, the researchers created two methods for comparing forecasting performance—one that aggregates past losses to measure historical performance, and another that predicts which model is most likely to outperform in the future. In a changing environment, it helps to have both perspectives, Smetanina explains: a model that has performed well in the past won’t necessarily continue doing so, and conversely, a model that hasn’t worked well before could do well in the future.
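A toy version of those two perspectives might look like the sketch below, under strong simplifying assumptions: the working paper’s procedures are formal statistical tests, not this averaging-and-extrapolation shortcut, and the linear trend fit is purely illustrative.

```python
# Two crude summaries of a loss-differential series d[t] = loss_A[t] - loss_B[t].
import numpy as np

def historical_winner(d):
    """Aggregate the past: a negative average loss differential favors model A."""
    return "A" if np.mean(d) < 0 else "B"

def predicted_winner(d, recent=50):
    """Look ahead: fit a line to the recent differential and extrapolate one step."""
    d = np.asarray(d)
    t = np.arange(recent)
    slope, intercept = np.polyfit(t, d[-recent:], 1)
    next_d = slope * recent + intercept
    return "A" if next_d < 0 else "B"
```

The two functions can disagree, which is exactly the situation the article describes: the historically dominant model need not be the one most likely to win next period.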
Stefan Richter and Ekaterina Smetanina, “Forecast Evaluation and Selection—a New Approach,” Working paper, March 2020.