Nate Silver is no fan of political pundits.
“It turns out they’re no better than monkeys throwing darts,” the statistician and political forecaster told an audience at the Gleacher Center on February 15, poking fun at pundits whose forecasts are routinely wrong because, he argued, they do not think systematically about polling data.
Silver suggested pundits could make more accurate predictions by doing a better job of using data. That approach has earned him fame, and it drew a full crowd to the Myron Scholes Global Markets Forum, a distinguished speakers series sponsored by Booth’s Initiative on Global Markets. Before and after giving the keynote address to alumni, faculty, and members of the general public, Silver signed copies of his book, The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t.
Silver, who studied economics as an undergraduate at the University of Chicago, initially worked as an economic consultant before he began work on a statistical system that could predict the performance of baseball players. He then spent five years as a baseball analyst when that sport was being shaped by what he called the “nerd revolution” that would soon reach politics.
In 2008 Silver founded the political blog FiveThirtyEight.com and predicted the US presidential election, correctly calling the results in 49 of 50 states. In the 2012 election, he correctly predicted the outcomes in all 50 states. His blog, which now appears on the New York Times website, reportedly drew a fifth of the newspaper’s online readers in the days leading up to the presidential election.
At his talk, Silver summarized how he models political elections by combining polling data with economic fundamentals such as jobs figures, inflation, stock prices, industrial production, consumption, personal income, and GDP.
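The article does not give Silver's actual model, but the blend it describes can be sketched as a weighted average of a poll average and a "fundamentals" score. Everything below, including the weights, the index construction, and the function names, is an illustrative assumption, not his published method:

```python
# Hypothetical sketch of blending polls with economic fundamentals.
# The weights and the fundamentals index are made up for illustration;
# they are not Silver's actual model.

def poll_average(polls):
    """Simple mean of a candidate's vote share across several polls."""
    return sum(polls) / len(polls)

def fundamentals_index(indicators):
    """Toy 'fundamentals' score: the mean of standardized economic
    indicators (jobs, inflation, stocks, etc.) shifted to a vote share."""
    return 50.0 + sum(indicators) / len(indicators)

def forecast(polls, indicators, poll_weight=0.75):
    """Weighted blend of the poll average and the fundamentals index.
    A common design choice is to raise poll_weight as the election
    nears, since late polls carry more signal than fundamentals."""
    return (poll_weight * poll_average(polls)
            + (1 - poll_weight) * fundamentals_index(indicators))

# Example: three polls around 50% plus two standardized indicators.
print(round(forecast([51.0, 49.5, 50.5], [1.2, -0.4]), 2))
```

The point of the weighting is the one Silver makes in the talk: no single poll (or single data point) should dominate the estimate.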
Silver described ways that predictions can go wrong, for example when politicians or their supporters cherry-pick polls, focusing on results they like and failing to get a full picture of the polling universe. That in turn can lead to misleading news reports. “The media is bad with outcomes that are not fifty-fifty or one hundred-zero,” Silver said. “Everything is either too close to call or sure to happen, where most real-world outcomes are somewhere in between.”
He cautioned that some analysts tend to overrate the newest data point, while others ignore new evidence and stick to a preexisting narrative. After the second presidential debate last fall, Silver recalled, some reports acknowledged that President Obama had outperformed Mitt Romney (after underperforming him in the first debate) and yet proclaimed the debate a tie, citing Romney’s momentum.
Bad data can lead to the wrong conclusions: when pollsters call only landlines they end up with a skewed sample, for example, as many young voters use cell phones.
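One standard fix for the sampling skew described above is post-stratification: reweight each demographic group's measured support from its share of the (skewed) sample to its share of the electorate. The numbers and group labels below are invented for illustration:

```python
# Hypothetical post-stratification example. Suppose a landline-only poll
# reaches young voters at 10% of the sample even though they are 30% of
# the electorate; reweighting corrects the headline number.

def reweight(support_by_group, population_shares):
    """Candidate support reweighted to population group shares."""
    return sum(support_by_group[g] * population_shares[g]
               for g in population_shares)

sample = {"young": 0.10, "older": 0.90}       # skewed sample composition
support = {"young": 0.60, "older": 0.45}      # support within each group
population = {"young": 0.30, "older": 0.70}   # true electorate composition

raw = sum(support[g] * sample[g] for g in sample)  # naive, skewed estimate
adjusted = reweight(support, population)           # corrected estimate
print(round(raw, 3), round(adjusted, 3))
```

In this toy example the naive landline estimate understates the candidate's support (0.465 versus 0.495), which is exactly the kind of systematic error a careful pollster must correct for.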
Asked about the increasing amount of money in politics, Silver expressed doubt that the spending is worthwhile. Paying for a commercial to run hundreds of times in one market can have “diminishing returns,” he said. He predicted that money invested by Super PACs will affect US congressional elections more than it will affect presidential elections.
As for what comes next, Silver said he’s interested in education but questioned whether using data-driven approaches to evaluate students and teachers could be effective. He said he hasn’t exported his political forecasting to other countries in part because it can be more challenging to model elections that involve numerous political parties.
He also said, to laughs, that while people have asked him to predict the outcome of the papal election, he’s unlikely to weigh in on that topic until he sees better data. —Emily Lambert