How to Tell Which Forecasters to Take Seriously
Forecasting is a tricky business, but there are a few ways to tell which forecasters are worth paying attention to and which are just noisy pundits.
Making predictions is hard, and the flood of information available in the digital age may make seeing the future even more difficult. Sifting through the noise to find the bits of information that actually hint at what’s coming is no small task.
Nate Silver’s book The Signal and the Noise works through the challenge of separating meaningful forecasting information from irrelevant data. After careers in baseball analytics and professional poker, Silver founded the election forecasting website FiveThirtyEight. He knows better than most not only the frustration of creating forecasts under high uncertainty, but also the blowback a forecaster can face when an unlikely outcome occurs.
Silver sums the challenge up in an early passage in his book’s introduction:
“…making better first guesses under conditions of uncertainty is an entirely different enterprise than second-guessing. Asking what a decision maker believed given the information available to her at the time is a better paradigm than pretending she should have been oracular.”
Precision Disguises Accuracy Problems
There’s a particularly good reason not to treat forecasters as oracles: a large number of predictions are precise, but not accurate.
Precise forecasts may run to several decimal places or quote exact “one in however many” odds, painting a crisp picture of an event’s likelihood. But precision doesn’t make them right. As Silver puts it:
“One of the pervasive risks that we face in the information age…is that even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening. This syndrome is often associated with very precise-seeming predictions that are not all that accurate…This is like claiming you are a good shot because your bullets always end up in about the same place—even though they are nowhere near the target.”
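Silver’s marksmanship analogy maps onto the standard statistical distinction between precision (how tightly grouped estimates are) and accuracy (how close they are to the truth). Here’s a minimal Python sketch of that distinction; all the numbers are invented for illustration:

```python
import statistics

# Hypothetical scenario: two forecasters repeatedly estimate the same
# true value. The numbers below are made up purely to illustrate the idea.
TRUE_VALUE = 100.0

precise_but_biased = [70.1, 70.3, 69.9, 70.2, 70.0]   # tight cluster, far from target
accurate_but_noisy = [92.0, 108.0, 97.0, 111.0, 95.0]  # scattered, centered on target

def describe(name, estimates):
    spread = statistics.stdev(estimates)                   # precision: grouping of shots
    error = abs(statistics.mean(estimates) - TRUE_VALUE)   # accuracy: distance from target
    print(f"{name}: spread={spread:.2f}, mean error={error:.2f}")

describe("Precise but inaccurate", precise_but_biased)
describe("Accurate but imprecise", accurate_but_noisy)
```

The first forecaster’s estimates land within a few tenths of each other, yet miss the truth by about 30; the second’s bounce around but average out almost exactly right. Precision alone tells you nothing about which is happening.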
Silver gives the example of recession odds leading up to the 2008 Financial Crisis: economists’ forecasts at the time put the chance of a recession at roughly 1 in 500. He also estimates that ratings agencies underestimated the default risk of the securities that crashed the economy by “a factor of 200.”
“Precise forecasts masquerade as accurate ones, and some of us get fooled and [double down] our bets,” Silver wrote. “It’s exactly when we think we have overcome the flaws in our judgment that something as powerful as the American economy can be brought to a screeching halt.”
Look at the Long Run
One of the tricky things about evaluating forecasters is that unlikely events do happen. Suppose an event occurs, but even with the benefit of hindsight it was reasonable to believe it had only a 10% chance of happening. An accurate forecaster would have predicted that 10% chance rather than claiming, after the fact, that the event was 90% certain to come to pass.
“If you forecast that a particular incumbent congressman will win his race 90 percent of the time, you’re also forecasting that he should lose it 10 percent of the time,” Silver wrote. “The signature of a good forecast is that each of these probabilities turns out to be about right over the long run.”
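Silver’s criterion is what statisticians call calibration: events forecast at 90% should occur about 90% of the time over a forecaster’s track record. Below is a minimal sketch of how one might check that against a list of (forecast probability, outcome) pairs; the data are invented, and grouping by exact probability is just one simple choice:

```python
from collections import defaultdict

# Made-up track record: each pair is (forecast probability, what happened),
# where outcome 1 means the event occurred and 0 means it didn't.
forecasts = [
    (0.9, 1), (0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1),
    (0.1, 0), (0.1, 0), (0.1, 1), (0.1, 0), (0.1, 0),
]

# Group outcomes by the probability the forecaster assigned.
buckets = defaultdict(list)
for prob, outcome in forecasts:
    buckets[prob].append(outcome)

# A well-calibrated forecaster's observed frequencies track the stated probabilities.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"Forecast {prob:.0%}: happened {observed:.0%} of the time "
          f"({len(outcomes)} forecasts)")
```

In practice, forecasts would be grouped into probability bins (say, 80–90%) and judged over many more predictions, since long-run frequencies only become meaningful with a large sample.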
It’s impossible for forecasters to achieve 100% accuracy, but some navigate uncertainty better than others. The Signal and the Noise is a great overview of the challenges forecasters face, and of how to tell which forecasters to take seriously and which are merely chasing noise.