Predicting vs. Forecasting the Future

What is yet to happen cannot be predicted with accuracy, but it can be forecast with varying degrees of probability. A prediction is a subjective judgment of what someone believes will or might happen. It is optimistic or fatalistic in nature, depending largely on the predictor's prevailing knowledge and the human proclivity to prejudge unknown events. Since no one can know everything and stuff does happen, prediction is invariably wrong. Forecasting, when done well, is a scientific extrapolation based on what is known at the time and an awareness of how that will be affected by changing conditions. Hence, unlike prediction, it ought to be free of intuition, speculation and bias. Prediction involves binary specificity: something will or won't happen. Forecasting is a statement of probabilities.
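The contrast between binary prediction and probabilistic forecasting can be made concrete with the Brier score, a standard squared-error measure for grading probability forecasts (the sample forecasts below are invented for illustration):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    0.0 is a perfect score; an always-say-50% forecaster scores 0.25."""
    return (forecast - outcome) ** 2

# A binary predictor must commit: "it will happen" means p = 1.0.
# A forecaster can hedge: "70% likely" means p = 0.7.
# Suppose the event does not occur (outcome = 0):
print(round(brier_score(1.0, 0), 2))  # 1.0  -> maximal penalty for the predictor
print(round(brier_score(0.7, 0), 2))  # 0.49 -> the hedged forecast is penalized less
# ...and if the event does occur (outcome = 1):
print(round(brier_score(0.7, 1), 2))  # 0.09 -> small penalty for imperfect confidence
```

Scoring rules like this reward calibration rather than bravado: over many questions, a well-hedged forecaster's average score beats that of a confident predictor who is sometimes badly wrong.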

Perhaps the most famous study of expert predictions was conducted by a Canadian, Philip Tetlock. At a conference 35 years ago, he listened intently as renowned authorities in their various fields of expertise delivered their reasoned predictions of what would happen. He was struck by how many contradicted each other or were impervious to counter-arguments. His subsequent research was a twenty-year study in which 284 highly educated and experienced experts offered up 82,361 prophecies about what would unfold in the future.

The result: these highly credentialed oracles were horrific predictors of what would occur in their own specialized domains. Their years of experience and expertise (including, for some, access to classified government information) made no difference to the accuracy of their prognostications. They were bad at short-term predictions and even worse at long-term ones. And as these bona fide authorities amassed ever more information supporting their views, they became more dogmatic about them. Like most of us, once a pundit jumps to a conclusion, she'll either ignore or explain away conflicting evidence, a hard-wired tendency we all possess called confirmation bias.

In Tetlock's brilliant study, two-thirds of the expert predictions simply did not occur, leading him to conclude that the experts were, in his words, about "as accurate as a dart-throwing chimp." Many in this eminent research population declared certain future events "quite impossible to occur," yet 15% of them happened nonetheless. When their prophecies were pronounced "a sure thing," more than a quarter failed to transpire. Nassim Nicholas Taleb (the black swan guy) is harsher in his assessment of the ability of highly credentialed authorities to predict the future: "You can be an intellectual yet still be an idiot."
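Findings like "impossible events happened 15% of the time" are statements about calibration: the match between stated probabilities and observed frequencies. A minimal sketch, with counts invented to mirror the percentages above:

```python
# Each bucket maps a stated probability to (number of forecasts made at
# that probability, number of forecast events that actually occurred).
# Counts are hypothetical, chosen to echo the reported percentages.
buckets = {
    0.0: (100, 15),  # "impossible to occur": 15 of 100 happened anyway
    1.0: (100, 73),  # "a sure thing": 27 of 100 failed to transpire
}

for stated, (n, hits) in buckets.items():
    observed = hits / n
    gap = abs(observed - stated)
    print(f"stated {stated:.0%}, observed {observed:.0%}, calibration gap {gap:.0%}")
```

A perfectly calibrated forecaster's "70% likely" events would occur about 70% of the time; the experts' gaps of 15 and 27 points are what "worse than a dart-throwing chimp" looks like in numbers.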

When confronted with their errors, most of Tetlock's experts would not admit to a flaw in their thinking. When they missed wildly, it was declared a "near miss": in their view, if just one little thing had gone differently, they would have nailed it. Or, as Einstein reputedly said, "If the facts don't fit the theory, change the facts." And the very worst at seeing the future had the highest confidence in their predictions. (This is the Dunning-Kruger effect: the least able are often the most overconfident in their abilities.)

The track record of predictors in science, economics, politics and certainly the weather is profoundly dismal. In business, esteemed and lavishly compensated foretellers of future events are routinely and often wildly wrong in their predictions of everything from stock-market corrections to housing booms. In a decade-long study of the accuracy of currency-rate forecasts by 22 international banks, including the world's largest and most prominent, not one accurately predicted the end-of-year exchange rate. In six of the ten years examined, the actual rate fell outside the entire range of all 22 forecasts. The frequently ignored question is why so many CFOs continue to rely on such unreliable predictions.

Credible insight into the future is possible, but it requires a style of thinking that is uncommon among those who like to believe the depth of their knowledge has granted them a special grasp of what is to come. One must become a forecaster rather than a predictor. This skill requires embracing a multitude of disciplines, being open rather than closed minded, and practicing integrative and contrarian thinking: the ability to hold contradictory worldviews and be comfortable reaching tentative conclusions. Tetlock differentiated these thinking styles by borrowing from the ancient Greek poet Archilochus: "The fox knows many things, but the hedgehog knows one big thing." The fox hunts by integrating a variety of strategies; the hedgehog relies on a single big one. Tetlock found these nicknames useful.

Hedgehogs are highly specialized, deeply and tightly focused in their thinking. They are what I call analyticals. Some have spent their entire careers using one thinking mode to fashion tidy theories about how the world works, based on the rational abstractions of predecessors who examined problems through the single lens of their specialty. Foxes, conversely, "draw from an eclectic array of traditions, and accept ambiguity and contradiction." Where hedgehogs represent narrowness, foxes embody breadth, which is why the forecasting success rate of foxes consistently trounces that of hedgehogs.

In a different study of 3,200 experts, Tetlock and his spouse Barbara Mellers found that the foxiest forecasters were “bright people with extremely wide-ranging interests and unusually expansive reading habits.” In an era of infoglut, too many of us have become skimmers of information instead of careful, deep readers. Not only are foxes the best forecasters, they tend to have qualities that make them particularly effective collaborators. They are “curious about everything,” easily cross discipline boundaries and view teammates as sources for learning, rather than as peers who need to be convinced their thinking is wrong.

A decade ago, the U.S. Office of the Director of National Intelligence launched a massive forecasting tournament open to the general public. Over the course of four years, thousands of non-experts were asked to answer about 500 questions in the format: "Will this happen … within the next three years?" Competitors who failed to achieve certain benchmarks of accuracy were eliminated from the competition. After two years, only one team, led by Tetlock and Mellers, remained. Another national contest for "superforecasters" followed. The winners of this competition applied a mode of reasoning known as probabilistic thinking, in which forecasts are regularly updated with the latest available information. In other words, their thinking evolved as circumstances changed. What distinguishes superforecasters is that they know which information is relevant, which isn't, and how to find it.
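The keep-updating style described above can be sketched with Bayes' rule: start from a base rate and revise the probability each time new evidence arrives. All the numbers here are hypothetical:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Revise P(event) after one piece of evidence, via Bayes' rule."""
    numer = prior * p_evidence_if_true
    denom = numer + (1 - prior) * p_evidence_if_false
    return numer / denom

# Start from a 20% base rate, then fold in two independent signals,
# each three times likelier to appear if the event is really coming
# (60% if true vs. 20% if false).
p = 0.20
for _ in range(2):
    p = bayes_update(p, 0.6, 0.2)

print(round(p, 3))  # 0.692 -> the forecast drifts up as evidence accumulates
```

The point is not the particular numbers but the discipline: each signal nudges the probability rather than flipping a binary verdict, which is exactly how a forecast "evolves as circumstances change."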

When our predictions come true, our beliefs are reinforced and rigidity sets in. Foxes think differently: when outcomes take them by surprise, they adjust their ideas. Hedgehogs, conversely, stick to their initial beliefs and then update their theories in the wrong direction, becoming even more convinced of the rightness of their prediction and of the value of the thinking that led them astray.

There are six levels of proof: the gold standard is the meta-analysis of scientific studies; the lowest is the opinion of experts. As Bertrand Russell said, "Even when all the experts agree, they may well be mistaken." By contrast, the best forecasters view their ideas as no more than hypotheses in constant need of testing and updating. When they make a bet and lose, they embrace the logic of a loss as they would the reinforcement of a win. In a word, this is called learning.