
Forecasting in a Pandemic World (Part 1)

This is the first of a multi-part blog on predictions expressed as probability statements, using a Tetlock framework. Part 2 will explore the use of probabilities, with particular focus on a quantitative approach. Part 3 will consider the local real estate market and what predictions mean in this context. The Good Judgment Project has a coronavirus outbreak page, which I’ll review in Part 4. Multiple sources publish statistics and projections on Covid-19; in Part 5 I’ll compare the Good Judgment Project approach to the rest.

I recently attended a teleconference call by a real estate economist, focused mainly on the local real estate market. He was competent and provided good information (if possibly a bit overconfident), including a number of categorical predictions hedged with the usual weasel words (“probably,” “most likely”). That’s expected from most sources, but I think a series of probability statements would be more helpful. A major point of reference here is Median Voter Guy’s reviews of two books by Philip Tetlock: Expert Political Judgment and Superforecasting. The local market (Austin area) is doing okay so far, even with the current pandemic, and, the economist believed, will continue to do well, with some areas (like Williamson County) doing better than others.

One of the problems with many experts is behaving like hedgehogs (single-minded) rather than foxes (willing to look at all available information). Hedgehogs may be right, but their overall prediction accuracy is often questionable (see below and Part 2). They can be authoritative forecasters, direct, forceful, and fearless, which makes great press and earns them considerable air time. With few people actually measuring accuracy, they are home free.* When these are financial experts, the losers are the actual investors relying on them. To add insult to injury, the same prognosticators often claim they were right all along.
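Measuring accuracy is straightforward once forecasts are stated as probabilities. The standard tool, and the one used in Tetlock’s forecasting tournaments, is the Brier score: the mean squared difference between the forecast probability and what actually happened. Here is a minimal sketch in Python; the forecasts and outcomes are made-up illustrations:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and
    realized outcomes (1 = the event happened, 0 = it did not).
    0.0 is perfect; an always-50% forecaster scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up illustration: an overconfident "hedgehog" who calls
# every question at 90% versus a better-calibrated "fox".
outcomes = [1, 0, 1, 1, 0]            # what actually happened
hedgehog = [0.9, 0.9, 0.9, 0.9, 0.9]
fox = [0.8, 0.3, 0.7, 0.6, 0.2]

print(round(brier_score(hedgehog, outcomes), 3))  # 0.33
print(round(brier_score(fox, outcomes), 3))       # 0.084
```

(Superforecasting uses a variant that sums the squared error over both answer options, which doubles these values for yes/no questions; the ranking is unchanged.)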

Super-forecasting

The importance of improving forecasting, and who is really good at it (“super-forecasters”), is the focus of Superforecasting: The Art and Science of Prediction. The U.S. intelligence community discovered this need after the disastrous insistence that Iraq had weapons of mass destruction (“a slam dunk” according to CIA Director George Tenet); the CIA and others never explored the possibility that they could be wrong. Afterward, the intelligence community funded research on forecast accuracy, in which Tetlock played a significant part. Some of the participants, given training and considerable practice, proved to be expert forecasters. The book summarizes how to improve predictions in an appendix checklist, with these key points: focus on questions that have the best payoff (the “Goldilocks zone”) and stay away from the unpredictable; break complex problems into manageable sub-problems, separating knowable from unknowable parts, and be careful about assumptions; strike a balance between inside and outside views, starting with the outside view and using an anchoring-and-adjustment approach; keep adding new evidence (“incremental belief updating”); look for clashing causal forces, arguments versus counterarguments; apply the right level of skepticism (experience helps), balancing prudence against decisiveness; look for the errors behind earlier mistakes while guarding against hindsight bias; use team management and consider multiple views; and practice, practice, practice (consider Gladwell’s 10,000-hour rule).

Tetlock has his Ten Commandments for Aspiring Super-forecasters (the Appendix to Superforecasting):

  1. Triage: “Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.”

  2. “Break seemingly intractable problems into tractable sub-problems. … Decompose the problem into its knowable and unknowable parts. Expose and examine your assumptions.”

  3. “Strike the right balance between inside and outside views.” Macro data, widely available and relevant, supplies the outside view; insiders have their own micro perspective. Both are necessary for predictions, starting with the outside information.

  4. “Strike the right balance between under- and overreacting to evidence” … including indicators that are not obvious. [Super-forecasters] are incremental “belief updaters.” … “Superforecasters are not perfect Bayesian updaters, but they are better than most of us.” (A minimal sketch of Bayesian updating follows this list.)

  5. “Look for the clashing causal forces at work in each problem. … For every good policy argument there is typically a counterargument. … In classical dialectics, thesis meets antithesis, producing synthesis. … Synthesis is an art that requires reconciling irreducibly subjective judgments.”

  6. “Strive to distinguish as many degrees of doubt as the problem permits but no more. … George Tenet would not have dared utter ‘slam dunk’ about weapons of mass destruction in Iraq if the Bush 43 White House had enforced standards of evidence.”

  7. “Strike the right balance between under- and overconfidence, between prudence and decisiveness. … [Super-forecasters] routinely manage the trade-off between the need to take decisive stands (who wants to listen to a waffler?) and the need to qualify their stands.”

  8. “Look for the errors behind your mistakes but beware of rearview-mirror hindsight bias. Don’t try to justify or excuse your failures. … Although the more common error is to learn too little from failure and to overlook flaws in your basic assumptions, it is also possible to learn too much. … Not all successes imply that your reasoning was right.”

  9. “Bring out the best in others and let others bring out the best in you. Master the fine arts of team management, especially perspective taking (understanding the argument of the other side …), precision questioning … and constructive confrontation.”

  10. “Master the error-balancing bicycle. … Learning requires doing, with good feedback which leaves no ambiguity. … Super-forecasting is the product of deep, deliberative practice.”
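Commandment 4’s “belief updating” has a textbook form in Bayes’ rule: a prior probability moves up or down according to how much more likely the new evidence is if the hypothesis is true than if it is false. Here is a minimal sketch in Python; the numbers (a 60% prior, a jobs report twice as likely in a healthy market) are hypothetical illustrations, not anything from Tetlock:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of
    evidence, via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: start at 60% that the local market stays healthy.
# A strong jobs report is assumed twice as likely in a healthy
# market (0.7 vs. 0.35), so the belief moves up, but not to 100%.
belief = 0.60
belief = bayes_update(belief, 0.7, 0.35)
print(round(belief, 2))  # 0.75
```

Super-forecasters do this informally and incrementally, nudging their probabilities as each new piece of evidence arrives rather than lurching between extremes.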

*President Donald Trump is a rare exception on the inaccuracy count. The Washington Post started tabulating Trump’s “lies, damned lies and statistics” (actually “false and misleading claims”) during his first 100 days, an average of five a day. The Post count was over 16,200 as of January 20, 2020.
