With the start of the American football season, pundits everywhere are stepping forward, declaring their predictions for the coming season. They are blogging and appearing on talk shows with confident pre-season power rankings, team-by-team analysis and Super Bowl predictions. There’s just one problem: they are unlikely to be correct. In fact, past studies have shown these “experts” to be no more accurate than the proverbial “dart-throwing monkeys”.
American football, like most sports, is representative of a complex system. Winning is the result of the interaction of two 53-player squads directed by dozens of coaches and supporting personnel. On game day, weather, crowd noise and officiating proficiency can all play a determining role in the outcome. Many other factors are impossible to judge in advance of the season:
- Will the new star acquired in the offseason “gel” with his teammates?
- Will any significant players suffer injuries or be arrested for an “off field” incident?
- Will a rookie “step up” and contribute unexpectedly?
Sports pundits are not alone in their inability to accurately predict the future. As I covered in a previous post, experts are typically ineffective in domains such as politics and economics as well. The researcher and author Philip Tetlock chronicled much of this folly in his landmark book, Expert Political Judgment: How Good Is It? How Can We Know?
While predicting sports events is simply good fun, inaccurate predictions in other domains can lead to serious problems. There is an entire industry of consultants who provide guidance to firms regarding strategy, sales projections, product introductions and economic futures. A parallel set of consultants offers similar guidance for government agencies considering particular initiatives.
In both cases, like our sports pundits, the consultants attempt to make specific, quantifiable predictions about outcomes that are driven by enormously complex factors. And much like our sports pundits, they are frequently, and tragically, wrong. A 2007 Department of Transportation report reviewed the sad state of predicting the ridership of light rail systems prior to their construction. Across 19 different projects, actual ridership exceeded projections in only 4 cases. In 8 of the 19 cases, the actual ridership was less than half the projected number!
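The gap those figures describe is worth making concrete. A little arithmetic on the aggregate counts from the report (these are only the totals cited above, not project-level data) puts it plainly:

```python
# Aggregate counts from the 2007 DOT report on light rail ridership forecasts
total_projects = 19
exceeded_forecast = 4        # actual ridership beat the projection
under_half_of_forecast = 8   # actual ridership was below 50% of the projection

print(f"Beat the forecast: {exceeded_forecast / total_projects:.0%}")
print(f"Fell below half the forecast: {under_half_of_forecast / total_projects:.0%}")
```

In other words, roughly a fifth of the projects met or beat their forecasts, while over forty percent missed by more than half.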
While the consultant collects their fees and moves on to the next project, the firm or municipality is left dealing with the aftermath. In the case of light rail projects, that means increasing fares, raising taxes or cutting service. Which leads to an interesting question: how could one establish a set of incentives that would hold consultants accountable for the accuracy of their predictions?
The late Charles Lave, who chaired the Economics Department at the University of California, Irvine, proposed an interesting solution back in 1991. In an article entitled “Playing The Rail Forecasting Game”, he suggested that consultants be required to post a bond to guarantee the accuracy of their predictions. Another variation on his theme would require consultants to place some of their fees in escrow until the project’s success could be determined.
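The escrow variation of Lave's idea can be sketched as a simple payout rule. The 50% escrow share and 25% accuracy tolerance below are purely illustrative assumptions on my part, not terms from his article:

```python
def escrowed_fee(fee, projected, actual, escrow_share=0.5, tolerance=0.25):
    """Pay part of the fee up front; release the escrowed remainder only if
    the actual outcome lands within `tolerance` of the projection.
    The escrow share and tolerance values are illustrative assumptions."""
    upfront = fee * (1 - escrow_share)
    forecast_error = abs(actual - projected) / projected
    released = fee * escrow_share if forecast_error <= tolerance else 0.0
    return upfront + released

# A ridership forecast that missed by more than 25% forfeits the escrowed half
print(escrowed_fee(100_000, projected=20_000, actual=9_000))   # 50000.0
# A forecast within tolerance earns the full fee
print(escrowed_fee(100_000, projected=20_000, actual=18_000))  # 100000.0
```

The bond-posting variant would work the same way in reverse: the consultant is paid in full but stands to lose the bond if the forecast misses.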
There would certainly be some logistical problems associated with these schemes. Some projects take years to complete and additional years to determine their success. Is it reasonable to expect anyone to wait that long to be paid for work they have completed? How much accuracy could anyone reasonably expect for this type of forecasting effort? What if factors beyond the consultant’s control adversely impacted the project (e.g. legislation, recession)?
Nevertheless, Lave’s proposal highlights the folly of making major decisions about complex endeavors based on expert prediction. Highly complex systems, by their nature, are difficult to model with any reasonable level of certainty. Human beings, by our nature, are prone to overconfidence, ready to provide prognostications to anyone willing to listen (or pay). The current protocol for consulting engagements creates no shared risk between client and vendor.
Forward-thinking enterprises would be wise not to base decisions on the “false precision” associated with the prediction of complex events. When depending on consultants for expert guidance, a shared-risk model for fees is one possible strategy for holding people accountable.
What are your thoughts? When does it make sense to engage consultants or analysts for predictions? Where have you seen this model successful in your own firm? Do you feel that alternative payment schemes that share risk are practical?