Predictions – Know How vs. Numbers

In 1954, the psychologist and researcher Paul Meehl published a book with a startling and controversial conclusion.  The book, Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, argued that a combination of rules, formulas and statistics could outperform human judgment in predicting human behavior and patient treatment outcomes.  Predictably, the book was met with a combination of disbelief and hostility from the scientific community.  Meehl’s concept was extremely threatening to those who had established a career and reputation based on their personal expertise and intuitive judgment.

Since Meehl’s landmark book, a broad array of studies has confirmed his thesis across many predictive domains beyond clinical psychology.  In 1996, Meehl and his colleague William Grove performed a meta-study that analyzed the results of 136 studies comparing clinical and statistical prediction.  The underlying studies examined predictions of criminal recidivism, job performance, supervisory potential and surgical outcomes.  Of the 136 studies, 64 favored the statistical approach, 64 showed equal accuracy and only 8 favored clinical prediction.

Despite the large amount of research supporting Meehl’s position, statistical prediction remains a largely unpracticed technique.  Professionals have a strong preference to remain independent decision makers, relying on a combination of personal observation, experience, training and intuition.  Turning control over to a statistical model seems counterintuitive, reckless, immoral and inhuman.  People typically raise the following reservations about the statistical approach:

  • There are some things that can’t be quantified and simply reduced to numbers.
  • I have a strong intuitive ability.  I can predict outcomes without being able to explain them.
  • How can I trust that this predictive model is accurate?
  • Each case is unique and has to be assessed individually.  Only a human can do that.

While each of these reservations is understandable, Meehl and others have shown that they are not sound objections to the use of statistical prediction.  A better understanding of probability, statistics, scientific research and cognitive biases would foster a greater appreciation of the power of the statistical prediction approach.

The hallmark of the statistical prediction approach is the statistical prediction rule (SPR), known alternatively as a clinical prediction rule.  SPRs are rules or algorithms designed by experts to forecast an outcome from a set of current observations or properties.  Typically, a group of experts identifies a set of potentially diagnostic or predictive characteristics.  These characteristics are then statistically correlated with known outcomes to determine their validity as a predictive model.
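As a concrete sketch of this validation step, the snippet below scores a handful of toy historical cases with hand-picked weights and measures how well the scores correlate with the known outcomes.  Everything here is an illustrative assumption: the feature names, weights and data are invented for the example, not drawn from any real study.

```python
# Hypothetical sketch: validating a candidate SPR against historical outcomes.
# Feature names, weights and data are illustrative, not from any real study.

def spr_score(case, weights):
    """Weighted sum of the expert-chosen characteristics."""
    return sum(weights[f] * case[f] for f in weights)

def pearson(xs, ys):
    """Pearson correlation between predicted scores and observed outcomes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy historical data: each case pairs characteristics with a known
# outcome (1 = the predicted event occurred, 0 = it did not).
history = [
    ({"prior_offenses": 0, "rule_violations": 0}, 0),
    ({"prior_offenses": 1, "rule_violations": 0}, 0),
    ({"prior_offenses": 3, "rule_violations": 2}, 1),
    ({"prior_offenses": 5, "rule_violations": 4}, 1),
]
weights = {"prior_offenses": 1.0, "rule_violations": 0.5}

scores = [spr_score(case, weights) for case, _ in history]
outcomes = [outcome for _, outcome in history]
print(round(pearson(scores, outcomes), 2))
```

A high correlation on held-out historical cases is what justifies trusting the rule; a rule that fails this check never graduates to use.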

Let’s look at a sample SPR.  An area of great societal importance is the granting of parole to prisoners.  The goal is to release prisoners who will not commit additional crimes.  The traditional (i.e. expert) model for predicting recidivism involves a comprehensive process of interviews, reviews of the inmate’s behavior while incarcerated and reviews of his or her criminal record.  Ultimately, a team of experts (i.e. Parole Board members) votes on an outcome.

As an alternative to this methodology, a simple SPR has been developed.  It incorporates three historical factors: the type of crime, the number of past offenses and the number of violations of prison rules.  In a 1988 study of 1,035 convicts in Pennsylvania who were eligible for parole, the SPR dramatically outperformed the expert approach in predicting recidivism.  The experts’ predictions showed virtually no correlation with actual outcomes.  The SPR, although modest in performance (a 22% correlation), was dramatically more capable.
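To make the shape of such a rule concrete, here is a minimal sketch of a three-factor scoring rule of the kind described above.  The severity coding, weights and cutoff are hypothetical assumptions chosen for illustration; they are not the parameters of the actual rule from the 1988 study.

```python
# Illustrative sketch of a three-factor recidivism SPR.
# The factors mirror those in the text; the severity coding, weights
# and cutoff are hypothetical, not the actual 1988 rule's parameters.

CRIME_SEVERITY = {"property": 1, "drug": 2, "violent": 3}  # assumed coding

def recidivism_score(crime_type, past_offenses, prison_violations):
    """Higher score = higher predicted risk of reoffending."""
    return (2 * CRIME_SEVERITY[crime_type]
            + past_offenses
            + prison_violations)

def recommend_parole(crime_type, past_offenses, prison_violations, cutoff=8):
    """Mechanical recommendation: grant parole below the risk cutoff."""
    return recidivism_score(crime_type, past_offenses, prison_violations) < cutoff

print(recommend_parole("property", 1, 0))   # low-risk case
print(recommend_parole("violent", 4, 3))    # high-risk case
```

The point is not the particular numbers but the mechanism: three facts go in, a recommendation comes out, and the same facts always produce the same recommendation, with no influence from hunger, mood or time of day.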

What could explain the ineffectiveness of the human expert model?  A separate study of an Israeli parole board sheds light on the matter.  In this startling research, the percentage of decisions to grant parole was materially affected by food breaks: there was a significant correlation between the time of day a case was heard and the parole decision.  When cases were heard at the beginning of the day or immediately after a break, parole was granted approximately 65% of the time.  Cases heard just before a break had nearly a 0% chance of a favorable ruling!  This is just one example of the fallibility of human prediction and decision making.

If there is so much evidence that SPRs can beat expert decision making, why is there so much resistance to their use?  There are several possible explanations.  First, we tend to overrate the effectiveness of our intuitions.  When something “feels right” to an expert, it is hard to accept that a cold, lifeless formula yields the opposite, and correct, prediction.  A significant body of research shows that people tend to be overconfident and to overestimate their abilities across a wide array of domains.  Additionally, many people view prediction rules as a threat to their professional worth: if a rule or a piece of software can make the prediction, who needs my expertise?  Finally, many people, unaware of the research and statistical science behind prediction rules, fail to appreciate their capabilities.

I would be remiss if I didn’t acknowledge that SPRs have been well accepted in several domains.  Banks routinely use SPRs to determine creditworthiness for loans and credit cards.  SPRs have also taken hold in some areas of medical practice, including the diagnosis of bone fractures.  However, there is still very little usage of SPRs considering the vast amount of research demonstrating their effectiveness.

In conclusion, organizations frequently face repetitive decisions that involve judgment or forecasting of outcomes.  Where a large history of outcome data exists, SPRs can be a valuable tool for improving decision making.  Forward-thinking organizations would be well advised to understand the mechanics of creating and implementing SPRs.
