“If I Can Imagine It, It Must Be Likely” – The Availability Heuristic

We live in a world filled with news stories about frightening tragedies and potential risks. In just the past six months, we had a horrifying school shooting and a sickening bombing at a marathon. We’ve been threatened by a rogue regime with nuclear annihilation. On a more mundane level, we’ve been warned about additional influenza threats and informed that horse meat has been surreptitiously used in certain food products.

All of these stories give rise to a certain level of predictable fear, in turn driving an internal monologue. Are my children safe at their school? Should I attend that concert next week? Did everyone in our family get a flu shot this year? That hamburger I ate last night tasted kind of strange.

While fear is an unpleasant emotion, it’s a dominant and important one for all higher-order animals. In a prehistoric world filled with danger, fear was essential to avoiding harm, and an oversensitivity to fear wasn’t necessarily a bad quality. Is that rustling in the grass a poisonous snake or just the wind? Better to assume the former and quickly run away. Is that a dangerous and angry rival in the distance? I’ll guess that it is and quickly hide.

But this same dynamic that made us safer in a prehistoric world can be counterproductive, even crippling, in a modern society. A 24-hour news cycle, along with pervasive social media, bombards us with graphic images of tragedies and threats. We’re tortured by the horrifying photos of the bloody victims of the marathon bombing. We’re haunted by the picture of small children being led away from the Newtown school shooting. We fearfully watch videos of military parades in North Korea, filled with columns of dangerous-looking missiles.

Unlike the prehistoric world, where fear was appropriately calibrated to prevailing threats, in a modern environment it can lead to inappropriate risk calculations and counterproductive behaviors. Psychologists have conducted significant research into the way people perceive risks and estimate the likelihood of events. Much of this research centers on a phenomenon known as the availability heuristic.

The psychologists Daniel Kahneman and Amos Tversky coined the term availability heuristic (or availability bias) as part of seminal research they conducted in 1973. In this research, they demonstrated that people overestimate probabilities when a scenario is easy to recall or envision. Rather than the risks and dangerous situations discussed above, Kahneman and Tversky initially demonstrated availability bias in some fairly mundane settings.

In one experiment, subjects were presented with several letters of the alphabet and asked to judge whether each letter appeared more frequently as the first or the third letter of a word. The letters K, L, N, R and V all occur more frequently as the third letter in English words. Yet in every case, subjects believed that the letters occurred more frequently in the first position, by a ratio of approximately 2:1. Kahneman and Tversky believed that the effect occurred because it is far easier to recall words that start with a particular letter than words with that letter in the third position. That is, these “first position” words were more “available” to the subjects.
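For the curious, the underlying claim about letter positions is easy to check yourself. Below is a minimal sketch that tallies first- versus third-letter positions in a word list; the /usr/share/dict/words path is an assumption (any English word list will do), and note that a dictionary count weights every word equally, so it only approximates the running-text frequencies the study was concerned with.

    # Count how often each letter appears as the first vs. the third letter of a word.
    # The word-list path below is an assumption; substitute any English word list.
    from collections import Counter

    first, third = Counter(), Counter()
    with open("/usr/share/dict/words") as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3 and word.isalpha():
                first[word[0]] += 1
                third[word[2]] += 1

    for letter in "klnrv":
        print(f"{letter.upper()}: first position {first[letter]}, third position {third[letter]}")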

As part of this same research, Kahneman and Tversky conducted an additional experiment. Subjects were split into two groups and asked to listen to a tape-recorded list of 39 names. For one group, the list contained 19 famous male names and 20 less famous female names. For the other group, the list contained 19 famous female names and 20 less famous male names. Out of 86 participants, only 13 recalled a greater number of the less famous names; out of 99 subjects, 80 believed that their tape-recorded list contained more famous than less famous names. Again, Kahneman and Tversky concluded that ease of recall led people to miscalculate the frequency of a scenario.

In additional research in 1978, Lichtenstein et al. examined how people assess the probability of lethal events. They presented subjects with a list of 41 causes of death with widely varying frequencies. As predicted, subjects were inclined to overrate the frequency of death under particular conditions. Specifically, disproportionate exposure, memorability or imaginability were linked to higher judgments of frequency. For example, subjects believed that homicide was more common than stomach cancer, even though the latter was five times as common! Sensational and vivid events such as tornadoes and floods were perceived as more common than their actual occurrence, while less dramatic causes of death such as asthma and diabetes were underestimated. Lichtenstein et al. also found a strong relationship between newspaper coverage of deadly events and perceived frequency.

Additional research, conducted by Sherman et al. in 1985, looked at risk assessment in a novel way. It showed that simply imagining a hypothetical scenario could increase an individual’s perception of the likelihood of that event. In the study, subjects were told that a new disease was becoming more prevalent in their area. They read a list of symptoms of this new disease and were asked to rate their likelihood of contracting the illness on a scale of 1 to 10. The subjects were broken into four groups, each presented with a variation on this task. In one condition, the subjects simply read the list of symptoms. In a contrasting condition, the subjects were asked to actually imagine themselves experiencing the symptoms. The type of symptoms also varied: in one case, the symptoms were common and vivid (e.g. low energy, headaches); in the other, the symptoms were abstract and difficult to imagine (e.g. a poorly functioning nervous system, an inflamed liver).

  • Group 1 – Symptoms: Easy to imagine / Exercise: Imagine
  • Group 2 – Symptoms: Easy to imagine / Exercise: Read list of symptoms
  • Group 3 – Symptoms: Difficult to imagine / Exercise: Read list of symptoms
  • Group 4 – Symptoms: Difficult to imagine / Exercise: Imagine

As the researchers had hypothesized, the subjects in Group 1 predicted the greatest likelihood of contracting the disease. The combination of easy-to-imagine symptoms and an exercise of visualizing the condition significantly elevated their perception of risk. At the opposite end of the spectrum, the subjects in Group 4 predicted the lowest likelihood of contracting the disease. Attempting to imagine symptoms that were difficult to visualize produced the lowest perception of risk.

Personal Takeaways – Understanding availability bias, and recognizing when it could influence your behavior, is a very valuable capability. Our tendency to overestimate the dangers of terrorism, crime and severe weather can cause us to live in unwarranted fear and take unnecessary precautions. Our bias toward underestimating the dangers of common diseases can lead us to keep undesirable dietary habits, to avoid medical exams and to be noncompliant with prescription drug regimens. Conditions that are asymptomatic, such as hypertension or colon cancer, will be viewed as unlikely, when they are in fact common killers.

Professional Takeaways – Availability bias can have unfortunate influences in the workplace as well. Often, as we look to make judgments and decisions, we are captivated by vivid imagery and memories. Just as in our personal lives, our sense of risk and probability can be distorted by availability bias. Any decisions that require calculations of risk are prone to this problem.

A new data center will be meticulously planned to withstand the effects of adverse weather events. While these events are extremely rare, their destructive effects are vivid and easy to imagine. A far more common threat is the outage caused by process failures and operational errors. Although these are a bigger contributor to downtime in most data centers, they are difficult to envision and don’t evoke compelling imagery.

We often judge co-workers based on vivid memories. An individual who made significant contributions to a highly visible effort and received an award will be well remembered. The individual who asked the uncomfortable question at the Town Hall will be closely associated with that moment. In either case, that one episode represents a minor part of their total body of work, but it will typically shape a large portion of your opinion of that individual. It may also serve as a rationalization (or confirmation) of your existing opinion. If you already thought favorably of our award recipient, the award will confirm your belief that he is a star. If you originally had negative feelings, it will reinforce your view that he is a showboat, taking an unfair share of the credit. Our Town Hall questioner will be viewed either as a nuisance or as a maverick.

In summary, on a routine basis we need to consider the odds of an event occurring. Most often, we do so in an ad hoc fashion, relying on memory and instinct. This leads to availability bias, in which we miscalculate the probability of the event. The key to avoiding availability bias is to use actual statistical odds, rather than intuition, when attempting to determine the likelihood of an event.


The NFL Draft – Steals, Busts and The Science of Recruiting

This past week, an important rite of spring took place: the annual NFL draft. This highly followed process allows pro football teams to replenish their rosters with fresh talent from the college ranks. It has become a media spectacle, with fans and pundits alike ranking the prospective draftees and grading teams after they have made their picks. Arguably, there is no recruiting process in America in which an institution invests more time and resources. Which brings up an interesting question: does the NFL draft provide useful lessons for other institutions, such as corporations, regarding the recruitment of personnel?

Before answering that question, let’s look at some important aspects of the draft process. First, the draft is an essential element for staffing. The signing of free agents represents the only other viable way for a team to improve their roster. Teams are limited by league rules to having 53 players on their active roster. In a typical year, teams are able to draft somewhere between 6 and 10 new players. Selecting the best possible players is an important element in a team’s future success.

Because the draft process is so critical, teams invest huge amounts of resources in an attempt to make the best possible recruiting decisions. They have full-time scouts who attend college games, watching and grading every play of the top prospects. These scouts review endless amounts of game film, re-watching countless plays in slow motion, from multiple angles. They attend the NFL Combine, where players are put through a series of standardized drills and tests, rating everything from speed to strength to intelligence. Teams will then bring their top prospects to their headquarters for personal workouts and interviews.

During the draft itself, each team sets up a sophisticated war room. This NASA-like environment has owners, general managers, coaches and scouts huddled over laptops, video screens and posters – chock full of key player data. As the draft unfolds, the team is constantly reevaluating the remaining talent, considering trades with other teams, and ultimately landing on their choice for each round.

Upon completion of the draft, it’s become a ritual to grade teams on the effectiveness of their picks. This starts instantly with tweets, blog posts and articles, and frequently becomes part of team folklore many years later. Those players who ultimately outperform expectations are labeled steals, while those who significantly underperform are called busts. The classic example of the former is Tom Brady, a sixth-round pick (199th player selected) in the 2000 draft, widely regarded as one of the greatest quarterbacks of all time. The epitome of busts is Ryan Leaf, the second pick of the 1998 draft, who had a short, unproductive career marked by poor play, injuries and behavioral incidents.

There has been significant debate amongst journalists, academics and fans regarding the ability of teams to effectively judge the talent of prospective draftees. The conventional wisdom seems to be that teams differ in their ability to distinguish talent, with some organizations consistently making the better selections. However, some folks, including noted author Malcolm Gladwell, have made the alternative claim that forecasting the success of prospective NFL players, especially quarterbacks, is not really possible. This school of thought believes that there is a huge element of luck involved in making effective draft selections.

The economist Cade Massey has done some of the most significant research on this topic. His findings blend the conventional man-on-the-street viewpoint with the skeptical view of Gladwell et al. He recently presented his findings in a talk entitled Flipping Coins in the War Room: Skill and Chance in the NFL Draft, delivered at the MIT Sloan Sports Analytics Conference. Based on extensive research of previous NFL drafts, Massey arrived at the following conclusions:

  • Teams are able to do a pretty good job of identifying talent. Those players selected earlier in the draft, on average, have more successful pro careers than those selected in later rounds.
  • Teams are roughly equal in their draft selection prowess. While there may be significant variance in a single year, over the long term, the differences are small.
  • While teams are generally able to identify the better players, there is still a large component of luck within the selection process. It is not unusual for later picks to have more successful careers than earlier selections.
  • Since the selection process is prone to error and personal bias, utilizing a broad set of independent evaluators provides the greatest level of assessment accuracy (wisdom of the crowds).
  • Since there is an element of chance, a smart strategy is to amass a greater number of picks (usually done by “trading down”).

Now let’s compare the NFL draft to the recruiting process typically used in corporate America. In traditional institutional settings, recruiting tools have historically been limited to a classic triad: resumes, interviews and references. These three devices have significant limitations as selection tools. First, resumes are documents created by the candidate, designed to offer a favorable rather than critical view. At best, they can weed out candidates who are completely “off base” for the advertised position or serve as a discussion document for the interview. Interviews have historically been conducted in unstructured formats; that is, each interviewer uses their own personal style, conducting the interview in an ad hoc fashion. A significant body of research has found that unstructured interviews are poor predictors of future job success. The last tool, references, is, much like resumes, supplied by the candidate. It is rare to find an individual who can’t locate someone willing to speak highly of their accomplishments and potential. As such, self-supplied references, like unstructured interviews, are a weak predictive tool.

In addition to using recruiting tools that are inferior to those used by NFL teams, corporations face an even greater challenge in mapping prospective employees to new job roles. While there are definite differences between college and pro football, and from team to team, a draftee’s college role maps fairly well to their pro role. A quarterback is still a quarterback, with roughly equivalent responsibilities. In the corporate world, the actual duties and success criteria associated with jobs of equivalent title can be quite varied. As opposed to the NFL, where each team is playing the same game, with the same positions and consistent rules and goals, the corporate world has a much greater degree of variability. Companies can have vastly different sizes, cultures, mandates and objectives. A job in one firm (e.g. Financial Analyst) can have a wildly different set of responsibilities in another firm.

So, what should corporations do to maximize the effectiveness of their recruiting process? Let’s start by acknowledging some of the findings outlined above:

  • Predicting the future performance of recruits is a difficult business, fraught with errors, biases and randomness
  • Many corporate recruiting efforts use a limited set of ineffective tools
  • Improving corporate recruiting effectiveness should focus on creating a more robust, comprehensive and standardized process

With these findings in mind, the following practices would seem to be sensible adds to corporate recruiting practices:

  • Structured Interviews – Utilizing predetermined, standardized questions allows for more accurate comparison of candidates. Standardized scoring processes allow for consistent rating practices by interviewers.
  • Assessments – A large body of research indicates that assessment tests are a more reliable predictor of future performance than interviews. Consider the NFL draftees: at the combine, and in team workouts, they are given what amounts to a comprehensive and standardized assessment test. In the corporate world, similar tests are available for skills, aptitude and personality across different job types. The best way to design and validate assessment tests is to use existing personnel. You can compare their results on particular tests to their historical success at your firm, allowing you to customize tests that are optimized for your particular environment. (A brief sketch of this validation step follows this list.)
  • Independent Evaluation – Ensure that interviewers complete their grading and assessment of candidates prior to discussing the situation with other interviewers. This prevents each interviewer from influencing the assessment of other evaluators. By including a broad set of raters, you can create a “wisdom of the crowds” effect.
  • Independent Reference Checking – In today’s age of social media, it’s not difficult to locate independent references. The best references are those located by the recruiter who are intimately familiar with the candidate’s capabilities and history, familiar with the hiring firm’s culture, and trustworthy themselves.
  • Try/Buy – Even with the comprehensiveness of the NFL draft process, busts, and even moderate disappointments, still occur. One would expect no greater capability from even a world-class, best-practices corporate recruiting program. A simple way to gain greater insight into a candidate’s capabilities and suitability for a position and culture is through a try/buy program. In this case, the candidate is brought on as a temporary or provisional employee for some probationary period. After observing the candidate’s performance “in real life,” a decision can be made about hiring them on a permanent basis.
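To make the assessment-validation idea above concrete, here is a minimal sketch that correlates test scores with historical performance ratings for existing employees. All numbers are hypothetical placeholders; in practice they would come from your own testing and HR records.

    # Correlate assessment-test scores with historical performance ratings.
    # All values below are hypothetical placeholders for illustration only.
    from statistics import correlation  # requires Python 3.10+

    test_scores  = [62, 71, 55, 80, 90, 67, 74, 85]           # hypothetical test results
    perf_ratings = [3.1, 3.4, 2.8, 3.9, 4.5, 3.0, 3.6, 4.2]   # hypothetical annual ratings

    r = correlation(test_scores, perf_ratings)
    print(f"Pearson r between test score and performance rating: {r:.2f}")

A strong positive correlation suggests the test has predictive value for your environment; a weak one suggests the test should be redesigned or dropped.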

While each of these practices can improve the reliability of your recruiting program, they all have limitations and drawbacks. In some cases they add to the expense of the effort. In other cases they lengthen the recruiting process. Other practices, such as try/buy, may be unacceptable to particular candidates. The tightness of the employment market and the desirability of your firm as an employer will both factor into the creation of an optimal recruiting program. Your goal is finding the sweet spot that provides the right amount of predictive power without excessive “overhead”.


The Boston Marathon Tragedy – Standing Strong In The Face of Terrorism

The barrage of tragic images continues to haunt us. The horrible pictures and video show people’s lives painfully altered in a split second. Sickening accounts pour forth from eyewitnesses and trauma surgeons. Stark, depressing shots of the aftermath reveal debris-filled and blood-soaked streets. Then we hear the heartbreaking stories of the actual people who were hurt or killed. We are racked with fear and uncertainty, with an endless series of negative thoughts cycling through our minds:

  • Was anyone I know hurt in the incident?
  • That could have been me; I’ve run in these types of events.
  • I’m flying next week; will my plane be a target?
  • Is this the new normal; will there be constant bombings?

Terrorism continues to be a popular tactic precisely because it evokes these types of reactions in people. For a relatively small “investment”, it generates a massive amount of fear, economic loss and publicity. In an age of social media, all of these effects are dramatically amplified, offering free “advertising” for the sick individual, or movement, sponsoring the act.

Why is it that terrorist acts are so effective? Why do they stoke our fears in ways that “ordinary” risks don’t? Is there anything we can do to combat our fears and put this incident into perspective? Psychologists have long understood that people have innate tendencies to react to different types of risk in poorly calibrated ways. From an evolutionary perspective, this makes sense. Our ancestors who were able to quickly recognize a threat and react swiftly to it (e.g. running for cover) were able to survive and pass down their genes. It was better to have a high number of false alarms than to miss the one threat that resulted in your demise.

Unfortunately, in a world of pervasive media, our inherent tendencies work against us, causing us to miscalculate risks and worry about the wrong things. A significant body of research has demonstrated a behavioral tendency known as availability bias. Availability bias occurs when people associate the probability of an event with how easy it is to envision that event. A classic 1978 study showed availability bias at work. Subjects were asked to guess the likelihood of death resulting from different causes. On average, they believed that accidents and disease caused approximately equal numbers of deaths; in fact, diseases cause about 16 times as many deaths as accidents. Similarly, they thought that homicide was a more frequent cause of death than suicide, when in fact suicide causes twice as many deaths.

Researchers believe that availability bias occurs because some events are more vivid than others.  Accidents, by their very nature, conjure up horrifying images of mangled cars or planes. On the other hand, respiratory disease, which kills more people than all accidents combined, is tougher to visualize. Similarly, homicides are constantly covered in the media, in graphic detail. Suicides, though twice as common, receive minimal news coverage, and are an infrequent subject of television or movie drama.

Terrorist acts have a tendency to impact people even more than accidents and conventional homicides. They are highly vivid and dramatic events. They create an endless trail of disturbing imagery that is instantly recallable. This makes them appear more likely and more threatening than they actually are. This was demonstrated by a recent survey by the National Consortium for the Study of Terrorism and Responses to Terrorism (START), a research effort based at the University of Maryland. In the survey (taken prior to the Boston Marathon tragedy), 15% of respondents reported thinking about the prospect of terrorism in the United States during the preceding week, compared to only 10% who reported thinking about the possibility of hospitalization or of becoming the victim of a violent crime. Let’s compare that to the actual data about these risks. Since 9/11, and prior to the Boston tragedy, there have been 26 terrorist acts on U.S. soil, resulting in 22 deaths. In contrast, in 2010 alone, there were over 1.2 million violent crimes, including 16,000 homicides. And in 2009 alone, almost 8% of the population had an overnight hospital stay.
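A rough back-of-the-envelope comparison makes the contrast plain. The sketch below uses only the figures cited above; the U.S. population figure and the length of the post-9/11 window are assumptions added for the arithmetic.

    # Rough annualized comparison of the risks discussed above.
    us_population   = 310_000_000    # assumed, early-2010s
    years_since_911 = 11.5           # September 2001 to spring 2013, approximate

    terror_deaths_per_year = 22 / years_since_911   # 22 deaths across 26 attacks
    homicides_2010         = 16_000
    violent_crimes_2010    = 1_200_000
    hospital_stay_rate     = 0.08                    # ~8% of the population in 2009

    print(f"Terrorism deaths per year:       ~{terror_deaths_per_year:.0f}")
    print(f"Homicides per year (2010):       {homicides_2010:,}")
    print(f"Violent crimes per year (2010):  {violent_crimes_2010:,}")
    print(f"Odds of a hospital stay (2009):  1 in {1 / hospital_stay_rate:.1f}")
    print(f"Annual odds of homicide (2010):  1 in {us_population / homicides_2010:,.0f}")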

Another psychological tendency that makes terrorist acts scarier than everyday risks is known as the illusion of control. The illusion of control is the tendency to believe that we can influence events or phenomena that we actually cannot control. A trivial example is the superstition of blowing on dice or crossing one’s fingers. Another simple example is the “body English” employed by bowlers attempting to modify the course of a ball they have already thrown.

But beyond these minor examples from superstition and sport, there are serious consequences of the illusion of control. Despite the fact that flying on commercial planes is significantly safer than driving, many people choose to drive, believing that because they are in control, they can guarantee their safety. Since 9/11, there have been no successful acts of terrorism against U.S.-based carriers, and only one fatal crash of a domestic carrier involving large commercial planes (i.e. mainline as opposed to regional jets). But the illusion of control makes us more comfortable in our own car, directing our vehicle’s course and speed. It also makes us relatively less comfortable in large public settings, where we feel no control over potential risks.

Another unfortunate byproduct of terrorist events exploits the natural human tendency to seek order in randomness. We see “faces” in clouds and look for meaning in coincidences. This is once again our ancestral nature betraying us. In a prehistoric world filled with threats, quickly identifying patterns of potential harm was a useful attribute. In the modern world of instant news, this pattern-matching tendency leads to hypersensitivity to potential threats. After the initial two explosions at the marathon, a fire at a nearby library was immediately, and incorrectly, linked to the bombings. A letter containing ricin, addressed to a Senator, raised immediate alarms of broader, connected terrorist activity. Any suspicious news story over the next several months will be viewed through the lens of potential terrorism. This again provides free amplification of the impact of a single actual event.

It’s important that we don’t allow the demented, morally bankrupt perpetrators of this act to achieve their objectives. Recognizing our natural tendency to react unproductively to vivid tragedies can allow us to reflect more rationally about the future. Let’s stop, take a deep breath, and look at some statistics. There were 8.4 million domestic flights last year on scheduled, commercial airlines. If the world were somehow turned upside down and terrorists could consistently destroy one plane a month, your odds of being on an ill-fated flight would be close to one in a million. Iraq, with one of the highest rates of terrorism in the world, saw a peak of activity in 2005, with just over 13,000 people killed. Even in this war zone, in its worst year, a citizen had about a 1 in 2,500 chance of being killed in a terrorist attack. Neither of these scenarios is even remotely likely.
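The airline arithmetic above is simple enough to verify, using only the figures already mentioned:

    # Working through the hypothetical "one plane a month" scenario above.
    flights_per_year    = 8_400_000   # scheduled domestic commercial flights
    hypothetical_losses = 12          # one destroyed per month, as imagined above

    odds = flights_per_year / hypothetical_losses
    print(f"Chance a given flight is one of them: 1 in {odds:,.0f}")

This works out to roughly 1 in 700,000 per flight, the same order of magnitude as the one-in-a-million figure above.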

We can’t say for sure what the future will bring regarding terrorist activity, and we can never live in a world of perfect safety. But based on the historical data of the last 40 years, even with a large increase in frequency, terrorism is unlikely to represent a significant source of risk for the average citizen. There will continue to be many other, greater risks (e.g. car accidents, heart disease) that we have traditionally lived with each day without being paralyzed by fear. Don’t allow this one tragic incident to change your life or your concerns. Stay strong, continue your daily routines, and don’t permit these pathetic individuals to accomplish their twisted objectives.


That’s Just Not Normal – Power Laws

Most people are familiar with the idea of a bell curve. This statistical concept describes the distribution of outcomes of many commonly observed processes and natural phenomena. A classic example is the normal distribution of human height. The average American adult male is approximately 5’9″. About 68% of men are within 3″ of that mean (i.e. 5’6″ to 6′), and 95% of all men are within 6″ of the mean (5’3″ to 6’3″). Therefore, only 5% of adult American males are either taller than 6’3″ or shorter than 5’3″. In a normal distribution, heights are clustered heavily around the mean, with outliers being rarer occurrences. The mean and median are equal; that is, there are as many people above the average as below it. Additionally, the entire range of values is relatively small: the tallest person to have ever lived is less than 5 times as tall as the shortest person. Other examples of measurements that fit roughly to a normal distribution include SAT and IQ scores. The graphic below shows our example of average height, with the classic bell-shaped curve.

[Figure: The normal distribution of adult male height – a classic bell curve]
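Those percentages can be checked directly against a normal model. Here is a quick sketch using Python’s standard library, assuming a mean of 69 inches and a standard deviation of 3 inches (the standard deviation is implied by the 68%-within-3-inches figure rather than stated above).

    # Check the height figures against a normal distribution with the assumed parameters.
    from statistics import NormalDist  # Python 3.8+

    height = NormalDist(mu=69, sigma=3)   # inches; parameters are assumptions, see above

    within_3in = height.cdf(72) - height.cdf(66)
    within_6in = height.cdf(75) - height.cdf(63)
    over_6ft3  = 1 - height.cdf(75)

    print(f"Within 3 inches of the mean: {within_3in:.0%}")   # ~68%
    print(f"Within 6 inches of the mean: {within_6in:.0%}")   # ~95%
    print(f"Taller than 6 ft 3 in:       {over_6ft3:.1%}")    # ~2.3%, roughly half of the 5% in the two tails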

 

But there is another commonly observed distribution type with a much different set of characteristics. This class of distributions is governed by a concept known as power laws. As opposed to the clustering around the mean seen in normal distributions, power law distributions are skewed so that a small number of outcomes have dramatically higher values than the remaining population. Typically, a large number of values also fall below the mean. Let’s look at a quick example. The size of U.S. cities follows a power law distribution. A small number of very large cities constitute a big percentage of the overall population, while a very large number of small cities represent a small portion of it. Out of 25,000 places, just the top 20 cities contain roughly 10% of the population. Also, as opposed to height, the ratio of the largest city to the smallest spans several orders of magnitude: New York, with 8.24 million people, is 2 million times larger than Lost Springs, Wyoming. The graphic below shows the classic shape of a power law distribution.

[Figure: The classic shape of a power law distribution]

 

Other well-known examples of power law distributions are the popularity of surnames, best-selling books and the most popular websites. In each case, a small number of “top performers” (think Smith, Gladwell or Google) are dramatically larger than a huge number of marginal performers. A good way to think of power law distributions is that a handful of the largest items account for a markedly disproportionate percentage of the combined value of the overall distribution.

Let’s examine our surname example. Similar to the population of cities, the top 20 names in the U.S. accounted for over 8% of the population. The leading name, Smith, accounted for over 1% of the total by itself. On the other end of the scale, there are tens of thousands of surnames with fewer than 2,500 occurrences, less than 1/1000 as common as Smith.
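A quick synthetic illustration of this “handful of items dominate” idea: draw a large sample from a heavy-tailed (Pareto) distribution and from a bell-curve-like one, then compare how much of the total the top 1% of items accounts for. The distribution parameters below are arbitrary assumptions chosen only to show the contrast.

    # Compare the share of the total held by the top 1% of items in a heavy-tailed
    # sample versus a normally distributed one. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    heavy_tailed = rng.pareto(a=1.2, size=n) + 1         # power-law-like values
    bell_curve   = rng.normal(loc=69, scale=3, size=n)   # height-like values

    def top_share(values, frac=0.01):
        k = max(1, int(len(values) * frac))
        return np.sort(values)[::-1][:k].sum() / values.sum()

    print(f"Top 1% share, Pareto sample: {top_share(heavy_tailed):.0%}")   # typically well over 30%
    print(f"Top 1% share, normal sample: {top_share(bell_curve):.0%}")     # barely above 1%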

A curious feature of power law distributions is that many follow a rule known as Zipf’s law. This idea, proposed by the linguist George Kingsley Zipf in 1935, states that the frequency of an item will be inversely proportional to its rank. That is, the 2nd most popular item will be 1/2 the size of the highest-ranked item, the 3rd most popular will be 1/3 the size, and so on. Let’s look at a table of the largest U.S. cities as an example:

City            Population    Zipf's Law Ratio    Zipf's Predicted Pop.    Error
New York         8,244,910
Los Angeles      3,819,702          1/2                4,122,455             7%
Chicago          2,707,120          1/3                2,748,303             1%
Houston          2,145,146          1/4                2,061,228            -4%
Philadelphia     1,536,471          1/5                1,648,982             7%
Phoenix          1,469,471          1/6                1,374,152            -7%
San Antonio      1,359,758          1/7                1,177,844           -15%
San Diego        1,326,179          1/8                1,030,614           -29%
Dallas           1,223,229          1/9                  916,101           -34%
San Jose           967,487          1/10                 824,491           -17%

While not a perfect match, the sequence does roughly follow the predictions of the law. More recent adjustments to Zipf’s algorithm produce even more accurate results.
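The table above can be reproduced in a few lines; the populations are the figures listed, and the error is computed relative to the Zipf prediction, as in the table.

    # Zipf's law: the k-th largest city is predicted to have 1/k of the largest
    # city's population. Populations are the figures from the table above.
    cities = [
        ("New York", 8_244_910), ("Los Angeles", 3_819_702), ("Chicago", 2_707_120),
        ("Houston", 2_145_146), ("Philadelphia", 1_536_471), ("Phoenix", 1_469_471),
        ("San Antonio", 1_359_758), ("San Diego", 1_326_179), ("Dallas", 1_223_229),
        ("San Jose", 967_487),
    ]

    base = cities[0][1]
    for rank, (name, pop) in enumerate(cities, start=1):
        predicted = base / rank
        error = (predicted - pop) / predicted   # relative to the prediction, as in the table
        print(f"{name:<13} actual {pop:>10,}  predicted {predicted:>10,.0f}  error {error:>5.0%}")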

Why do some phenomena follow a normal distribution while others follow a power law distribution? Phenomena with normal distributions are driven by the following dynamics:

  • Bounding constraints that inhibit growth or change
  • Slow growth across time leading to a limited range of values
  • Events are independent from one another
  • Related to simple measurements and repeatable processes

Phenomena that follow a power law distribution are driven by the following dynamics:

  • Lack of natural bounding constraints to inhibit geometric growth
  • Significant growth over time leading to very large ranges of values
  • Inter-connectivity, dependency or relationships between items (typically described as a network effect)
  • Related to highly dynamic, complex systems

Let’s look again at our stereotypical examples of each distribution type, through the lens of these dynamics. The height of humans is constrained by biological and genetic factors that limit its growth. Consequently, the average height has not changed significantly over time. Over the last 1000 years, it has changed by just a few inches. Each person’s height is essentially an independent and simple measurement.

The size of cities has grown dramatically over the last several centuries. The population of New York in 1723 was 7,248, less than 1/1000 its current size. It could continue growing rapidly by incorporating adjacent land, and building taller structures. Its growth was highly dynamic and complex, based on the countless decisions of millions of people and institutions.

Beyond being a mathematical curiosity, why should the average professional care about power laws? Here are a number of reasons:

  • Application of Statistical Tests – Anytime you are applying a statistical test to a dataset, it’s important to understand the nature of the distribution of data. Many statistical tools will only work correctly on a given distribution type.
  • Diversity of Products – Chris Anderson coined the term “The Long Tail” to describe elements of the new digital economy. He showed how many product offerings historically followed a classic power law distribution. For example, a small number of top albums garnered a large percentage of overall record sales, and a few blockbuster movies dominated the box office. In the traditional economy, limitations of record store shelf space, or the number of theaters, prevented many low-popularity works from attaining adequate distribution. Anderson argues that in a digital world, with unlimited virtual “shelf space”, low-popularity items will play an increasing role, and producers will be able to effectively cater to a wider range of tastes with tailored offerings.
  • Risk Management – A common mistake that many institutions have made is viewing disruptive events as normally distributed phenomena. This has led to an underweighting of the impact of these events. In fact, many of these types of events (e.g. natural disasters, financial crises) are better represented as power law distributions. That is, the impact of a few rare events is dramatically greater than that of the average event. Consequently, the probability of a major disruptive event is much higher. As an example, if one uses a normal distribution to model the performance of the stock market, the infamous crash of 1987 would be a statistical impossibility; the predicted likelihood of such an event would be infinitesimal, eliminating it from planning scenarios. However, that same meltdown is a rare but expected event when the performance of the market is modeled as a power law distribution. (A short sketch illustrating this contrast follows this list.)
  • Power Law Trend – We appear to be moving to a world where power laws describe an increasing number of important phenomena. Factors such as globalization and social media create more interconnectedness and complexity, in turn leading to power law behavior. Being able to recognize power law driven phenomena will allow you to perform sounder analysis of data and make better management decisions.
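Here is the sketch referenced in the risk-management bullet above: the probability of a 1987-sized one-day market drop under a thin-tailed (normal) model versus a heavy-tailed (Pareto-tail) model. The daily volatility and the tail parameters are assumptions chosen purely to illustrate the contrast, not a calibrated market model.

    # How likely is a ~20% one-day drop under a thin-tailed vs. a heavy-tailed model?
    from statistics import NormalDist

    daily_sd = 0.01    # assumed ~1% daily standard deviation of returns
    crash    = 0.20    # a 20% one-day drop, roughly the October 1987 crash

    # Thin-tailed model: a drop of 20 standard deviations or more.
    p_normal = NormalDist(mu=0, sigma=daily_sd).cdf(-crash)

    # Heavy-tailed model: Pareto tail P(drop > x) = (x_min / x) ** alpha, assumed parameters.
    x_min, alpha = 0.01, 3.0
    p_heavy = (x_min / crash) ** alpha

    print(f"Normal model:      P(daily drop >= 20%) ~ {p_normal:.1e}")   # effectively zero
    print(f"Pareto-tail model: P(daily drop >= 20%) ~ {p_heavy:.1e}")    # about 1e-4 per day

Under the assumed heavy-tailed model, a drop of that size works out to roughly a once-in-a-few-decades event rather than a statistical impossibility.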

Weak Evidence – Folklore, Pseudoscience and Superstition

Last week in this blog, I covered the topic of scientific research. In that post, I described the most common forms of research, as well as how they differ in the strength of evidence they provide. I thought it was important for lay people to understand that not all research is created equal. The fact that a researcher has advanced degrees, practices at a prestigious institution or has won a Nobel Prize is no guarantee that his findings are accurate and relevant. Understanding the value of different forms of research therefore makes you a more informed professional, with an improved ability to critically assess scientific findings.

After I wrote that post, I realized that there is a larger and more insidious threat to the acquisition of knowledge. While I was focused on the reliability of scientific research, many professionals have belief systems and knowledge acquisition methods that are decidedly non-scientific. This post will assess those methods and compare them to scientific study.
