Professionals are constantly looking to grow intellectually and add new ideas to their mental toolkit. One way to do this is by reading current research pertinent to their field. The research could be an MBA-style study of the practices of successful firms or a laboratory-based psychology experiment testing the effectiveness of different personnel management techniques. In either case, professionals tend to view the research results with an unfortunate lack of critical analysis. First, the studies are frequently assumed to be accurate, authoritative and relevant. “They were performed by highly credentialed researchers, frequently from esteemed institutions. How could they be wrong?” This line of thought is especially prevalent when the research validates an idea consistent with the reader’s worldview. A second problem is that most knowledge workers who don’t engage in formal research have little understanding of the different types of scientific studies. This post will examine those different types of research and explain how they differ in the quality of evidence they provide.
Before examining the many types of scientific research, let’s look at the historical basis for formalizing research practices. The 17th-century English philosopher Sir Francis Bacon is credited with the early establishment of formal methods for experimentation. Bacon understood that existing techniques of knowledge acquisition were unreliable, resting on personal observation, simple conjecture or divine wisdom. He recognized that people were prone to particular ways of thinking that limited their ability to objectively assess evidence and obtain or verify knowledge. Bacon labeled these inherent limitations the four Idols of the Mind; they anticipate the modern notion of cognitive bias. To counter the Idols, Bacon proposed a set of formal practices that we now know as the scientific method. That method is a cornerstone of all modern scientific research.
It’s important to understand that the studies we read about in academic journals or mainstream media publications vary dramatically in methodology and quality of evidence. The most fundamental distinguishing characteristic of scientific research is whether a study is experimental or observational. In experimental studies, researchers actively control the variables of the study, establishing both experimental and control groups of subjects. A classic example would be a study to determine the effect of background noise on cognitive skills. One group of subjects would be assigned to a noisy environment while attempting to solve puzzles. A control group would attempt to solve the same puzzles without any background noise. Experimental studies are considered a high-quality and reliable form of research. The “gold standard” of research is the double-blind, randomized controlled trial. In this type of experiment, subjects are randomly divided between experimental and control groups. Neither the subjects nor the experimenters know the group to which each subject has been assigned.
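As a minimal sketch, the random assignment at the heart of such a trial can be expressed in a few lines of Python (the subject labels and group names here are hypothetical, not part of any particular study design):

```python
import random

def assign_groups(subjects, seed=None):
    """Randomly split subjects into experimental and control groups.

    Randomization balances unmeasured differences (innate skill,
    fatigue, etc.) across the two groups on average.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)          # random order removes selection effects
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

groups = assign_groups([f"subject_{i}" for i in range(20)], seed=42)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

In a double-blind design, the mapping produced here would be held by a third party, so that neither subjects nor experimenters see it until the data are collected.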
Unlike experimental studies, researchers don’t control the variables of an observational study. Instead, a population is simply observed and measured. A quick example highlights the difference. In our experimental example above, the study measured the effect of background noise on problem solving. In an analogous observational study, one might measure the noise level in two different office locations for a period of time. At the end of the study, you would compare the number of sick days taken by employees at each location. This type of observational study is called a cohort study, following subjects forward through time. It looks at causes (i.e. noise) first and then looks at their potential effects (i.e. attendance).
An opposite form of observational study is known as a case-control study. In this type of study you look first at an effect and then work backwards to establish a cause that occurred in the past. For example, you have noticed an increase in mental health related issues among your employees. You are concerned that it may relate to changing shift times (e.g. constant movement between day and night shifts). You review time sheet data for the past 5 years to see whether the employees with mental health issues had more variance in their scheduled shifts than employees without them. In a case-control study you look at an effect (i.e. mental health issues) and work backwards to find a cause (i.e. shift variability).
A third form of observational study, known as a cross-sectional study, looks at potential cause and effect at a single point in time. As an example, you are having an employee health day, with free wellness checks. You want to see if tenure is a factor in participation. You compare the length of service between participants and non-participants. In this case you are taking a snapshot that looks at the potential cause (i.e. current tenure) and effect (i.e. participation) at a point in time.
The three types of observational studies described above are considered analytical studies. They are considered the next most reliable form of research after experiments. There is another form of observational study known as a descriptive study. Descriptive observational studies are considered lower quality than analytical studies. A classic example of descriptive research is a case study. For example, a business journal might look at several companies that have implemented new employee relations practices and have improved their turnover rates. The case study would describe the new practices and attempt to draw conclusions about their effectiveness in reducing turnover.
The quality of evidence of any study is influenced not only by its type, but also by a number of other factors:
- Causation vs. Correlation – This is one of the most commonly misunderstood concepts in statistical research. Correlation simply implies that an effect is associated with a factor. That is, they are seen together. It doesn’t automatically imply that the effect resulted from that factor. Let’s go back to our study of the relationship between background noise and solving puzzles. The study may show a relationship: the less background noise, the greater the problem-solving ability. But there may be other factors at work. The subjects in the “silent” group may be better innate problem solvers. It may be that the group subjected to noise was tested later in the day and was tired. Studies other than well-designed randomized controlled trials have difficulty controlling for all factors. Therefore, they are not as effective in isolating causal factors.
- Significance of results – In any study, there is a possibility that simple random variation can influence results. Let’s assume we are doing an experiment to see if a coin is “fair” (i.e., equal probability of heads/tails) or if it has been “loaded”. We flip the coin 10 times and get 8 heads. It would be premature to conclude that the coin is loaded: there is a greater than 5% chance that a fair coin would land on heads at least 8 times out of 10. The general standard in research is a significance level of less than 5%. That is, results at least as extreme as those observed would occur less than 5% of the time by chance alone.
- Magnitude of the effect – Sometimes a study shows a statistically significant result, but the size of the effect is only marginal. Consider our study of noise levels and sick days. It’s possible we could see a statistically significant result confirming that our locations with higher noise levels have greater frequencies of employee sick days. But let’s say the result is a 10% increase. If the average employee in a “quiet” location has 10 sick days a year, an employee in a “noisy” location can expect to have 11. Perhaps the firm knows that it can fix the noisy locations by replacing all of the cubicle partitions and ceiling tiles with special sound absorbing material. The project to do that would run $2 million. The firm might decide that the extra sick day per employee is an acceptable tradeoff, and forgo the improvement project.
- Bias and Conflict of Interest – The quality of evidence of a study can be severely compromised by a researcher’s bias or ethical lapses. Sometimes researchers become emotionally attached to a particular theory. This can cause them to subconsciously design and execute studies that lack methodological soundness. Another common driver of bias is the quest for tenure. Many researchers need to have their work published in journals to qualify for tenure. This can again lead to methodological shortcuts that achieve results that are novel, and therefore of interest to academic journals. An even more insidious problem occurs when research is sponsored by an interested party. This can influence a researcher to work toward the results the sponsor expects. For a more detailed treatment of biases and conflict in research, see this blog post.
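To make the correlation-versus-causation point above concrete, here is a small Python simulation. It is an illustration under stated assumptions, not a real study: noise is given no effect at all, and only fatigue (the confounding factor, since the noisy group is tested later in the day) lowers scores. The noisy group still looks worse:

```python
import random

rng = random.Random(0)

def puzzle_score(fatigued):
    """Simulated puzzle score: noise has NO effect here, only fatigue does."""
    base = rng.gauss(70, 5)          # baseline ability, mean 70, sd 5
    return base - (15 if fatigued else 0)

# The "noisy" group happens to be tested later in the day, hence fatigued.
quiet_scores = [puzzle_score(fatigued=False) for _ in range(200)]
noisy_scores = [puzzle_score(fatigued=True) for _ in range(200)]

avg = lambda xs: sum(xs) / len(xs)
# Noise and low scores are correlated, yet noise itself did nothing.
print(avg(quiet_scores) > avg(noisy_scores))  # True
```

A naive reading of the resulting data would blame the noise; only randomizing who gets tested when would break the link to fatigue.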
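The coin-flip arithmetic in the significance bullet above can be checked directly with the binomial distribution, using only the Python standard library:

```python
from math import comb

# Probability that a fair coin (p = 0.5) shows at least 8 heads in 10 flips.
n, threshold = 10, 8
p_at_least_8 = sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n
print(round(p_at_least_8, 4))  # 0.0547
```

At roughly 5.5%, the result sits just above the conventional 5% cutoff, which is why 8 heads in 10 flips is not enough evidence to declare the coin loaded.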
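The sick-day tradeoff in the magnitude bullet above can be worked through as back-of-the-envelope arithmetic. Only the 10 baseline sick days, the 10% increase, and the $2 million project cost come from the example; the headcount and per-absence cost are assumptions for illustration:

```python
# Back-of-the-envelope cost/benefit using the article's numbers.
employees = 500           # ASSUMED headcount at the noisy locations
baseline_sick_days = 10   # per employee per year at "quiet" locations
effect = 0.10             # the 10% increase found by the study
cost_per_sick_day = 400   # ASSUMED fully loaded cost of one absence, USD
project_cost = 2_000_000  # quoted cost of the soundproofing project

extra_days = employees * baseline_sick_days * effect  # extra sick days/year
annual_loss = extra_days * cost_per_sick_day          # dollars lost per year
payback_years = project_cost / annual_loss            # years to break even
print(extra_days, annual_loss, payback_years)  # 500.0 200000.0 10.0
```

Under these assumed figures the project takes a decade to pay for itself, which illustrates why a statistically significant but small effect may not justify action.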
You may be asking, if randomized controlled trials produce the strongest evidence, why do other research types exist? Unfortunately, it is not always possible, or practical, to conduct a rigorous, methodologically sound experimental study:
- There may be ethical reasons that an experiment would be inappropriate. For example, we would not expose subjects to dangerous conditions (e.g. radiation) and check the results. We might, however, do a cohort study to follow people who live near microwave towers to see if they developed cancer.
- Taking our microwave tower study in reverse, we might observe a number of new rare cases of cancer. An appropriate study would be a case-control that looks backward to see if these people were more likely to live near a tower than folks who were healthy.
- Sometimes the cost, complexity or required time-frames make experiments impractical.
- Experiments have difficulty capturing large, diverse populations. It is sometimes impractical to get an appropriately representative subject group to participate in a lab based experiment.
Well designed scientific studies are an important source of enhanced knowledge. They are certainly a higher quality source of evidence than pseudoscience, folklore or simple expert opinion. However, they are not an unimpeachable source of truth. The next time you are confronted with an interesting study finding, ask yourself the following questions:
- What type of study was involved?
- What was the significance of the study?
- What was the magnitude of the effect?
- Who conducted/sponsored the research? Was there a possible conflict of interest?
- How profound were the findings? Novel, groundbreaking results should be viewed more skeptically.
- Has the study been replicated by other independent researchers?
Answering these questions will give you a better sense of how strongly you should rely on this research to guide your professional behavior.