“We need science. We need science. We need science.”
This was the messaging repeated by speakers at California’s fifth annual Emerald Conference, an event focused on discussing the latest in cannabis production and analytics. This mantra is the response to two fundamental problems facing the cannabis industry.
The first of these problems is that the mechanisms by which this plant affects humans remain poorly understood. There is some basic understanding of the human endocannabinoid system drawn from anecdotal and observational reports, but little in the way of rigorous clinical trials.
The second challenge to the science of cannabis is that it is incredibly difficult to legally conduct a scientific study on cannabis in the United States. Because cannabis is federally classified as a Schedule I drug, most research facilities cannot study it.
In an effort to drive the state of cannabis knowledge forward, researchers instead seek out partnerships with licensed cannabis producers that might yield some mutually beneficial results. In concept, producers are often very receptive to this type of partnership — it is seen as a potential opportunity to be on the forefront of the hottest new technology or production strategy that could give them an edge in an extremely competitive market.
In practice, partnerships between scientific researchers and cannabis producers are very challenging, as the two parties tend to have different objectives. Scientific studies generally demand quite a lot of time, space, and meticulous management of the experiment. If a study is exploring various doses of a given treatment, this can often result in extremely stressed, low-yielding plants, to the point of crop loss. While this may have been part of the experimental design, producers understandably aren’t enthusiastic about a research partnership where they lose part of their production and hurt their bottom line.
What Makes a Study “Scientific”?
Put simply, a scientific study produces results that have a quantifiable degree of certainty to them. For instance, if a study finds that one of its experimental treatments was “statistically significant,” this usually means there was less than a 5 per cent probability that a result as large as the one observed would have occurred by random chance alone if the treatment had no real effect. This 95 per cent confidence level is the most common threshold for considering something statistically significant (though there are exceptions where the threshold must be much stricter), and the threshold used in a study is always clearly stated in a scientific report.
To achieve this degree of certainty, studies employ randomization and replication to account for potentially confounding factors that could yield misleading results. Proper randomization and replication are the essence of experimental design, and they are what separate a scientific experiment from an anecdotal comparison.
Designing a Growth Chamber Experiment
To understand the types of confounding factors that could generate misleading results in an experiment, let’s consider an example of an experiment conducted in a walk-in plant growth chamber, where we want to compare the influence of two different light spectra on plant yield. One light is yellow in appearance, and the other appears cyan. Inside our chamber, we have a wire rack on the left and a wire rack on the right. Both wire racks have four tiers that we can grow plants on. Designed as a highly controlled environment specifically for plant production research, the chamber’s interior is theoretically homogeneous. Multiple sensors throughout the chamber constantly monitor the environment — temperature, relative humidity, vapor pressure deficit, air speed, and CO2 concentration are all accounted for. The plants being grown in the chamber are genetically very similar, having been cloned from the same mother plant. All the plants are anchored in the same inert rooting substrate and fed from the same nutrient reservoir, which is constantly monitored for pH, electrical conductivity, and dissolved oxygen content. As we proceed with our study, comparing the influence of two different light qualities on yield in this environment, there should seemingly be no other factor influencing our result.
In practice, it is incredibly difficult to achieve a totally uniform environment, and it is generally accepted that no environment ever achieves this theoretical perfect uniformity. More often, there are many tiny inconsistencies in the environment that have the potential to add up to a problem for your study. In this case, perhaps the positioning of the CO2 probes provided an excellent reading of average CO2 within the chamber but didn’t capture the fact that the concentration of CO2 was greater closer to the floor than the ceiling.
In addition, maybe the fertigation lines that feed the left side of the chamber developed a level of algal contamination high enough that the algae were consuming many of the nutrients in the fertigation solution before it reached the plants being studied.
In this hypothetical situation, the plants towards the bottom of the chamber and on the right side of the chamber would be predisposed to achieving greater yields; they are developing in a comparatively CO2- and nutrient-rich environment, despite the efforts made to homogenize the research chamber.
Proper experimental design assumes that random, often mysterious confounding factors such as these are inevitable, and so randomization and replication are used to account for the influence these factors may have on your study.
Returning to our growth chamber example, we can try to identify factors that we don’t expect to meaningfully impact our study, while acknowledging the possibility that there may be something about those factors we don’t understand. Remember, at the start of this study, we don’t know that there’s going to be a problem with algae in the fertigation lines on the left side of the chamber, nor that our CO2 concentrations are uneven. Instead, we note that our experimental plants are spread throughout the chamber. Some plants are up high, some are down low, some are on the left, and some are on the right. From this, we identify two potentially confounding factors: “Height” and “Side.” Since we’re growing plants on four tiers, the factor known as “Height” has four levels: Top, Mid-High, Mid-Low, and Bottom. Since we’ve got a set of shelves on the left and a set of shelves on the right, the factor known as “Side” has two levels: Left and Right. It could easily be argued that there should be another factor called “Depth,” accounting for variability from the front to the back of the chamber. For the sake of brevity, we’ll pretend that Depth doesn’t exist in this example.
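As a quick sketch of this factor-and-level bookkeeping, here is how the chamber’s eight growing positions fall out of the two factors we’ve identified (a Python illustration using the labels from this article, not part of any real study):

```python
from itertools import product

# Potentially confounding factors and their levels
# (Depth omitted, as in the text).
factors = {
    "Height": ["Top", "Mid-High", "Mid-Low", "Bottom"],
    "Side": ["Left", "Right"],
}

# Every combination of levels is one growing position in the chamber.
positions = list(product(*factors.values()))
print(len(positions), "positions:")
for height, side in positions:
    print(f"  {height:8s} / {side}")
```

Four Height levels crossed with two Side levels give eight positions, which is why our treatment will need to be spread across all of them.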
Having identified what we think are possible confounding factors in our production environment, we can determine how best to introduce the factor we’re actually trying to study: “Light Quality.” Indeed, for all intents and purposes, our “treatment” is just another influencing factor. In this case, that factor is called “Light Quality,” and it has two levels: “Yellow” and “Cyan.” Our objective here is to systematically intermingle our factor of interest with the other two factors we’ve identified as potentially at play in our chamber.
For example, we want to make sure that in the “bottom” level of the “Height” factor, we’ve got some “yellow” Light Quality and some “cyan” Light Quality. The same rationale applies to intermingling Light Quality amongst the Side factor. We should end up with a layout that looks something like this:
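A layout like this can also be generated programmatically. Here is a minimal Python sketch (the seed and labels are arbitrary choices for illustration) that randomizes Light Quality while keeping every Height tier and every Side balanced between the two treatments:

```python
import random

random.seed(7)  # fixed seed so this sketch is reproducible

heights = ["Top", "Mid-High", "Mid-Low", "Bottom"]

# Within the Left side, two tiers get Yellow and two get Cyan, in
# random order; the Right side gets the opposite treatment on each
# tier. Every Height level then holds one Yellow and one Cyan plant
# group, and each Side holds two of each.
left_lights = ["Yellow", "Yellow", "Cyan", "Cyan"]
random.shuffle(left_lights)

layout = {}
for height, left_light in zip(heights, left_lights):
    layout[(height, "Left")] = left_light
    layout[(height, "Right")] = "Cyan" if left_light == "Yellow" else "Yellow"

for height in heights:
    print(f"{height:8s}  Left: {layout[(height, 'Left')]:6s}  "
          f"Right: {layout[(height, 'Right')]}")
```

The constraint matters more than the particular arrangement: any layout in which no level of Height or Side is stuck with only one Light Quality would serve the same purpose.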
By systematically randomizing the replicates of our factor of interest, we can account for whatever influence our confounding factors may have on our result.
At a conceptual level, here’s how this works: when analyzing yield data from this experiment, each factor would initially be analyzed separately. Analyzing the influence of “Height” in this study, all the data for each level of Height would be averaged and then compared to each other. Since each level of Height has an equal number of Cyan and Yellow Light Quality plants, the Light Quality factor doesn’t make any difference in our Height factor analysis; any variability introduced by Light Quality is accounted for and nullified in this comparison.
Next, the factor of “Side” can be analyzed. All the yield data from plants on the Left Side of the chamber are averaged, as are the data from the Right Side of the chamber. These averages are compared to determine whether there was a significant difference in plant yield based on which Side of the chamber the plants were grown on. Again, the influence of Light Quality is nullified by the fact that both Sides of the chamber are balanced in Light Quality levels.
Finally, Light Quality can be analyzed. Yield data from all the Yellow Light Quality plants are averaged, as are the yield data from the Cyan Light Quality plants. Comparing these averages, we can identify if the Light Quality factor had a significant influence on yield.
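The factor-by-factor averaging described above can be sketched in a few lines of Python. The yield numbers below are invented purely for illustration:

```python
from statistics import mean

# Hypothetical yield data (grams per plant), keyed by
# (Height, Side, Light Quality). One balanced layout is assumed.
yields = {
    ("Top", "Left", "Yellow"): 92.1,  ("Top", "Right", "Cyan"): 97.4,
    ("Mid-High", "Left", "Cyan"): 95.0, ("Mid-High", "Right", "Yellow"): 99.2,
    ("Mid-Low", "Left", "Yellow"): 101.3, ("Mid-Low", "Right", "Cyan"): 104.8,
    ("Bottom", "Left", "Cyan"): 103.9, ("Bottom", "Right", "Yellow"): 108.0,
}

def level_means(factor_index):
    """Average yield for each level of one factor
    (0 = Height, 1 = Side, 2 = Light Quality)."""
    levels = {}
    for key, value in yields.items():
        levels.setdefault(key[factor_index], []).append(value)
    return {level: mean(vals) for level, vals in levels.items()}

print("Height means:", level_means(0))
print("Side means:  ", level_means(1))
print("Light means: ", level_means(2))
```

Because every level of Height and Side contains both Light Quality treatments, averaging within any one factor blends the other factors equally into each level’s mean, which is exactly the nullifying effect described above.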
In the spirit of keeping this article about the principles of experimental design and not the arithmetic, I’ve not discussed here exactly how to tell if averages that you compare within a factor (for example, the average yields between the Left Side plants and Right Side plants) are significantly different or not. The short answer is that it has to do with the variability surrounding each of those averages, and how much the variability between the two averages overlaps with each other. To learn the details, you’ll want to study up on standard error, standard deviation, and p values.
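For readers who want one concrete taste of how such a comparison can be made, a permutation test is a simple, assumption-light approach: shuffle the group labels many times and count how often a difference at least as large as the observed one appears by chance. The numbers here are hypothetical, and this is only one of several valid methods:

```python
import random

random.seed(1)  # fixed seed so this sketch is reproducible

left = [92.1, 95.0, 101.3, 103.9]   # hypothetical Left Side yields (g)
right = [97.4, 99.2, 104.8, 108.0]  # hypothetical Right Side yields (g)

observed = abs(sum(right) / len(right) - sum(left) / len(left))

# Repeatedly scramble which values are labeled "Left" vs "Right"
# and record how often the scrambled difference matches or beats
# the observed one.
pooled = left + right
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:4], pooled[4:]
    if abs(sum(b) / 4 - sum(a) / 4) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.3f} g, p \u2248 {p_value:.3f}")
```

A small p value would suggest the Left/Right difference is unlikely to be due to chance alone; with only four replicates per side, the test has little power, which is one reason real studies prize replication.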
In designing our chamber study this way, we’ve accounted for the potential of confounding factors to distort the effects of our factor of interest. It may seem like overkill, but by accounting for as many potentially confounding factors as we can, we achieve a level of credibility and certainty in our results that would otherwise be unattainable.
This article is only a brief introduction to strategies for proper experimental design. The takeaway message for producers is this: if you’d like to execute a scientific study, be mindful of all the factors that could influence your results. Do this, and yours could be the company at the forefront of the cannabis research that everyone else wishes existed.