An interesting article entitled "University of Twitter? Scientists give impromptu lecture critiquing nutrition research" appeared online a couple of weeks ago (CBC News, May 5, 2018). An oncologist and medical policy researcher from Oregon, fed up with the sensationalized coverage of a recent study on wine consumption gleaned from published epidemiological data, excoriated the study, the media’s coverage of the study, and the field of nutritional epidemiology overall, in a series of tweets he called a “tweetorial.”
Among other things, the physician discussed at length his opinion about how flawed epidemiological research can be. He went on to say, “it is a field with fundamental structural problems that make drawing conclusions from it incredibly unreliable.” He was particularly harsh on nutritional epidemiology, the area of research that often leads to the contradictory headlines that tell us vitamin E cures cancer one day, and that it causes cancer the next.
But was his criticism justified? Is nutritional epidemiology “a scandal,” as a Stanford professor quoted in the article said? Or is it a misunderstood, and maybe a bit overused form of research that often generates public perceptions about the foods we eat (and don’t eat), not to mention public policy and research funding decisions? My inclination is to lean toward the latter; I certainly don’t think nutritional epidemiological research is scandalous, though I do feel that it is often misused as an arbiter of our beliefs about the foods and beverages we consume.
The fact is, no matter how much care nutritional epidemiologists take to ensure accuracy of their findings, and work and re-work to validate the measuring instruments (mainly questionnaires) they use to collect information, epidemiological research will always be subject to bias (largely because subjects are not randomized into groups), and confounding variables (mainly because human beings are complex creatures).
One of the hallmark tools of the nutritional epidemiologist is the diet recall or diet record, a questionnaire designed to measure what a person habitually eats and drinks. And while researchers have become much savvier about how they ask questions of their subjects and how they generate their questionnaires, one doesn’t need a PhD in statistics to understand the possible pitfalls of placing a subject in a specific food pattern group for the duration of a study (often 10 to 20 years or longer) based on their response to what they’ve eaten over just the past three days. But that’s generally how it’s done. The best epidemiological databases do update diet records periodically, but even then, only every few years or so.
So, if nutritional epidemiology is prone to so many miscalculations that can produce misreads of what distinguishes “healthy” from “unhealthy” eating, why do nutrition scientists rely on it so heavily? Much has to do with the fact that randomized controlled clinical trials, long considered “the gold standard” of research designed to identify cause-and-effect, can be very expensive, very time-consuming and, in some instances, unethical to perform on human subjects. Further, as I’ve pointed out in a previous blog post, even the most well-designed clinical trial is subject to error because, no matter how hard we try, every aspect of the human condition cannot be replicated in the same person time after time.
How, then, should we view this sort of diet and health information that seems to bombard us on a daily basis? In a word: cautiously. Be particularly wary of headlines that sound too enthusiastic (“New Study Indicates Vitamin Z Cures Cancer!”). And, as best you can, try to ascertain how much research actually exists on a given issue, and how much of the existing research is epidemiologic in nature.
Generally speaking, the hierarchy of nutrition research validity suggests that randomized, double-blind, placebo-controlled clinical trials are best, prospective cohort epidemiological studies (considered by most to be the “gold standard” of epidemiological research) a somewhat distant surrogate, and animal trials even less desirable for studying the human condition. There are exceptions, of course. A randomized clinical trial with a very small number of subjects may be less enlightening than a well-done animal trial. So, while most of us don’t have the time or inclination to delve that deeply into the amount of research that exists on a given issue, keeping these general tenets in mind can at least help one become a more discriminating consumer of nutrition science.
Nutritional epidemiologic research certainly has its place in the continuum of nutrition studies. It helps to generate hypotheses and, done well, can play an important role in helping to identify diet-disease relationships. But epidemiologic studies are not designed to identify cause-and-effect and, on their own, should probably not be used to generate policy or cement the public’s perspective on any food or nutrient. It’s a tool in the toolbox, but like any tool, it should be handled with care.
Mitch Kanter, Ph.D., is a consultant at FoodMinds