How to tell if a scientific claim is valid or not

Individuals or companies often approach me to endorse their health or fitness products. They usually quote scientific studies to back up their product's claims. Most of the time, after questioning them, I discover that they have no real idea what a "scientific study" is. Apparently, most of them just parrot the line "it is scientifically proven".

If the individuals selling these products are in the dark about scientific claims, how much more so the average consumer, who is confronted daily with advertisements and news media reports about "scientifically proven" health and fitness products or services.

Here is a guide to making sense of scientific claims. Hopefully, it will make you a more educated and less gullible consumer.

Different types of scientific studies.
According to a special report in the September 1998 Tufts University Health & Nutrition Newsletter, there are three major types of human research - clinical trials, epidemiologic studies, and population-based intervention trials - and each has its own pros and cons.

Clinical trials: A clinical trial is an experiment conducted in a controlled setting, often a hospital, where researchers give a group of people treatment - such as a supplement, drug, or diet - and then measure their response. Clinical trials are believed to yield very accurate results that can help establish cause-and-effect relationships between various substances or lifestyle activities and specific health outcomes. However, they tend to be conducted on restricted groups of people that include, for instance, just one age group, sex, or race. That allows the scientists to keep the study environment more "air-tight" so that variations within the population being studied don't confound the results. However, it means the results are not necessarily applicable to all people. Clinical trials often need to be repeated in different groups with different genetic make-ups and lifestyles before a recommendation for the general public can reliably be made.

Epidemiologic studies: Epidemiologic studies look at much larger groups of people than clinical trials - up to tens of thousands of subjects. These are not experiments in which researchers control a certain aspect of the subjects' lives; rather, researchers observe free-living populations and search for relationships between lifestyle or genetic factors and the risk for chronic diseases. Harvard University's Nurses' Health Study, which looks at the lifestyles of some 90,000 women, is an example of epidemiologic research. Because epidemiologic research is generally conducted on large groups of people, the results tend to be more applicable to the population at large. However, epidemiology virtually never proves cause and effect; it can only make associations, on which other researchers might then decide to base a clinical trial to test whether "X" lifestyle actually leads to "Y" condition. Granted, the more people in the study and the more tightly it is controlled for various lifestyle factors, the higher the chance that there really is something to any association found. But still, one can never automatically assume that an association proves a cause.

Population-based intervention trial: Sort of a cross between an epidemiologic study and a clinical trial, a population-based intervention trial is a project in which large numbers of people live freely rather than in a controlled setting but are given either a treatment or a placebo and then observed to see whether a specific outcome occurs. A study of 29,000 male Finnish smokers that was released a few years ago, in which those who took beta-carotene turned out to be more likely to develop lung cancer than those who didn't, is an example of an intervention trial. The strength of such studies is that, like epidemiologic research, they can observe thousands of people. The drawback is that they cannot be as well controlled as clinical trials. Thus, it may not always be the treatment that's having the effect (or the full effect) but something in the subjects' lifestyles that the scientists didn't account for.

Mini-glossary of research terms.
This mini-glossary of basic research terms comes from that same Tufts Newsletter special report.

Placebo-controlled: If a clinical trial or population-based intervention trial is placebo-controlled, that means a group similar to the treatment group is given a mock pill, or placebo. The placebo group's response allows researchers to tell whether the actual treatment is having an effect or whether the subjects are simply responding to being treated at all. Sometimes just being given a "sugar pill" provides a psychological boost that yields beneficial results.

Double-blind: A double-blind trial is one in which neither the study participants nor the researchers heading the study know who is getting the real treatment and who is getting the placebo until the experiment is over. As a result, the subjects can't knowingly alter their lifestyles during the trial to make the treatment more or less effective, and the researchers are prevented from reading into findings in order to come up with "expected" results.

Prospective study: In a prospective epidemiologic study, scientists look at a group of people at a specific point (or points) in time and then wait to see who gets what diseases before making associations between lifestyle and risk of illness.

Retrospective study: In a retrospective study, researchers compare people with a disease or other condition to a similar group of people who aren't affected and then look backwards in time to see what differences in their lifestyles might have contributed to the different health outcomes. Some retrospective studies are designed better than others. For example, in one retrospective study that looked at pregnant women's consumption of hot dogs, mothers with teenage children were asked to recall what they had eaten as many as 14 years earlier. (Can you remember what you ate last week?)

Tips for decoding "science speak".
In the August 1998 issue of AsiaFit Magazine, writer Matt O'Neill gives tips for decoding "science speak". These words and phrases appear frequently in media reports about the latest research and on product brochures.

  • "May" does not mean "will". You really don't know the odds here.
  • "Contributes to" and "is linked to" do not mean "causes". Other factors may be more important.
  • Although scientific studies build on our current knowledge, one study taken alone seldom proves anything. Subsequent studies may say exactly the opposite.
  • A genuine "breakthrough" (the discovery of penicillin, for example) is rare, but the term is used far too often.
  • "Double the risk" may or may not be meaningful. If your risk was 1 in 1,000,000 it may now only be 1 in 500,000.
  • "Significant results" pop up all the time. A result is said to be statistically significant when the association between the factors has been found to be greater than what might occur at random. It may not mean "major" or "important".
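The last two points above come down to simple arithmetic. As a minimal sketch (the function name and figures are illustrative, not from the article), here is how "double the risk" translates into absolute terms:

```python
# Sketch: why "double the risk" can be nearly meaningless in absolute terms.

def absolute_risk_change(baseline_risk, relative_risk):
    """Return (new_risk, absolute_increase) for a baseline probability
    scaled by a relative risk multiplier."""
    new_risk = baseline_risk * relative_risk
    return new_risk, new_risk - baseline_risk

# "Double the risk" starting from 1 in 1,000,000:
new, increase = absolute_risk_change(1 / 1_000_000, 2)
print(new)       # 2e-06, i.e. 1 in 500,000
print(increase)  # the absolute increase is still only 1 in 1,000,000
```

The headline ("risk doubled!") and the absolute change (one extra case per million) describe the same numbers, which is why the article warns that a relative figure alone tells you little.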

How to evaluate research.
According to the January 1998 IDEA Personal Trainer Magazine, you need to ask the following questions before jumping to conclusions about scientific claims.

  • Has the study been published in a reputable scientific journal? This certainly lends the finding more credibility. However, all studies have their limits, so be sure to keep asking questions even if you find the research in a reliable source.
  • Who were the researchers? Were they hired by someone with a financial interest in the study's results? This doesn't prove the findings were false, but it does bring them into question.
  • What limitations were inherent in the type of study? For example, studies that examine the health of a particular population without subjecting the members of that population to a controlled environment can only suggest relationships. What holds true for people in Bethel, Alaska, may not necessarily hold true for people in Bangkok, Thailand, because these two populations consume different diets and contend with completely diverse environmental factors.
  • Who (or what) were the study subjects? Can the study's results be generalized to other groups? Research conducted on rats, for example, may not translate readily to human beings. A study that seeks to define the best type of training for an elite athlete cannot be generalized to suggest a program for a weekend exerciser.
  • How do the results fit in with the body of research on the subject? This is a "biggie". Do the results add weight to what is already known? Or do they contradict other studies, thereby suggesting that further research needs to be done?
  • Has the author omitted important points in the background section of the study? Do the study subjects smoke? Do they have a history of disease? Are they overweight? How often do they exercise? All these factors can influence the results of the study.
  • What size was the study group? Obviously, a study of several hundred people should carry more weight than a study of five.
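The point about study size can be made concrete with a standard statistical formula: the uncertainty (standard error) of an observed proportion shrinks as the group grows. A minimal sketch, with illustrative numbers not drawn from any study cited here:

```python
# Sketch: why a study of five people carries less weight than one of hundreds.
import math

def standard_error(p, n):
    """Standard error of a proportion p observed in a sample of size n."""
    return math.sqrt(p * (1 - p) / n)

# Suppose half the subjects respond to a treatment (p = 0.5):
print(round(standard_error(0.5, 5), 3))    # ~0.224 -> estimates swing wildly
print(round(standard_error(0.5, 300), 3))  # ~0.029 -> far more stable
```

With only five subjects, the observed response rate can easily be off by 20 percentage points or more; with several hundred, it settles within a few points of the true rate.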
