Authors aim to improve obesity research, confidence in outcomes
The Obesity Society
University of Alabama at Birmingham
For Immediate Release:
March 30, 2016
SILVER SPRING, MD – Exposing common statistical errors and bias in obesity research to improve the reliability of future research is the aim of the authors of three new papers and a commentary published today in the April issue of Obesity, the scientific journal of The Obesity Society. The researchers authoring this special statistical issue of Obesity identify common scientific and statistical errors, challenge assumptions about weight loss, and call for increased application of control arms in obesity intervention studies. Their findings have implications for researchers and journals, media reporting on the research, and people trying to make informed decisions about weight-loss treatments.
While statistical errors and bias occur in all fields of scientific research, there is increased scrutiny on research related to obesity, nutrition, and weight loss. There are many reasons for this attention, including the issue’s importance to overall health and wellness, and significant media coverage on weight-loss methods. Weight-loss studies built on inaccurate or poorly constructed statistical analysis can lead to misleading news and flashy headlines, which can cause people to explore obscure, ineffective, and even dangerous measures to lose weight.
“People make weight-loss treatment decisions based on scientific findings -- whether they’ve met with a healthcare professional or just read about a new breakthrough in a magazine -- so the science needs to be rigorous and trustworthy,” said Diana Thomas, PhD, a spokesperson for The Obesity Society and a professor at Montclair State University. “These papers can help inform and improve obesity research so that future studies meet a high standard of science and trust.”
In an effort to recognize and avoid statistical errors in obesity studies, researchers led by Brandon George, PhD, statistician at the University of Alabama at Birmingham School of Public Health, identified 10 oft-repeated errors in study design, analysis, interpretation, and reporting. The three most notable errors identified by the researchers are: errors related to tests of pre-post differences between groups, inappropriate design or analysis of cluster randomized trials, and calculation errors in meta-analyses. Common issues in these three areas are highlighted in the following examples.
- When testing for an intervention’s effect with a treatment group and a control group, the appropriate analysis should look at the “difference of differences” between groups; it should not compare the nominal significance of within-group changes versus baseline to make inferences about between-group differences over time.
- Cluster randomized trials, such as a school-based intervention, need to be identified as such and analyzed using a method that properly accounts for within-cluster correlation between subjects.
- Two common sticking points in meta-analyses are incomplete or non-standard reporting in the original papers and confusion over which variance estimates to use in which context. Both issues frequently lead to miscalculation of effect sizes or their variances.
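The first of these errors can be illustrated with a small simulation. The sketch below uses hypothetical, randomly generated pre/post weights (not data from any of the papers) to show the difference between the misleading within-group approach and the appropriate between-group "difference of differences" comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical simulated pre/post weights (kg) for two groups of 30.
n = 30
treat_pre = rng.normal(95, 10, n)
treat_post = treat_pre + rng.normal(-2.0, 3, n)  # treatment group trends down
ctrl_pre = rng.normal(95, 10, n)
ctrl_post = ctrl_pre + rng.normal(-1.0, 3, n)    # control group also drifts down

# Misleading approach: separate within-group paired tests vs. baseline.
_, p_treat = stats.ttest_rel(treat_post, treat_pre)
_, p_ctrl = stats.ttest_rel(ctrl_post, ctrl_pre)

# Appropriate approach: compare change scores between groups
# (the "difference of differences").
treat_change = treat_post - treat_pre
ctrl_change = ctrl_post - ctrl_pre
_, p_between = stats.ttest_ind(treat_change, ctrl_change)

print(f"within-group p (treatment): {p_treat:.3f}")
print(f"within-group p (control):   {p_ctrl:.3f}")
print(f"between-group p (change):   {p_between:.3f}")
```

Even when both within-group tests reach nominal significance, the between-group test on the change scores can fail to do so; only the latter supports an inference about the intervention's effect relative to control.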
To avoid these types of errors, the authors suggest that researchers receive more rigorous statistical training from course work during graduate or postdoctoral training or from workshops. Researchers should utilize published guidelines such as CONSORT or PRISMA, and closely collaborate with statisticians with specialized expertise by including them in both the research and reporting stages of the scientific process. Scientific journals can also help by engaging greater statistical support to help evaluate submitted papers for the 10 common errors identified by the authors.
“Our effort to identify common statistical errors can help researchers avoid making these mistakes in the future,” said lead researcher Dr. George. “An emphasis on statistical issues from conception to publication of obesity research will improve the quality and rigor of the science, ideally translating to greater success in maintaining weight loss for people with obesity.”
In an accompanying commentary, John P.A. Ioannidis, MD, DSc, Stanford University School of Medicine and School of Humanities and Sciences, focuses on how to improve the credibility of obesity research in light of these errors and biases. Dr. Ioannidis proposes a move away from the current practice of “salami-slicing” datasets that result in hundreds of papers, each on one small sliver of data. He calls on researchers to identify biases and errors early in the process in order to take corrective action in the design phase of a study.
“Beyond identifying problems, it is important to correct them,” said Dr. Ioannidis. “An emphasis on randomized, controlled trials to test obesity interventions is one of several important steps to improve research credibility in this important field that directly impacts people’s choices about nutrition and weight-loss treatment.”
Further highlighting the importance of control groups in obesity studies: in a separate paper, Paul Aveyard, PhD, and his team at the University of Oxford reviewed randomized trials of obesity and found that participants in inactive control groups lost weight after 12 months. This finding belies the current assumption that people not participating in an active weight-loss intervention would gain weight. Since many weight-loss intervention studies currently omit a control arm due to expense and burden, this finding gives researchers important information to consider when designing future studies.
The final paper in the trio offers a perspective on how non-specific factors (what the authors call placebo-related factors) might influence weight loss in obesity intervention clinical trials. Kevin R. Fontaine, PhD, and his team, also at the University of Alabama at Birmingham, suggest that when different weight-loss interventions are compared, factors such as attention and information/expectations conveyed to the participants may independently produce weight-loss effects that are mistakenly attributed to the intervention itself. For example, a provider’s warm and caring attitude, in conjunction with communicating expectations for success, can enhance the effects of existing weight-loss programs. This matters for clinical trials, where the goal is to estimate the direct effects of a particular weight-loss treatment, and for health care practitioners, whose goal is to maximize the effects of treatment by promoting and exploiting any factors that might contribute to better results.
“As we work to help people with obesity, it’s important for researchers and scientists to hold to the highest standards for analyzing and reporting conclusions that arise from experimental data,” said Dr. Thomas. “Collectively, the findings in the new issue of Obesity will help raise awareness for researchers to account and correct for biases and potential errors in their work with the end goal of improving outcomes for people with obesity.”
Read the special statistical series in Obesity here.
# # #
About The Obesity Society
The Obesity Society (TOS) is the leading professional society dedicated to better understanding, preventing and treating obesity. Through research, education and advocacy, TOS is committed to improving the lives of those affected by the disease. For more information visit: www.Obesity.org. Connect with us on social media: Facebook, Twitter and LinkedIn. Find TOS disclosures here.