The Long Road to Retraction in Childhood Obesity
The scientific literature on childhood obesity is a bit dodgy in places. Lots of well-meaning people do studies to prove a point rather than discover hard truths. Obesity evokes emotion and stigma, especially for children. Because stigma is in play, scientific rigor can take second place. So scientific journals can wind up publishing papers about childhood obesity that reflect more wishful thinking than scientific evidence. The antidote is simple. Vigilance. When scientists find an error, a correction or retraction fixes the problem. But a recent case in point tells us once again that the road to a retraction in childhood obesity can be tortuous.
Yet Another Cluster Fumble
By now, we hope that reviewers and editors are alert to the problems that cluster randomized studies can encounter. Cluster randomized studies are important in childhood obesity. In prevention studies, randomizing clusters of kids (whole schools or classrooms, for example) is often the only way to go. However, it’s easy to get the analysis wrong, because kids in the same cluster tend to resemble one another. Analyzing them as if they were independent overstates the effective sample size and makes results look more certain than they are.
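To make the pitfall concrete, here is a minimal sketch of it in Python. The numbers are invented for illustration; this is not the retracted study’s data or code. Treatment is assigned to whole clusters, a naive regression then treats every child as independent, and a model with a random intercept for each cluster shows how much that understates the uncertainty.

```python
# A minimal sketch of the clustering pitfall, with made-up numbers.
# Treatment is assigned to whole clusters (say, classrooms), but the naive
# analysis treats each child as an independent observation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(20):                      # 20 clusters, 30 kids each
    treated = c % 2                      # half the clusters get the intervention
    cluster_effect = rng.normal(0, 1.0)  # shared effect for everyone in the cluster
    for _ in range(30):
        y = 0.2 * treated + cluster_effect + rng.normal(0, 1.0)
        rows.append({"cluster": c, "treated": treated, "y": y})
df = pd.DataFrame(rows)

# Naive analysis: pretends there are 600 independent observations
naive = smf.ols("y ~ treated", data=df).fit()

# Cluster-aware analysis: random intercept for each cluster
mixed = smf.mixedlm("y ~ treated", data=df, groups=df["cluster"]).fit()

print(f"naive SE: {naive.bse['treated']:.3f}")  # too small; p-values look better than they should
print(f"mixed SE: {mixed.bse['treated']:.3f}")  # reflects that there are only 20 clusters
```

The treatment effect estimates come out similar either way. What changes is the standard error, which is exactly why ignoring clustering tends to produce spuriously significant findings.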
Despite all the attention to pitfalls in cluster randomization, this is precisely the problem behind a retraction just announced by Childhood Obesity. The authors did not account for clustering in their analysis. So Lilian Golzarri-Arroyo and colleagues raised this and other issues with the analysis in a commentary the journal published.
That led to “voluminous correspondence.” And now, more than three years after the paper first appeared, the journal and the authors agreed to a retraction. Said the authors:
We are concerned that the outcome for change from baseline to end could be coded incorrectly for how it is interpreted. And we are therefore unable to perform the appropriate re-analysis on the data adjusted for cluster when we cannot first match the sample sizes.
A Triumph or Defeat?
This long and winding road to a retraction in Childhood Obesity might feel like a triumph or a defeat. To be sure, a 2018 overview of retractions in Science played it both ways. The authors referred to retraction as a “death penalty.” But they also argued that more retractions are an indication of better editorial oversight.
We prefer the latter view. Certainly scientific fraud is a shameful thing. But owning up to honest errors and correcting the record is something to celebrate. It is unequivocally good.
Objectivity About Mom, Apple Pie, Fruits, and Veggies
The tortured path to retraction of a childhood obesity paper points us to one simple fact. We need more passion for objectivity. These authors were trying to prove that they could coax kids to eat more fruits and veggies. Move more. And maybe then, prevent some obesity. It sounds good and feels good.
But what if eating more fruits and veggies won’t actually reduce obesity prevalence in the population?
The truth is that obesity is wickedly complex. The reasons kids eat what they do are complex. Eating more fruits and veggies might be a good thing that does absolutely nothing for the problem of obesity prevalence. Four decades of growing obesity – despite many well-intended efforts – suggest it’s time to question our assumptions. It’s time to be objective about all that stuff that sounds good but does nothing to treat or prevent obesity.
And even more so, it’s time to get curious and find real evidence for strategies that actually have an effect. That’s a road worth taking.
Click here for the original (and now retracted) study, here for the commentary raising concerns, here for the response, and here for the retraction. For more on the need for rigor in childhood obesity research, click here.
A Long and Winding Road in Tuscany, photograph © Jacob Surland Fine Art Photographer / flickr
September 23, 2020
September 24, 2020 at 9:15 pm, Katherine Flegal said:
I learned basic statistics at Cornell in the ag school, so I think of this as the “pigs in a pen” problem, where you randomize a dietary treatment to pens (all the pigs in a pen get fed the same diet), but then analyze the data by individual pigs, not by pen. Back then, we were instructed by the professor to find an example of this error in the literature, which was startlingly easy to do. This was literally decades ago. It’s impressive that this simple error is so persistent and so unrecognized by researchers, editors and reviewers that it requires a letter to the editor to point it out.