Biostatisticians Are Frustrating When Data Are Weak

Biostatisticians can be very helpful when they save someone from an embarrassing mistake. They can be frustrating when a person has already made the mistake. The frustration scenario comes through loud and clear in a case that Retraction Watch investigated this week.

Claiming Efficacy Without Proving It

Back in February, ConscienHealth reported on the mysterious resurrection of a study retracted from Obesity. The retraction resulted from a false claim of effectiveness for a cooking, gardening, and nutrition program aimed at Latino youth. The program is called LA Sprouts. The same authors revised the paper a bit and republished it in Pediatric Obesity without disclosing the retraction. Once again, they claimed that “LA Sprouts was effective in reducing obesity.”

Retraction Watch dug a little deeper. They asked statistician Patrick McKnight to take a look at the revised paper. He said:

The authors did not change the analyses sufficiently to correct for the nested design. Moreover, I was quite surprised to see the similarities between the two papers. After a retraction, I assumed the authors would heed the advice of others and fix the problem. Instead, they appear to have simply dismissed the problems and moved on to another journal. I found this behavior quite puzzling. Never heard of a retracted paper finding a new home. Usually, the authors admit the error(s) and move on. Strange. Very strange.
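
To see why the nested design matters so much, here is a minimal simulation sketch in Python. It is not the authors' analysis, and every number in it is purely illustrative: a few schools per arm, students nested within schools, a modest intraclass correlation, and no true treatment effect at all. Treating the students as if they were independently randomized inflates the false-positive rate well past the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_schools, n_students, icc = 4, 25, 0.05   # illustrative assumptions only
sd_between, sd_within = np.sqrt(icc), np.sqrt(1 - icc)

def simulate_arm():
    # Students in the same school share a school-level shift:
    # that shared shift is the "nesting" in the design.
    shifts = rng.normal(0, sd_between, n_schools)
    return rng.normal(0, sd_within, (n_schools, n_students)) + shifts[:, None]

n_sims = 2000
naive_hits = cluster_hits = 0
for _ in range(n_sims):
    # No true effect exists, so any "significant" result is a false positive.
    treat, control = simulate_arm(), simulate_arm()
    # Naive: pretend every student was independently randomized.
    if stats.ttest_ind(treat.ravel(), control.ravel()).pvalue < 0.05:
        naive_hits += 1
    # Cluster-aware: compare school means, the actual unit of randomization.
    if stats.ttest_ind(treat.mean(axis=1), control.mean(axis=1)).pvalue < 0.05:
        cluster_hits += 1

print(f"false-positive rate, naive analysis:   {naive_hits / n_sims:.2f}")
print(f"false-positive rate, cluster analysis: {cluster_hits / n_sims:.2f}")
```

With these illustrative numbers, the naive analysis declares a "significant" difference roughly three times as often as the nominal 5 percent, while the school-level comparison stays near 5 percent. That is the error McKnight says was never fixed.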

Bottom line: the study does not prove that LA Sprouts works to reduce obesity.

Peer Review or Censorship and Bias?

Retraction Watch reached out to the authors, who sound peeved about the whole experience:

What is the purpose of peer-review if a journal editor can selectively censor scientific studies at his/her discretion? This is a form of publication bias which prevents the entire story from being told, and certainly does not serve to advance science.

They describe their work as a pilot study that was too small to analyze the data as randomized clusters.

And that’s the core of the problem. A pilot study is indeed a small-scale study that serves only to evaluate the methods and feasibility of an experiment, not its effectiveness. Pilot studies are neither powered nor intended to prove anything about effectiveness.
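
For a sense of scale, here is a back-of-envelope sketch of that point. Every number below is an assumption chosen for illustration, not a figure from LA Sprouts: clustering shrinks the effective sample size by the design effect, and a small pilot ends up with very little power to detect a modest effect.

```python
import math

# Illustrative assumptions only -- not figures from the LA Sprouts study.
n_per_arm   = 100   # students per arm in a hypothetical pilot
cluster_m   = 25    # students per school (the cluster size)
icc         = 0.05  # intraclass correlation among schoolmates
effect_size = 0.2   # a small standardized effect (Cohen's d)

# Clustering inflates the variance by the design effect,
# shrinking the effective sample size accordingly.
deff  = 1 + (cluster_m - 1) * icc
n_eff = n_per_arm / deff

# Approximate power of a two-sample z-test at alpha = 0.05 (two-sided).
se    = math.sqrt(2 / n_eff)
z     = effect_size / se - 1.96
power = 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"design effect:        {deff:.2f}")
print(f"effective n per arm:  {n_eff:.0f}")
print(f"approximate power:    {power:.0%}")  # far below the usual 80%
```

Under these assumptions the pilot has roughly 16 percent power, nowhere near the conventional 80 percent. That is exactly why a pilot can guide the design of a bigger trial but cannot be read as evidence of effectiveness.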

Sounds Good Isn’t Good Enough

This little drama matters. It illustrates the tendency to settle for things that sound good for preventing and treating obesity.

Teaching Latino kids about nutrition, gardening, and cooking is a good thing to do. Even if it doesn’t reduce obesity, plenty of other reasons can justify it. But false claims in a scientific journal that it works to reduce obesity can cause real harm. They lead policymakers to rely on unproven strategies.

For decades now, public policy has relied on strategies that sound good for stopping the growth of obesity rates. Yet those alarming rates keep growing, relentlessly.

Wishful thinking will not reduce obesity’s impact on health. We need honest investigations into what works, what doesn’t, and why. “Sounds good” isn’t good enough for this complex, chronic disease.

When a smart biostatistician speaks up, listen up.

Click here for more from Retraction Watch and here for more from ConscienHealth.

Frustrated, photograph © Jessica Lucia / flickr

April 2, 2017

One Response to “Biostatisticians Are Frustrating When Data Are Weak”

  1. April 03, 2017 at 10:21 am, Allen Browne said:

    Ted,

    Thank you for keeping us on our toes!

    Allen