Caught It!

It Works! But Don’t Look Too Close at the Data

Are we numb to hype and little lies yet? Sad to say, they’re not confined to tabloid news or politics. Despite data that doesn’t support effectiveness claims, we see such claims for obesity treatment and prevention published in scientific journals. This week, researchers at Johns Hopkins and UC-Davis provide two distinct examples.

A Virtual Health Coach in an App

In the Journal of Medical Internet Research, Estelle Everett and colleagues report that they’ve found “a safe and effective method of increasing PA and reducing weight and HbA1c in adults with prediabetes.” It’s a nifty smartphone health coaching app. But there’s just one little problem. The study design provides no proof of effectiveness. For that, they would have needed a control group.

Instead, all they did was conduct an uncontrolled, observational study of people with a high BMI and prediabetes. No control group. No proof of effectiveness. The flaw is really quite basic. A control group is always important for proving that something works. And it’s especially important when studying an effect on excess weight. That’s because of something called regression to the mean.

If the subjects in your study start with an abnormally high value – in this case weight and HbA1c – the odds favor them returning to a more normal value when the study ends. Even if you do nothing.

So you need a control group.
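A toy simulation makes the point. Nothing below comes from the Everett study; the numbers are invented. We select people because their baseline measurement is abnormally high, then re-measure them with no intervention at all. The follow-up average drops anyway, purely because measurement noise regresses toward the mean:

```python
import random

random.seed(42)

# Hypothetical population: true weights plus noisy scale readings
true_weights = [random.gauss(90, 10) for _ in range(10000)]

def measure(weight):
    """One noisy measurement of a person's true weight."""
    return weight + random.gauss(0, 5)

# Enroll only people whose baseline reading is abnormally high (> 100)
baseline = [(w, measure(w)) for w in true_weights]
enrolled = [(w, m) for w, m in baseline if m > 100]

# Re-measure the enrolled group later -- no treatment given
followup = [measure(w) for w, m in enrolled]

mean_baseline = sum(m for w, m in enrolled) / len(enrolled)
mean_followup = sum(followup) / len(followup)

# The follow-up mean is lower even though we did nothing
print(f"baseline: {mean_baseline:.1f}, follow-up: {mean_followup:.1f}")
```

An uncontrolled before-and-after study would report that drop as a treatment effect. A control group, selected the same way, would show the same drop and expose it as an artifact.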

A Significant Decrease in BMI – or Not

Our second case today comes from the Journal of Nutrition Education and Behavior. Rachel Scherr and colleagues are standing firm. They claimed last year that their nutrition education program is effective:

The SHCP [Shaping Healthy Choices Program] resulted in improvements in nutrition knowledge, vegetable identification, and a significant decrease in BMI percentiles.

But an independent analysis published last week says that claim is not true. A team of seven experts from five institutions wrote to the journal, saying:

Although news of a beneficial program in the domain of childhood obesity would be most welcome, unfortunately this conclusion is derived from an analysis inappropriate for a cluster randomized trial (CRT) and thus cannot substantiate conclusions about the effects of the intervention. We therefore request that the scientific record be corrected with a retraction of, or an erratum to, this article.
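The experts’ objection is a standard one. In a cluster randomized trial, randomization happens at the group level (schools or classrooms), so children in the same cluster are not independent observations. Analyzing them as if they were shrinks the standard errors and can manufacture significance. The usual correction is the design effect. Here’s a back-of-the-envelope calculation with hypothetical numbers, not figures from the Scherr study:

```python
# Design effect for a cluster randomized trial:
#   DEFF = 1 + (m - 1) * ICC
# where m is the cluster size and ICC is the intraclass correlation
# among children in the same cluster. All numbers below are assumed.
m = 25       # children per classroom (assumed)
icc = 0.05   # modest within-classroom correlation (assumed)
deff = 1 + (m - 1) * icc

n_nominal = 500                   # total children measured
n_effective = n_nominal / deff    # information actually available

print(f"design effect: {deff}, effective sample size: {n_effective:.0f}")
```

With these assumed values the design effect is 2.2, so 500 children carry the statistical information of roughly 227 independent ones. An analysis that ignores clustering behaves as if it had more than twice the evidence it really does.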

Scherr et al seem to brush off the concerns of those experts. “We respectfully submit that they may not be fully familiar with the challenges of designing and implementing community nutrition education interventions in kindergarten through sixth grade,” they replied.

Though they didn’t concede their error or retract their claims, Scherr et al did confess one thing. They’d like “large-scale funding” for bigger studies of the same thing.

Beware Big Claims from Pilot Studies

These two studies are very different, but they have one thing in common. They are both small pilot studies. They’re designed to answer feasibility questions, not effectiveness questions. When people start making big claims from small pilot studies, odds are good that they’re stretching the data thin.

And that’s exactly what we’re seeing in these two publications.

Click here for the study by Everett et al. For the study by Scherr et al, click here. The independent analysis of that study is here, and the authors’ response is here.

Caught It! Photograph © Deven Dadbhawala / flickr



March 12, 2018
