Creepy, Crawly, Rustling, Bustling

Ten Ways Bias Creeps into Randomized Studies

Improving health through nutrition is important. Relieving the considerable suffering that obesity causes is likewise important. But both of these tasks are difficult, because clear evidence for cause and effect is hard to find in nutrition and obesity research. Randomized studies are hard to do. Observational studies are more common, but they are subject to confounding by the many factors that correlate with nutrition and obesity. So randomized studies – designed, implemented, and reported carefully – can be the best way to reduce bias in this research.

Nonetheless, bias can find its way into randomized studies of nutrition and obesity. In the International Journal of Obesity yesterday, Colby Vorland and colleagues published an impressive inventory of what can go wrong.

1. Just Call It Random

Sometimes studies appear in the literature that their authors describe as RCTs, even though the assignment of subjects to treatment or control groups was not random. Vorland and colleagues point to investigators who did exactly that.

2. Don’t Conceal the Allocation from Investigators

If investigators know in advance what the allocation will be, human bias can creep in. For example, a subject might find out that they will go into the placebo group and decide not to participate. This was one of the problems that tripped up the PREDIMED study and led to its retraction.

3. Don’t Deal with Changes in Allocation Ratios

When setting up an RCT, an essential step is deciding the ratio for assigning subjects to each group. You might make it one to one between treatment and control groups. Or it might be necessary to assign twice as many people to the active treatment. That ratio has a big effect on the statistics. But circumstances can change and require a change in the allocation ratio mid-study. That’s OK, but only if the statistical plan changes with it.
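
For a sense of how much the ratio matters, here’s a minimal sketch in Python using statsmodels. The effect size, sample sizes, and ratios are made-up numbers, chosen only to show that power shifts when the allocation ratio does.

```python
# Sketch: how the allocation ratio changes statistical power,
# holding the total sample size roughly constant.
# Effect size and sample sizes are hypothetical, for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.4   # assumed standardized difference (Cohen's d)
alpha = 0.05

# 1:1 allocation -- 100 per group (200 subjects total)
p_equal = analysis.power(effect_size=effect_size, nobs1=100, alpha=alpha, ratio=1.0)

# 2:1 allocation -- 67 in one group, ~134 in the other (about 200 total)
p_unequal = analysis.power(effect_size=effect_size, nobs1=67, alpha=alpha, ratio=2.0)

print(f"Power with 1:1 allocation: {p_equal:.2f}")
print(f"Power with 2:1 allocation: {p_unequal:.2f}")
```

The point is simply that the allocation ratio feeds directly into the power calculation, so a mid-trial change has to be carried through to the statistical plan.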

4. Make Non-Random Replacements

Keeping patients in an RCT of nutrition or obesity treatment is a challenge. These studies involve rearranging some aspect of a person’s life – sometimes for a long time. So people drop out. Intention-to-treat analysis can take care of this. But sometimes it makes more sense to replace subjects. When that’s the case, the replacements must get a random assignment – even if most of the dropouts are happening in only one group. Otherwise, the results will be untrustworthy.

5. Ignore the Clusters

This is a big issue for studies that involve group interventions. In nutrition and obesity prevention studies, assignment to treatment and control might happen in groups – like a whole classroom. That makes it a cluster-randomized study, which requires special statistical methods. But as we’ve noted before, it’s easy to screw this up.
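
To make “special statistical methods” a bit more concrete, here’s a minimal sketch with simulated data and hypothetical column names. It compares a naive analysis that treats every student as independent with a mixed model that adds a random effect for classroom, one common way to respect the clustering.

```python
# Sketch: analyzing a cluster-randomized trial (randomized by classroom).
# Data are simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classrooms, students_per_class = 20, 25

rows = []
for c in range(n_classrooms):
    treated = c % 2                      # half the classrooms get the program
    class_effect = rng.normal(0, 1.0)    # shared classroom-level variation
    for _ in range(students_per_class):
        bmi_change = -0.3 * treated + class_effect + rng.normal(0, 2.0)
        rows.append({"classroom": c, "treated": treated, "bmi_change": bmi_change})
df = pd.DataFrame(rows)

# Wrong: treats all 500 students as independent, so standard errors are too small
naive = smf.ols("bmi_change ~ treated", data=df).fit()

# Better: a random intercept for classroom accounts for within-cluster correlation
clustered = smf.mixedlm("bmi_change ~ treated", data=df, groups=df["classroom"]).fit()

print("Naive p-value:    ", round(naive.pvalues["treated"], 3))
print("Clustered p-value:", round(clustered.pvalues["treated"], 3))
```

The naive model acts as if it had 500 independent data points. The mixed model acknowledges that students in the same classroom resemble each other, which generally makes its standard errors larger and more honest.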

6. Make Comparisons Within Groups

This is very tempting. The treatment group improves significantly from its own baseline and the control group does not. But the difference between the two groups is not statistically significant. Unfortunately, those results do not add up to evidence of effectiveness. The within-group improvement tempts investigators to suggest they’ve found evidence for an effect – even though they did not. This is called a DINS error (differences in nominal significance) and it happens with surprising frequency.
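
Here’s a small simulation of that setup, with made-up numbers, just to put the two kinds of tests side by side. Depending on the draw, the treatment group’s change from baseline can look “significant” while the control group’s does not, yet only the between-group comparison speaks to effectiveness.

```python
# Sketch: the "differences in nominal significance" (DINS) trap, with simulated data.
# A modest true effect, small groups, and lots of noise -- all numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 30
change_treat = rng.normal(-1.0, 3.0, n)     # weight change in the treatment group (kg)
change_control = rng.normal(-0.3, 3.0, n)   # weight change in the control group (kg)

# Within-group tests: "did each group change from its own baseline?"
p_within_treat = stats.ttest_1samp(change_treat, 0.0).pvalue
p_within_control = stats.ttest_1samp(change_control, 0.0).pvalue

# Between-group test: the comparison that can actually support a causal claim
p_between = stats.ttest_ind(change_treat, change_control).pvalue

print(f"Within treatment group: p = {p_within_treat:.3f}")
print(f"Within control group:   p = {p_within_control:.3f}")
print(f"Between groups:         p = {p_between:.3f}  <- the test that matters")
```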

7. Pool Data Improperly

Sometimes researchers need to pool data – either within a single trial or from multiple trials. But without careful statistical adjustments, pooling can introduce bias and lead to unwarranted claims. For example, a group studying school programs to prevent summer weight gain pooled data from several RCTs but did not account for the separate trials in their analysis. Thus, their claim that these programs worked was unreliable.
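
As a rough illustration (not the analysis from that example), here’s a sketch with two simulated trials that differ at baseline and in the share of subjects who got the treatment. Stacking them naively mixes those differences into the treatment estimate; keeping a term for the trial in the model is one simple safeguard.

```python
# Sketch: pooling two hypothetical trials that differ at baseline and in
# the proportion of subjects assigned to treatment. All numbers are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
frames = []
for trial, baseline, p_treat in [("A", 0.0, 0.25), ("B", 1.5, 0.75)]:
    n = 80
    treated = rng.binomial(1, p_treat, n)
    outcome = baseline - 0.4 * treated + rng.normal(0, 2.0, n)
    frames.append(pd.DataFrame({"trial": trial, "treated": treated, "outcome": outcome}))
pooled = pd.concat(frames, ignore_index=True)

# Naive pooling: pretends the two trials are one homogeneous sample
naive = smf.ols("outcome ~ treated", data=pooled).fit()

# Adjusted pooling: a term for trial keeps the comparison within each trial
adjusted = smf.ols("outcome ~ treated + C(trial)", data=pooled).fit()

print("Naive treatment estimate:   ", round(naive.params["treated"], 2))
print("Adjusted treatment estimate:", round(adjusted.params["treated"], 2))
```

A real pooled analysis would choose the adjustment more carefully, for example by treating trial as a random effect, but ignoring the trial structure altogether is what invites the bias described here.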

8. Overlook Missing Data

Missing data is inevitable – especially in nutrition and obesity research. The reasons are many, but the implication is clear. Statistical analysis must account for missing values. Nevertheless, one analysis found that only half of a sample of studies actually did account for them properly.
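
As one illustration of what accounting for missing values can look like (a sketch, not the method from that analysis), here’s a comparison of a complete-case analysis with multiple imputation via statsmodels’ MICE, using simulated data in which heavier subjects are more likely to skip the final weigh-in.

```python
# Sketch: outcome data that go missing more often for heavier subjects.
# Compares complete-case analysis with multiple imputation (MICE).
# All data and column names are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(7)
n = 400
treated = rng.integers(0, 2, n)
baseline_bmi = rng.normal(32, 4, n)
weight_change = -2.0 * treated + 0.1 * (baseline_bmi - 32) + rng.normal(0, 3, n)

# Heavier subjects are more likely to miss the final weigh-in
p_missing = 0.5 / (1 + np.exp(-(baseline_bmi - 32) / 2))
observed = rng.random(n) > p_missing

df = pd.DataFrame({"treated": treated,
                   "baseline_bmi": baseline_bmi,
                   "weight_change": np.where(observed, weight_change, np.nan)})

# Complete-case analysis: silently drops everyone with a missing outcome
cc = sm.OLS.from_formula("weight_change ~ treated + baseline_bmi",
                         data=df.dropna()).fit()
print("Complete-case treatment estimate:", round(cc.params["treated"], 2))

# Multiple imputation: fill in plausible outcomes many times, then pool the results
imp = MICEData(df)
mi = MICE("weight_change ~ treated + baseline_bmi", sm.OLS, imp).fit(10, 10)
print(mi.summary())
```

Neither approach is automatically right. The point is that an analysis has to do something deliberate about the missing values and report what it did.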

9. Don’t Fully Describe the Randomization

Transparency is essential. There are so many ways to make a mess of randomization that researchers must explain exactly how they protected its integrity. Without those details, it’s impossible to know whether the results are reliable. Unfortunately, incomplete reporting happens a lot, as Vorland et al show us with a dozen examples. Sometimes it turns out that a “randomized” study actually had no randomization in its methodology. Sigh.

10. Study One Thing, Draw Conclusions About Another

Details matter when we say that an intervention causes something to happen. Many interventions in nutrition and obesity involve multiple steps. Researchers coach subjects to do something – for example, to cut 600 calories from their diet. That’s the intervention. The result will likely be that subjects cut calories by some lesser amount – perhaps half of what was suggested, or 300 calories. And then they see a health benefit. The health benefit comes from trying to cut 600 calories, not from an actual cut of 300.

But the reporting can easily become skewed. The goal of cutting 600 calories disappears, and the headlines tell us that all we must do is cut 300 calories. So it’s important to distinguish an intervention from the result of that intervention and any benefit that follows.

RCTs: Powerful and Vulnerable

The bottom line here is that randomized controlled trials are powerful tools, but they are vulnerable to bias – if we are not sufficiently vigilant. Vorland et al give us a good accounting of the rigor we need.

Click here for their paper in IJO.

Creepy, Crawly, Rustling, Bustling; artwork by Theodor Severin Kittelsen / WikiArt



 

July 30, 2021