Forgetting to Randomize a Randomized Study
Sometimes things are not what they seem. That’s a problem when something slips into the scientific literature that’s not exactly true. We offer a prime example today: two papers in which an RCT – a randomized controlled trial – is not properly randomized. Apparently, the investigators, reviewers, and editors for these papers weren’t too fussy about the definition of random.
Lumping Together Mismatched Randomizations
The first example comes from Breast Cancer Research and Treatment. Tara Sanft and colleagues published an RCT of weight loss versus usual care in women with breast cancer. And they found that weight loss might be helpful for these patients. But unfortunately, they didn’t do a valid randomization.
Instead, they lumped together two different randomized data sets without accounting for the differences. First, they collected data from patients in a study where the randomization assigned patients to three different groups. A third of the patients received in-person counseling for weight loss. Another third received telephone counseling. Then the final third received usual care – no weight loss counseling. In other words, this part of the data came from a 1:1:1 randomization, and two-thirds of the patients received a weight loss intervention.
Later, the investigators collected additional data, with people randomly assigned to either in-person counseling or usual care. That’s a 1:1 randomization.
The problem comes when they combine these two different randomization schemes. They lump together the two weight-loss interventions from the first data set. In that way, the 1:1:1 randomization becomes a 2:1 randomization. And then they combine the 2:1 randomized data with the 1:1 randomized data.
That’s a problem. It’s not a fatal error, but it requires statistical adjustments that were not part of the published analysis. In a letter to the editor, Stephanie Dickinson and colleagues explain:
While the overall conclusions in the Sanft et al. paper may remain unchanged, this is an important methodological issue for researchers to avoid bias in analyses combining data across multiple strata, sites or phases of data collection in RCTs. We encourage Sanft and colleagues to consider re-analyzing their data taking these factors into account and publish corrected results, and we offer to assist in the reanalysis if needed.
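To see why pooling without adjustment can bias the result, here’s a small simulation. This is not the authors’ data – every number is made up – but it shows the mechanism: if the two enrollment phases have different allocation ratios and also differ in outcomes for any reason, a naive pooled comparison makes a do-nothing intervention look effective, while a phase-stratified analysis does not.

```python
import random
import statistics

random.seed(0)

def simulate_phase(n, p_treat, phase_effect):
    """Simulate one enrollment phase. The intervention has zero true
    effect; outcomes depend only on the phase (a hypothetical
    difference, e.g. healthier patients enrolled in one phase)."""
    rows = []
    for _ in range(n):
        treated = random.random() < p_treat
        outcome = phase_effect + random.gauss(0, 1)
        rows.append((treated, outcome))
    return rows

# Phase 1 pools two intervention arms against control: 2:1 allocation.
# Phase 2 is a plain 1:1 allocation. Phase 1 outcomes run higher.
phase1 = simulate_phase(3000, 2 / 3, phase_effect=1.0)
phase2 = simulate_phase(3000, 1 / 2, phase_effect=0.0)

def diff_means(rows):
    treated = [y for t, y in rows if t]
    control = [y for t, y in rows if not t]
    return statistics.mean(treated) - statistics.mean(control)

# Naive pooled estimate: phase 1 contributes more treated patients AND
# higher outcomes, so the treatment looks beneficial even though it
# does nothing at all.
naive = diff_means(phase1 + phase2)

# Stratified estimate: take the difference within each phase, then
# average. This respects the allocation ratios and recovers the true
# effect, which is zero.
stratified = (diff_means(phase1) + diff_means(phase2)) / 2

print(f"naive pooled estimate: {naive:+.3f}")
print(f"stratified estimate:   {stratified:+.3f}")
```

Averaging the two within-phase differences is the simplest possible stratified estimator; a real reanalysis would more likely use a regression model with a phase term, but the point is the same.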
Every Other Person Gets a Placebo
If that example was a bit too geeky for you, here’s one that’s simpler. It’s a study where the randomization isn’t random. Naoki Ito and colleagues studied a dietary supplement for treating mild cognitive impairment. In the title of their study in the Journal of Alzheimer’s Disease, they call it a double-blind, randomized controlled trial.
But in fact, it’s not randomized at all. Every other patient gets a placebo. So the researchers caring for these patients can easily know who’s in one group and who’s in the other. Thus, it’s neither randomized nor effectively blinded.
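A quick sketch makes the difference concrete. The arm names below are illustrative, not taken from the Ito paper, and the shuffled scheme is a deliberately simple stand-in for real randomization procedures:

```python
import random

def alternating_assignment(n):
    """'Every other patient gets a placebo': deterministic, so anyone
    who knows the scheme can predict every assignment in advance."""
    return ["placebo" if i % 2 == 0 else "supplement" for i in range(n)]

def randomized_assignment(n, rng):
    """A simple 1:1 randomization: shuffle a balanced list of arms so
    the sequence is unpredictable. (Real trials typically use blocked
    randomization, but the principle is the same.)"""
    arms = ["placebo"] * (n // 2) + ["supplement"] * (n // 2)
    rng.shuffle(arms)
    return arms

alternating = alternating_assignment(10)
randomized = randomized_assignment(10, random.Random(42))

# With alternation, one known assignment reveals all the others:
predictable = all(a != b for a, b in zip(alternating, alternating[1:]))
print("alternating is fully predictable:", predictable)
print("shuffled sequence:", randomized)
```

That predictability is exactly what breaks the blinding: a clinician who learns one patient’s assignment under alternation knows everyone’s.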
In a letter to the editor, Bridget Hannon, Michael Oakes, and David Allison explain the significance of the problem and how the results of this non-randomized study might be more appropriately analyzed and reported.
Keeping Scientific Literature Tidy
Of course, errors happen – even in reporting scientific research. Seldom is it fraud or an intent to deceive. Just simple human errors. But vigilance and corrections are important.
In the end, the bottom line for randomized studies is really simple. Randomized means something important. It means that assignment to the test and control groups is truly and completely random. It’s either true or false. Random or not. Close doesn’t count.
Click here for the Sanft study and here for the LTE from Dickinson et al. For the Ito paper, click here, and then here for the LTE from Hannon, Oakes, and Allison.
Random, photograph © Vladimer Shioshvili / flickr
February 10, 2019
February 10, 2019 at 8:21 am, Al Lewis said:
Here is one where Aetna randomized the trial properly, but buried the randomized results. Instead, they reported on participants versus non-participants within the study group. They found huge savings.
But comparing the entire study group (including non-participants) to the control group yielded no difference. http://thehealthcareblog.com/blog/2015/12/16/genetic-testing-the-new-frontier-of-wellness-madness