Child’s Play: Mixing Values with Data

How can insignificant results be clinically significant? It happens. Especially when researchers believe that their program must have a big effect. Consider this conclusion from a recent study of a program to promote more play:

Although the differences between intervention and control were not statistically significant, the effect size indicates clinical significance.

Promoting Physical Activity Through Child’s Play

The Journal of School Health is publishing a paper with that odd statement in its October issue. The paper is a new look at an old study, first published in 2013. Together, the two papers describe results of the Sydney Playground Project.

The project aims to increase physical activity, social skills, and resilience in children. The method is simple: unstructured playground modifications. Researchers placed odd bits of junk – car tires, crash mats, crates – on playgrounds. These items have no obvious play purpose. The point is to spark more imaginative play. Researchers also conducted workshops with parents and teachers, focused on re-thinking ideas about playground risks. Worries about such risks tend to limit opportunities for play.

Small Gains in Physical Activity

The 2013 paper reported small gains (~12%) in a simple measure of physical activity: minutes spent in moderate or vigorous activity.

But these researchers were not satisfied with a small gain. They thought this measure made the benefits of their intervention look smaller than they really were. As they said in their 2013 paper:

Using accelerometry as the sole measure of physical activity may underestimate the effect.

So now we have a 2017 publication of additional data. The researchers analyzed video recordings to estimate time spent in play and non-play activities. Unfortunately, the data would not cooperate. The difference between the intervention and control groups – in terms of play time – was not statistically significant.

Commenting on this analysis, Professor David Allison tells us:

It is clear that the data did not provide support for their hypothesis at the conventional (0.05) statistical significance level. What is not certain – and can never be certain by the rules of the game in statistical hypothesis testing – is whether their hypothesis is true. In other words they do not have evidence to convincingly support their hypothesis. But as the old saying goes, “absence of evidence is not evidence for absence of an effect.”
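
For a sense of why an effect size from an insignificant result settles nothing, here is a minimal sketch in Python. The numbers are simulated, not the Sydney Playground Project's data, and the group sizes and minutes are purely hypothetical. It shows how a moderate-looking effect size can sit inside a confidence interval that spans zero:

```python
# A minimal sketch with simulated data (not the study's numbers) showing how
# a moderate-looking effect size can coexist with a statistically
# insignificant result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical minutes of play for small intervention and control groups.
intervention = rng.normal(loc=32, scale=12, size=14)
control = rng.normal(loc=26, scale=12, size=14)

diff = intervention.mean() - control.mean()
pooled_sd = np.sqrt((intervention.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd  # the "effect size" such claims rest on

t_stat, p_value = stats.ttest_ind(intervention, control)

# 95% confidence interval for the mean difference (pooled two-sample t).
se = pooled_sd * np.sqrt(1 / len(intervention) + 1 / len(control))
df = len(intervention) + len(control) - 2
margin = stats.t.ppf(0.975, df) * se

print(f"Cohen's d = {cohens_d:.2f}, p = {p_value:.2f}")
print(f"95% CI for the difference: ({diff - margin:.1f}, {diff + margin:.1f}) minutes")
# With samples this small, the interval will typically span zero: the
# observed effect size is real arithmetic, but the true effect may be nil.
```

The point of the sketch: until the interval excludes zero, the "effect size" describes the sample, not necessarily any real effect.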

Placing a High Value on Child’s Play

Clearly, these folks think child’s play is important stuff. We agree on that point.

But placing a high value on something doesn’t require puffing up the data. In fact, puffing up the data has quite the opposite effect. Speculating about the clinical significance of a statistically insignificant result is pointless. It merely draws attention to the fact that the data are not as strong as the researchers would like.

It undermines credibility.

Click here for the 2017 publication and here for the 2013 publication.

Children Playing, photograph © Thomas Hawk / flickr


September 19, 2017

4 Responses to “Child’s Play: Mixing Values with Data”

  1. September 19, 2017 at 3:31 pm, Susan Burke March said:

    Whew, Ted! Never let the facts get in the way of a good story! (Again!) Thanks for clearly identifying the problems with over-massaging the data. You illustrate clearly how the researchers want to have it both ways.

    • September 19, 2017 at 4:51 pm, Ted said:

      Thanks, Susan! I had a lot of help from some people much smarter than I am.

  2. September 30, 2017 at 3:30 am, Shirley Wyver said:

    When we reported our results in The Journal of School Health, our intention was not to gloss over a result that was not significant. We made it quite clear in the abstract, the most public and likely the most read part of the paper (or perhaps the most publicly available part of the paper) that there was not a statistically significant change in time spent in play. Both effect size and statistical significance are important. When writing the paper, we aimed to be as transparent as possible with our results and hence we reported results that may be useful for practitioners, social science researchers, health researchers, researchers conducting meta-analyses, and researchers conducting systematic reviews. Working across disciplines, we are aware that some researchers/practitioners favour statistical significance, but others prefer to also see the effect size.

    Our reporting in 2017 was planned at the beginning of the project and is described in our 2011 protocol paper (https://bmcpublichealth.biomedcentral.com/articles/10.1186/1471-2458-11-680). It was not additional data to embellish our 2013 results. We also reported results that were not significant in our 2017 paper. We stand by the concern we raised in 2013 that use of accelerometry as the sole measure of physical activity may have underestimated the effect. We saw children engaging in activities such as lifting, squatting, and dragging, but we hadn’t anticipated these changes and therefore did not include an appropriate measure in our protocol. We want to alert anyone planning research in this area to include additional measures of physical activity if they can. In other words, we were trying to help others improve on what we have done. We were not trying to make unsubstantiated claims about the results.

    Lastly, we have to challenge the statement: “Especially when a researcher believes that his program must have a big effect”. Two of the thirteen authors can claim ‘his’ as their pronoun, but regardless of gender, we know that big effect sizes are unusual. We didn’t expect or claim a big effect size anywhere in the paper. We described the effect as small.

    We acknowledge that there are different views about the best way to report and interpret results. Criticisms should be made when researchers attempt to misinform readers by hiding or obscuring results. The Sydney Playground Project team will continue to be transparent in the reporting of our results, using practices such as publishing our protocols before we have results and making all results public, regardless of significance.

    • September 30, 2017 at 1:26 pm, Ted said:

      Thanks for your thoughtful reply. I’m glad we agree on the importance of clarity in reporting. Perhaps the main point where we see this differently is in the effect size of a statistically insignificant result. If it were proven to be real, it might be clinically significant. But because your finding was statistically insignificant, it might be zero.

      You can’t have an effect size if you don’t have an effect.