Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials. Martin, J., Taljaard, M., Girling, A., & Hemming, K. BMJ Open, 6(2):e010166, February 2016.
BACKGROUND: Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. METHODS: We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. RESULTS: We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5-6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. DISCUSSION: The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs.
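As an aside on the abstract's point that sample size calculations must allow for clustering: a common starting point (not taken from this paper) is the design-effect inflation used for parallel cluster trials. The short Python sketch below is illustrative only; the function names and example values are assumptions, and a full stepped-wedge calculation would additionally allow for time effects and, in cohort designs, repeated measures.

import math

def design_effect(cluster_size, icc):
    # Standard design effect for clustering: 1 + (m - 1) * ICC,
    # with mean cluster size m and intracluster correlation coefficient ICC.
    return 1.0 + (cluster_size - 1.0) * icc

def inflate_for_clustering(n_individual, cluster_size, icc):
    # Inflate an individually randomised sample size to allow for clustering.
    # Note: this ignores the time effects and repeated measures that a full
    # stepped-wedge calculation would also need to account for.
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# Hypothetical example values (not from the paper): 400 participants needed
# under individual randomisation, mean cluster size 20, ICC = 0.05.
print(design_effect(20, 0.05))                # 1.95
print(inflate_for_clustering(400, 20, 0.05))  # 780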
@article{martin2016systematic,
 title = {Systematic review finds major deficiencies in sample size methodology and reporting for stepped-wedge cluster randomised trials},
 year = {2016},
 keywords = {CONSORT,cluster,randomised trial},
 pages = {e010166},
 volume = {6},
 month = {2},
 day = {4},
 abstract = {BACKGROUND: Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. METHODS: We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. RESULTS: We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5-6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. DISCUSSION: The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs.},
 author = {Martin, J and Taljaard, M and Girling, A and Hemming, K},
 journal = {BMJ Open},
 number = {2}
}
