The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed


Research BMJ 2010;340 doi: https://doi.org/10.1136/bmj.c723 (Published 24 March 2010) Cite this as: BMJ 2010;340:c723


  1. Sally Hopewell, senior research fellow1,
  2. Susan Dutton, senior medical statistician1,
  3. Ly-Mee Yu, senior medical statistician1,
  4. An-Wen Chan, assistant professor2,
  5. Douglas G Altman, director1
  6. 1Centre for Statistics in Medicine, University of Oxford, Linton Road, Oxford OX2 6UD
  7. 2Women’s College Research Institute, Department of Medicine, University of Toronto, ON M5S 1B2, Canada
  8. Correspondence to: S Hopewell sally.hopewell{at}csm.ox.ac.uk

Abstract

Objectives To examine the reporting characteristics and methodological details of randomised trials indexed in PubMed in 2000 and 2006 and assess whether the quality of reporting has improved after publication of the Consolidated Standards of Reporting Trials (CONSORT) Statement in 2001.

Design Comparison of two cross sectional investigations.

Study sample All primary reports of randomised trials indexed in PubMed in December 2000 (n=519) and December 2006 (n=616), including parallel group, crossover, cluster, factorial, and split body study designs.

Main outcome measures The proportion of general and methodological items reported, stratified by year and study design. Risk ratios with 95% confidence intervals were calculated to represent changes in reporting between 2000 and 2006.

Results The majority of trials were two arm (379/519 (73%) in 2000 v 468/616 (76%) in 2006) parallel group studies (383/519 (74%) v 477/616 (78%)) published in specialty journals (482/519 (93%) v 555/616 (90%)). In both 2000 and 2006, parallel group trials recruited a median of 80 participants. The proportion of articles that reported drug trials decreased between 2000 and 2006 (from 393/519 (76%) to 356/616 (58%)), whereas the proportion of surgery trials increased (51/519 (10%) v 128/616 (21%)). There was an increase between 2000 and 2006 in the proportion of trial reports that included details of the primary outcome (risk ratio (RR) 1.18, 95% CI 1.04 to 1.33), sample size calculation (RR 1.66, 95% CI 1.40 to 1.95), and the methods of random sequence generation (RR 1.62, 95% CI 1.32 to 1.97) and allocation concealment (RR 1.40, 95% CI 1.11 to 1.76). There was no difference in the proportion of trials that provided specific details on who was blinded (RR 0.91, 95% CI 0.75 to 1.10).

Conclusions Reporting of several important aspects of trial methods improved between 2000 and 2006; however, the quality of reporting remains well below an acceptable level. Without complete and transparent reporting of how a trial was designed and conducted, it is difficult for readers to assess its conduct and validity.

Introduction

The Consolidated Standards of Reporting Trials (CONSORT) Statement was first published in 1996 and revised in 2001.1 2 It provides recommendations for authors on how to prepare reports of trial findings. The aim of this 22 item checklist is to facilitate complete and transparent reporting of trial findings and to aid their critical appraisal and interpretation. The statement has been endorsed by the World Association of Medical Editors, the International Committee of Medical Journal Editors, the Council of Science Editors, and a large number of journals worldwide.3

More than twelve years have now passed since the CONSORT Statement was first published and more than eight years have passed since the statement was last revised. A study of randomised trials published in December 2000, before publication of the revised CONSORT Statement in 2001, showed that many items on the CONSORT checklist—such as details of sample size calculations, primary outcomes, random sequence generation, allocation concealment, and handling of attrition—were inadequately described in more than half of the publications assessed.4

With further revisions of the CONSORT Statement expected in 2010, it is timely to assess whether the recommendations have led to improvements in the quality of reports of randomised trials. In part one of this study, we examine the extent to which reports of randomised trials published in 2006 incorporate recommendations from the CONSORT Statement. In part two, we assess whether the quality of reports of randomised trials has improved since a similar assessment was conducted in 2000,4 before the publication of the revised CONSORT Statement in 2001.2

Methods

Study sample

We used the Cochrane highly sensitive search strategy (phase 1)5 to identify primary reports of randomised trials published in December 2006 and indexed in PubMed by 22 March 2007. This search approach was developed specifically to retrieve reports of controlled trials from Medline by using PubMed.5 One reviewer screened the titles and abstracts of all retrieved reports to exclude obviously ineligible trials. A copy of the full article was then obtained for all remaining reports, and each full article was assessed by the same reviewer to determine whether it met the inclusion criteria. Any additional material about the trial included as an appendix on the journal website was also obtained, if available.

We defined a randomised trial as a prospective study that assessed healthcare interventions in human participants who were randomly allocated to study groups. Studies of cost effectiveness or the accuracy of diagnostic tests, and non-English language reports, were excluded. Studies of parallel group, crossover, cluster, factorial, and split body design were included. We defined parallel group trials as studies where each participant is randomised to one of the intervention arms. Crossover trials were defined as trials where each participant was exposed to each intervention in a random sequence. Cluster trials were defined as trials in which clusters of individuals (for example, clinics and schools) were randomly allocated to different study arms. Factorial trials were defined as trials where participants were randomly assigned to individual interventions or a combination of interventions, and split body trials were defined as those where separate body parts within each participant (for example, eyes) were randomised. These definitions mirror those used in the original survey by Chan and Altman, which assessed 519 randomised trials published in December 2000 and indexed in PubMed by July 2002.4 Their cohort, however, also included one trial report published in French.

Data extraction

We extracted details on a number of general and methodological items from each of the included articles. General items were trial design, journal type and medical specialty, type of intervention, number of data collection sites, number of randomised groups, and sample size. Methodological items were use of the term “randomised” in the study title and specification of the primary outcome, sample size calculation, method of random sequence generation and allocation concealment, who was blinded, and how blinding was achieved. Publication of a participant flow diagram, details of loss to follow-up in each arm of the trial, whether the trial stated that an intention to treat analysis had been carried out, details of trial registration, access to the trial protocol, and funding sources were also analysed (box). We also assessed whether a trial report was published in a journal that endorses the CONSORT Statement on the basis of the journals’ instructions to authors (accessed June 2008).

Definitions used to assess reporting of methodological details in publications of randomised trials

Primary outcome

Explicit definition of primary or main outcome(s) or an outcome used in the sample size calculation, or explicit description of a main outcome in the primary study objectives

Sample size

Sample size calculation stated to have been undertaken

Random sequence generation

Description of method for allocating participants to study groups, including computer generated sequences, random number tables, and coin tosses

Allocation concealment

Description of method used to prevent individual participants from knowing or predicting the allocation sequence in advance, including central randomisation or sealed envelopes

Blinding—who was blinded

Study participants, outcome assessors, care providers, or investigators with no knowledge of the participants’ group allocation stated to be blinded; or the trial stated to be blinded, single blind, double blind, or triple blind

Blinding—how blinding was achieved

Similarities between interventions or procedures described, or the trial stated to be placebo controlled or unblinded

Attrition—loss to follow-up for each group

Losses to follow-up, with reasons, enumerated for all study groups

Intention to treat analysis

Randomised participants with available data stated as having been assessed in an intention to treat analysis

Data extraction was carried out by three reviewers. To ensure consistency in the data extraction process, the three reviewers first completed data extraction for ten articles and any disparities in the data obtained were discussed. Differences arose primarily from differing interpretations of the data extraction form; the form was therefore modified and the exercise repeated with a further ten articles until agreement was reached. Once agreement on the interpretation of the data extraction form was reached, the three reviewers carried out single data extraction on all remaining articles. If there was uncertainty regarding a particular article, the data obtained were checked by a second member of the review team and any disagreement was resolved by discussion.

Data analysis

Descriptive summary statistics were calculated for each of the general and methodological items specified, and outcomes were stratified by study design. The descriptive summary statistics were then used to compare the quality of reporting for randomised trials published in December 2006 with the quality of reporting in trials published in December 2000 (that is, in those trials assessed by the original survey by Chan and Altman4). Stata (version 10) was used to calculate risk ratios (RR) and absolute differences (AD) with 95% confidence intervals (CI) to quantify changes in reporting between 2000 and 2006. Finally, for articles published in 2006 we also compared the quality of reporting for randomised trials published in journals endorsing the CONSORT Statement with the quality in journals that did not, recognising that there is a time lag between when the journal articles were published and when the journals' instructions to authors were accessed (June 2008).
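The risk ratios and absolute differences reported here follow standard large-sample formulas (a Wald interval on the log scale for the RR, and on the proportion scale for the AD). As an illustration only (the authors used Stata, not this code), the sketch below reproduces the published comparison for reporting of the primary outcome; the 2006 count (324/616) is given in the Results, while the 2000 count of 232/519 is an assumed figure chosen to be consistent with the published estimates.

```python
import math

def ratio_and_difference(a, n1, b, n2, z=1.96):
    """Risk ratio and absolute risk difference of group 1 v group 2,
    each with a large-sample 95% confidence interval."""
    p1, p2 = a / n1, b / n2
    # Risk ratio: Wald confidence interval on the log scale
    rr = p1 / p2
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    rr_ci = (math.exp(math.log(rr) - z * se_log_rr),
             math.exp(math.log(rr) + z * se_log_rr))
    # Absolute difference: Wald confidence interval on the proportion scale
    ad = p1 - p2
    se_ad = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    ad_ci = (ad - z * se_ad, ad + z * se_ad)
    return rr, rr_ci, ad, ad_ci

# Primary outcome reported: 324/616 in 2006; 232/519 is an assumed 2000 count
rr, rr_ci, ad, ad_ci = ratio_and_difference(324, 616, 232, 519)
# rr ≈ 1.18 (95% CI 1.04 to 1.33); ad ≈ 8% (95% CI 2% to 14%)
```

Under this assumed 2000 count, the function returns the values reported in the abstract (RR 1.18, 95% CI 1.04 to 1.33; AD 8%, 95% CI 2% to 14%), which is a useful check that the intervals are interpreted on the intended scales.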

Results

Survey of randomised trials indexed in PubMed in December 2006

The Cochrane highly sensitive search strategy identified 1735 citations potentially relating to reports of randomised trials. After screening the titles and abstracts of all retrieved citations, we reviewed 879 full text articles and identified 616 primary reports of randomised trials for inclusion in our final cohort (fig 1). General and methodological characteristics were stratified by trial design. In 2006, 316 different journals published primary trial reports, with 25 journals publishing five or more randomised trials in the single month studied. The majority of trial reports were published in specialty journals (555/616 (90%)). The highest number of reports appeared in the American Journal of Cardiology (n=9), Anesthesia & Analgesia (n=9), Journal of the American College of Cardiology (n=9), New England Journal of Medicine (n=8), Journal of Clinical Oncology (n=7), and Journal of Infectious Diseases (n=7). All these journals are monthly apart from the New England Journal of Medicine, which is weekly.

Table 1 provides information on general trial characteristics. Over three quarters (477/616 (78%)) of reports were of parallel group trials, 16% (100/616) were crossover trials, and the remaining 6% (39/616) were classified as “other”—more specifically, cluster randomised (13/39), factorial (10/39), or split body (16/39) trials. More than half (356/616 (58%)) of the trials investigated drugs as the primary intervention of interest, whereas 21% (128/616) assessed surgical or procedural interventions, 18% (113/616) assessed counselling or lifestyle interventions, and 3% (19/616) assessed equipment or devices. Forty per cent (248/616) of reports explicitly stated that the trial was conducted at a single centre and 28% (172/616) stated that the trial took place at multiple centres; the number of study centres was not explicitly defined in the remaining reports (196/616 (32%)). The median number of participants recruited per trial was 62 (10th to 90th percentile 19 to 392). As expected, parallel group trials were larger (median 80, 10th to 90th percentile 28 to 418) than crossover trials (median 20, 10th to 90th percentile 9 to 61).

Table 1

General characteristics of randomised trials indexed in PubMed in December 2006

Table 2 provides information on the reporting of methodological items. A third (205/616 (33%)) of all trial reports stated that the study was randomised in the study title, 53% (324/616) defined the primary outcome, and 45% (279/616) stated that a sample size calculation had been undertaken. The method used to generate the random sequence for allocating participants to study groups was reported in just over a third (209/616 (34%)) of trial reports, and a quarter (156/616 (25%)) reported the method used to prevent the person enrolling participants from knowing or predicting the allocation sequence.

Table 2

Reporting of methodological characteristics for randomised trials indexed in PubMed in December 2006

Fifty-nine per cent (362/616) of publications reported details of any blinding. Of these, 44% (160/362) provided specific details on who was blinded after assignment to interventions (for example, study participants, outcome assessors, care providers), whereas the remaining 56% (202/362) simply used the terms “blinded,” “single blind,” or “double blind” without providing further details. In addition to reporting who was blinded, 41% (254/616) of trial reports provided information on how blinding was achieved; of these, 45% (113/254) specifically described any similarities between the interventions or procedures (for example, identical in size, colour, and taste), whereas the remaining 55% (141/254) simply used the term “placebo” without providing further details.

A participant flow diagram depicting, for each group, the number of participants who were randomly assigned, received the intended treatment, and were analysed for the primary outcome was included in 28% (172/616) of study reports. Sixty-nine per cent (422/616) of trials reported details of any loss to follow-up for each study group; of these, 86% (362/422) explicitly stated the reasons for attrition or reported that there was no loss to follow-up. A third (188/616 (31%)) of trial reports stated that an intention to treat analysis had been carried out. Very few trial reports provided details of trial registration (58/616 (9%)) or where the original trial protocol could be accessed (9/616 (1%)). Details of funding sources were provided in 62% (380/616) of trial reports: 17% (107/616) of trials were funded solely by industry, 11% (70/616) were part funded by industry, and 32% (196/616) were not industry funded. Overall, the quality of reporting was generally poorer for crossover trials than for parallel group trials (table 2).

Comparison of randomised trials indexed in PubMed in 2000 with those indexed in 2006

In both 2000 and 2006, the majority of trials involved two study arms (379/519 (73%) in 2000 v 468/616 (76%) in 2006), had parallel group design (383/519 (74%) v 477/616 (78%)), with a median of 80 participants per trial, and were published in specialty journals (482/519 (93%) v 555/616 (90%)). The proportion of articles that reported drug trials decreased between 2000 and 2006 (from 393/519 (76%) to 356/616 (58%)), whereas the proportion of surgical trials increased (from 51/519 (10%) to 128/616 (21%); table 3).

Table 3

Reporting of general characteristics for randomised trials indexed in PubMed in 20004 compared with those indexed in 2006

We identified an increase between 2000 and 2006 in the proportion of trial reports that included details of the primary outcome (RR 1.18, 95% CI 1.04 to 1.33; AD 8%, 95% CI 2% to 14%), sample size calculation (RR 1.66, 95% CI 1.40 to 1.95; AD 18%, 95% CI 12% to 23%), and the methods of random sequence generation (RR 1.62, 95% CI 1.32 to 1.97; AD 13%, 95% CI 8% to 18%) and allocation concealment (RR 1.40, 95% CI 1.11 to 1.76; AD 7%, 95% CI 2% to 12%). There was no difference in the proportion of trials that provided specific details on who was blinded after assignment to interventions (RR 0.91, 95% CI 0.75 to 1.10; AD −3%, 95% CI −8% to 3%; fig 2).

Comparison of journals endorsing the CONSORT Statement with non-endorsing journals in 2006

We also compared the quality of reporting for randomised trials indexed in journals that endorse the CONSORT Statement with those in non-endorsing journals for reports published in December 2006. We identified a significantly higher rate of reporting of key methodological items in CONSORT endorsing journals (fig 3). Notably, the proportion of trial reports that provided specific details on who was blinded after assignment to interventions was higher in CONSORT endorsing journals (RR 1.53, 95% CI 1.17 to 1.99; AD 11%, 95% CI 4% to 18%), whereas there was no clear difference for reports that simply used the terms “blinded,” “single blind,” or “double blind” (RR 0.93, 95% CI 0.74 to 1.16; AD −3%, 95% CI −10% to 5%).

Discussion

Principal findings of study

Our study provides a comprehensive assessment of the design and reporting characteristics of a cohort of PubMed indexed randomised trials published in December 2006. It also provides an important comparison with a similar cohort of randomised trials published in December 2000 that allows us to assess whether the quality of reporting has improved over time and, importantly, following the publication of the revised CONSORT Statement in 2001 and before its revision, which will be published in early 2010 (BMJ in press). To our knowledge, our study and that by Chan and Altman4 are the only studies to provide a comprehensive assessment of randomised trials that covers a specified time period and is not restricted by disease or journal type. Our sample is representative of trials accessible in PubMed, which is the primary information resource for clinicians and researchers.

Interestingly, we identified an increase in the number of reports of randomised trials in 2006 (n=616) compared with 2000 (n=519). This difference may reflect a true increase in the number of randomised trials being published or could reflect an increase in the number of journals indexed in PubMed.

The majority of randomised trials reported in 2006 were two arm parallel group trials published in specialty journals. The proportion of surgical trials in 2006 was higher than in 2000, whereas the proportion of drug trials decreased—a trend that has also been reported elsewhere.6 The relatively small sample sizes seen in our cohort are worrying, and may indicate that many trials are inadequately powered to detect clinically important treatment benefits. It is important to note that we only assessed reports of randomised trials included in the PubMed database; therefore, our findings might not be representative of all published reports of trials.

Despite clear evidence of improved reporting of several important aspects of trial methods, the quality of reporting remains well below an acceptable level. In 2006, more than half of all trial reports defined the primary outcome and provided information on blinding, loss to follow-up, and funding source; however, just under half of all reports detailed sample size calculations. Only a third or less provided information on the method of random sequence generation or allocation concealment, or whether an intention to treat analysis had been carried out. Reporting of trial registration details and access to the trial protocol was rare. In our study, we did not assess whether an intention to treat analysis had been carried out correctly, simply that the term was stated in the trial report; however, several studies have shown that this terminology is often used inappropriately.7 8 9

Comparison with other studies

Recent studies that have assessed the quality of reporting of randomised trials have generally been restricted to trials in specific disease areas10 11 12 13 14 15 or journal types.16 17 18 19 These studies show that considerable variability may exist in the reporting of some aspects of randomised trials, depending on the area assessed. For example, a systematic review of 122 surgical trials published between 2000 and 2003 found that no reports detailed how the randomisation process was implemented, only 24% reported details of the primary outcome, and 26% reported details of blinding.10 A separate study of 67 ophthalmology trials published in 2005 found that details of sequence generation, allocation concealment, participant flow diagrams, and sample size calculations were reported for less than half of the randomised trials assessed.11

The situation may be worse in specific journals. For example, a study of 7422 randomised trials published in Chinese medical journals found that important methodological components such as sample size calculation, randomisation sequence, and allocation concealment were documented in only a small minority of trial reports (1%, 8%, and 0.3%, respectively).16 More encouraging is a study of 253 randomised trials published in five leading medical journals between 2002 and 2003, which found that 80% of trials reported sequence generation, 48% reported allocation concealment, 55% reported the blinding status of study participants, 83% reported the sample size justification, and 86% reported a participant flow diagram.17 These findings may well be an indication of more rigorous reporting requirements in these journals.

There is considerable evidence to show that reports of randomised trials that have inadequate or unclear documentation of sequence generation, allocation concealment, and blinding lead to biased estimates of intervention effects. A meta-epidemiological study of 146 meta-analyses found that in trials with subjective outcomes, effect estimates were exaggerated when allocation concealment was inadequate or unclear (ratio of odds ratios 0.69, 95% CI 0.59 to 0.82) or blinding had not taken place (ratio of odds ratios 0.75, 95% CI 0.61 to 0.93).20 Another study found a trend towards larger estimates of treatment effects in publications with inadequate or unclear reporting of random sequence generation compared with those adequately reporting methods (ratio of odds ratios 0.81, 95% CI 0.60 to 1.09).21 Further details on the rationale and importance of these items can be found in the CONSORT explanation and elaboration paper.22

When assessing the reporting of key methodological items in CONSORT endorsing as opposed to non-endorsing journals, we identified significantly better reporting in CONSORT endorsing journals. These results should be viewed with a certain amount of caution, however, because there is a time lag in our study between when journal articles were published and when the journal instructions to authors were accessed (June 2008). Nevertheless, these findings mirror those of a recent systematic review, which concluded that journal adoption of the CONSORT Statement is associated with improved reporting of randomised trials, although poor reporting is common.23 Our study provides important baseline data against which future revisions of the CONSORT Statement can be measured.

Limitations of study

Our study has several limitations. We carried out only single data extraction, and although we did our best to minimise any inconsistency in the reviewers’ interpretation, some errors may have occurred. In addition, data extraction in 2000 and in 2006 was carried out by different teams of reviewers; however, all reviewers conferred to try to ensure consistency in the interpretation of data extraction items.

We also used slightly different sampling techniques between 2000 and 2006. The 2000 sample included all reports of randomised trials published in December 2000 and indexed in PubMed by July 2002 (to account for the time lag in PubMed indexing). For pragmatic reasons, the 2006 sample included trials published in December 2006 and indexed in PubMed by March 2007. This approach meant that we could capture our sample of trials within a single search, but we may have missed a small number of eligible trials that were indexed in PubMed after March 2007.

Finally, it is important to recognise that poor reporting does not necessarily mean poor methods. Several studies24 25 that compared published reports with their protocols found that the methods of the actual trial were of better quality than the final publication reported. Conversely, other studies26 27 have shown that important information about trial methodology is often missing from the protocol but included in the published report. Such studies recommend reviewing research protocols and contacting trialists for more information when assessing trial quality; we did not do this in our study.

Conclusions

Without complete and transparent reporting of how a trial was designed and conducted, it is difficult for readers to assess its conduct and validity or to differentiate trials with unbiased results from those with questionable results.22 We believe that the CONSORT Statement, and its extensions, is a key tool through which adequate reporting can be achieved. More journals should endorse the CONSORT Statement, and, most importantly, they should do more to ensure adherence. For example, journals could incorporate the checklist and flow diagram into their review processes and indicate this requirement in their published instructions to authors.28 Without wide endorsement, the CONSORT Statement cannot fully yield the benefits it was intended to produce.

What is already known about this topic

What this study adds

Notes

Cite this as: BMJ 2010;340:c723

Footnotes

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.

References


  1. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA 1996;276:637-9.

  2. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Ann Intern Med 2001;134:657-62.

  3. Altman DG. Endorsement of the CONSORT statement by high impact medical journals: survey of instructions for authors. BMJ 2005;330:1056-7.

  4. Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet 2005;365:1159-62.

  5. Robinson KA, Dickersin K. Development of a highly sensitive search strategy for the retrieval of reports of controlled trials using PubMed. Int J Epidemiol 2002;31:150-3.

  6. Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med 2008;148:295-309.

  7. Hollis S, Campbell F. What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ 1999;319:670-4.

  8. Kruse RL, Alper BS, Reust C, Stevermer JJ, Shannon S, Williams RH. Intention-to-treat analysis: who is in? Who is out? J Fam Pract 2002;51:969-71.

  9. Gravel J, Opatrny L, Shapiro S. The intention-to-treat approach in randomized controlled trials: are authors saying what they do and doing what they say? Clin Trials 2007;4:350-6.

  10. Agha R, Cooper D, Muir G. The reporting quality of randomised controlled trials in surgery: a systematic review. Int J Surg 2007;5:413-22.

  11. Lai TY, Wong VW, Lam RF, Cheng AC, Lam DS, Leung GM. Quality of reporting of key methodological items of randomized controlled trials in clinical ophthalmic journals. Ophthalmic Epidemiol 2007;14:390-8.

  12. Gagnier JJ, DeMelo J, Boon H, Rochon P, Bombardier C. Quality of reporting of randomized controlled trials of herbal medicine interventions. Am J Med 2006;119:800-11.

  13. Rios LP, Odueyungbo A, Moitri MO, Rahman MO, Thabane L. Quality of reporting of randomized controlled trials in general endocrinology literature. J Clin Endocrinol Metab 2008;93:3810-6.

  14. Anttila H, Malmivaara A, Kunz R, Autti-Ramo I, Makela M. Quality of reporting of randomized, controlled trials in cerebral palsy. Pediatrics 2006;117:2222-30.

  15. Scales CD Jr, Norris RD, Preminger GM, Vieweg J, Peterson BL, Dahm P. Evaluating the evidence: statistical methods in randomized controlled trials in the urological literature. J Urol 2008;180:1463-7.

  16. Wang G, Mao B, Xiong ZY, Fan T, Chen XD, Wang L, et al. The quality of reporting of randomized controlled trials of traditional Chinese medicine: a survey of 13 randomly selected journals from mainland China. Clin Ther 2007;29:1456-67.

  17. Mills EJ, Wu P, Gagnier J, Devereaux PJ. The quality of randomized trial reporting in leading medical journals since the revised CONSORT statement. Contemp Clin Trials 2005;26:480-7.

  18. Pienaar ED, Volmink J, Zwarenstein M, Swingler GH. Randomised trials in the South African Medical Journal, 1948-1997. S Afr Med J 2002;92:901-3.

  19. Greenfield ML, Mhyre JM, Mashour GA, Blum JM, Yen EC, Rosenberg AL. Improvement in the quality of randomized controlled trials among general anesthesiology journals 2000 to 2006: a 6-year follow-up. Anesth Analg 2009;108:1916-21.

  20. Wood L, Egger M, Gluud LL, Schulz KF, Juni P, Altman DG, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 2008;336:601-5.

  21. Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ 2001;323:42-6.

  22. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001;134:663-94.

  23. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust 2006;185:263-7.

  24. Devereaux PJ, Choi PT, El-Dika S, Bhandari M, Montori VM, Schunemann HJ, et al. An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. J Clin Epidemiol 2004;57:1232-6.

  25. Soares HP, Daniels S, Kumar A, Clarke M, Scott C, Swann S, et al. Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group. BMJ 2004;328:22-4.

  26. Chan AW, Hrobjartsson A, Jorgensen KJ, Gotzsche PC, Altman DG. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols. BMJ 2008;337:a2299.

  27. Pildal J, Chan AW, Hrobjartsson A, Forfang E, Altman DG, Gotzsche PC. Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study. BMJ 2005;330:1049.

  28. Hopewell S, Altman DG, Moher D, Schulz KF. Endorsement of the CONSORT Statement by high impact factor medical journals: a survey of journal editors and journal ‘Instructions to Authors’. Trials 2008;9:20.
