Bad trials, ones where we have little confidence in the results, are not just common; they represent the majority of trials across all clinical areas in all countries. Over half of all trial participants will be in one. Our estimates suggest that the money spent on these bad trials would fund the UK’s largest public funder of trials for anything between a decade and a century. That is a wide range, but either way it is a lot of money. Had our random selection produced a different set of reviews, or had we assessed all those published in the last one, five, ten, or 20 years, we have no reason to believe that the headline result would have been different. Put simply, most randomised trials are bad.
Despite this, we think our measure of bad is actually conservative because we have considered only risk of bias. We have not attempted to judge whether trials asked important research questions, whether they involved the right participants, or whether their outcomes were important to decision-makers such as patients and health professionals, nor have we attempted to comment on the many other decisions that affect the usefulness of a trial16,17. In short, the picture our numbers paint is undoubtedly gloomy, but the reality is probably worse.
Five recommendations for change
Plenty of ideas have been suggested about what must change1,3-8, but we propose just five here: the scale of the problem is so great that a clear focus may help avoid being overwhelmed into inaction. We think these five recommendations, if implemented, would reduce the number of bad trials, and could do so quite quickly.
Recommendation 1: do not fund a trial unless the trial team contains methodological and statistical expertise
Doing trials is a team sport. These teams need experienced methodologists and statisticians. It’s hard to imagine doing, say, bowel surgery without involving people who have been trained in, and know how to do, bowel surgery. Sadly, the same does not seem to be true for trial design and statistical analysis of trial data. Our colleague Darren Dahly, a trial statistician, neatly captured the problem in a series of ironic tweets sent at the end of 2020:
These raise a smile but make a very serious point: we would not tolerate statisticians doing surgery, so why do we tolerate the reverse? Clearly this is not about surgeons; it is about not having the expertise needed to do the job properly.
Recommendation 2: do not give ethical approval for a trial unless the trial team contains methodological and statistical expertise
As for Recommendation 1, but for ethical approval. No patient or member of the public should be in a bad trial, and ethics committees, like funders, have a duty to stop this happening. Indeed, we think public and patient contributors on ethics committees should routinely ask the question ‘Who is the statistician and who is the methodologist?’ and, if the answer is unsatisfactory, ethical approval should not be awarded until a name can be put against these roles.
Recommendation 3: use a risk of bias tool at trial design
This is the simplest of our recommendations. Risk of bias tools were developed to support the interpretation of trial results in systematic reviews. However, as Yordanov and colleagues wrote in 20155, by the time a trial reaches a systematic review the horse has bolted and nothing can be changed. Applying a risk of bias tool at the design stage, interpreting the results correctly, and making any necessary changes to the trial would help to avoid some of the problems we highlight. Funders could ask to see the completed risk of bias tool, as could ethics committees. No trial should be at high risk of bias.
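To make this concrete, here is a minimal sketch, ours rather than any published tool’s, of what a design-stage check might look like. The domain names follow Cochrane’s RoB 2 tool for randomised trials, but the overall rule used here (high risk if any domain is high, some concerns if any domain raises concerns) is a simplified reading of that tool’s guidance, and the draft protocol shown is hypothetical.

    from enum import Enum

    class Judgement(Enum):
        LOW = "low risk"
        SOME_CONCERNS = "some concerns"
        HIGH = "high risk"

    # The five domains of Cochrane's RoB 2 tool for randomised trials.
    DOMAINS = [
        "randomisation process",
        "deviations from intended interventions",
        "missing outcome data",
        "measurement of the outcome",
        "selection of the reported result",
    ]

    def overall_risk(judgements: dict[str, Judgement]) -> Judgement:
        """Simplified overall judgement: high if any domain is high,
        some concerns if any domain raises concerns, otherwise low.
        (RoB 2 guidance also allows an overall 'high' when several
        domains raise concerns; that nuance is omitted here.)"""
        values = [judgements[d] for d in DOMAINS]
        if Judgement.HIGH in values:
            return Judgement.HIGH
        if Judgement.SOME_CONCERNS in values:
            return Judgement.SOME_CONCERNS
        return Judgement.LOW

    # A hypothetical design-stage review of a draft protocol.
    draft = {
        "randomisation process": Judgement.LOW,
        "deviations from intended interventions": Judgement.SOME_CONCERNS,
        "missing outcome data": Judgement.LOW,
        "measurement of the outcome": Judgement.HIGH,  # unblinded, subjective outcome
        "selection of the reported result": Judgement.LOW,
    }

    if overall_risk(draft) == Judgement.HIGH:
        # Recommendation 3: fix the design before seeking funding or ethics approval.
        flagged = [d for d, j in draft.items() if j == Judgement.HIGH]
        print("Revise protocol; high-risk domains:", ", ".join(flagged))

The code is trivial, and that is the point: the same domains a systematic reviewer will later judge a trial on can be checked, and acted on, before a single participant is recruited.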
Recommendation 4: train and support more methodologists and statisticians
Recommendations 1, 2 and 3 all lead to a need for more methodologists and statisticians. This has a cost, but it would probably be much less than the money currently wasted on bad trials (see Recommendation 5).
Recommendation 5: put more money into applied methodology research and supporting infrastructure
Methodology research currently runs mostly on love, not money. This seems odd when over 60% of trials are so methodologically flawed that we cannot believe their results, and we are uncertain whether we should believe the results of a further 30%.
In 2015, David Moher and Doug Altman proposed that 0.1% of funders’ and publishers’ budgets be set aside for initiatives to reduce waste and improve the quality, and thus value, of research publications6. That proposal was for publications, but the same could be done for trials, although we would suggest a figure closer to 10% of funders’ budgets, a hundredfold increase. All organisations that fund trials should also be funding applied work to improve trial methodology, including supporting the training of more methodologists and statisticians. There should also be funding mechanisms to ensure that methodology knowledge is effectively disseminated and implemented. Dissemination is a particular problem, and the UK’s only dedicated methodology funder, the Medical Research Council-NIHR ‘Better Methods, Better Research’ Panel, acknowledges this in its Programme Aims17.
Money will always be tight. Our work, and that of others before us1,3-8, makes clear that a large amount of the money we put into trials globally is being wasted. Some of that money could be repurposed to fund our five recommendations. This might lead to fewer trials overall, but it would generate more good trials and mean that a greater proportion of trial data is of the high quality needed to support and improve patient and public health.