The underlying hypothesis that changes made to ethics and governance processes because of the COVID-19 pandemic altered times to ethics approval and times to governance approval was supported. Approval times were shorter in the post-pandemic period than in the pre-pandemic period, and COVID-19 projects had shorter approval times than projects addressing other disease areas. The coherence of these observations makes changes to ethics and governance processes resulting from the pandemic by far the most likely explanation.
Differences in pre- and post-pandemic times to approval were broadly consistent for both ethics and governance approval processes across the states that contributed data. The variation that was present between states seems more likely to reflect the play of chance than a real difference: no state showed a statistically significant effect in the opposite direction to the overall findings, and in every state where timeframes appeared to deteriorate rather than improve, sample numbers were small and the reliability of the results was correspondingly low. There do, however, appear to be real differences between states in approval times for both ethics and governance processes. For example, ethics approval times for state 1 were short both pre- and post-pandemic (26 and 23 days, respectively), whereas they were much longer in state 5 (142 and 66 days, respectively). These differences are based on fairly large numbers of ethics approvals and are likely to reflect real differences in ethics processes. Likewise, the much shorter governance approval times for state 5 pre- and post-pandemic (17 and 7 days, respectively) compared with state 8 (58 and 41 days, respectively) are also very likely to reflect real differences in the processing of governance applications.
This study documented approval times to test the hypothesis that ethics and governance interventions implemented because of the COVID-19 pandemic had delivered changes. It did not seek to explore the reasons why the changes were achieved, and this should be the focus of subsequent work now that the hypothesis has been supported. It is, however, possible to suggest some reasons for the differences observed. For example, a prior systematic review identified that triaging applications by risk category can result in shorter approval times. The separation of projects into COVID-19 studies versus other studies is a form of triage and may therefore partly explain the reductions in approval times we observed. Anecdotal reports indicate that review committees prioritised COVID-19 projects to the head of the agenda, which would also be expected to reduce approval times for those studies. The same review also identified that using scope guidelines that define (and limit) the breadth of committee review can speed review, though we do not know whether this was done.12 Likewise, mutual acceptance of ethics approval provisions can greatly reduce ethics approval timeframes once the initial review at the first site has been completed. Past literature reviews show that jurisdictions seeking to reduce their ethics and governance approval times most commonly implement several such interventions, and disentangling the effects of each can be difficult.13 Some sites have recently reported use of “integrated models” in which trial design and ethics review proceed concurrently, with members of the ethics review panel attending and contributing to project planning meetings. This was in place in at least one state in Australia during the pandemic years.14
The differences between approval times pre- and post-pandemic were more marked for governance review (42 days versus 28 days) than for ethics review (46 days versus 42 days), although both showed large improvements when post-pandemic reviews of COVID-19 studies were compared with those of other study types. The inference is that the governance review processes implemented to speed approval post-pandemic benefited all projects, COVID-19 and others, while the ethics interventions that were implemented benefited the COVID-19 projects alone. The reasons for this are unclear and not easily explained by anecdotal reporting of more frequent ethics committee meetings, increased research funding for COVID-19 projects,6 or the adoption of the national mutual acceptance for ethics review by two further states during the pandemic period.15
The variation in approval processes observed in the meta-data collected for this project was also experienced directly by the researchers conducting it, as the project involved multiple states across Australia. Some states determined that the de-identified meta-data required could be released without an ethics review, while others required full ethics committee assessment. Some states were part of a national mutual acceptance of ethics review program, some were not, and some were but still required a secondary approval of their own. Governance processes were similarly varied: while rich data were obtained from some states within relatively short time frames, it proved very difficult to negotiate the requirements of others. Challenges in accessing data were most substantial in the states that lacked state-wide coordination of processes and had no centralised mechanism for submitting and tracking approvals.
Strengths and weaknesses
The large size of the overall data set provided good statistical power to minimise random errors and maximise the capacity to test the primary hypothesis. Numbers of approvals were smaller for individual states, which meant that there was more uncertainty about between-state comparisons, though the large differences in approval times observed between some states mean that they are unlikely all to have arisen by chance. The same is true for the comparisons of approval times between COVID-19 projects and other project types. The inability to collect all data for all states during the period of interest limits the generalisability of the findings to all Australian institutions, though the breadth of data available has provided considerable insight. The evidence of differences between states is of particular interest because it suggests that subsequent research might identify actions taken by one state, but not by another, that could explain the differences observed.
It is possible that the findings are confounded by incomplete data collection, which could mean that studies with systematically shorter or longer approval times were preferentially included or excluded. It is also possible that differences in the characteristics of sites or study types not captured in the dataset could explain the differences observed. For example, it may be that the phases of clinical trial projects differed between states and that early phase projects had different average timelines than later phase projects. Additionally, it is possible that the volume of applications processed changed considerably during the pandemic years, both in total and at individual sites. The requirement to anonymise the data will limit the direct interpretation of the data by the states of Australia, but we will work with each to better understand how the findings might support enhanced approval processes. Finally, this study does not examine the quality of the reviews undertaken and how this may have been affected by the faster times seen during the pandemic.