The majority of UKCRC-registered CTUs responded to the survey (43/50, 86%), including five that clarified that they do not run phase III CTIMP trials. CTUs that do not carry out 24-hour randomisation were less likely to respond to the survey (Table 1).
Table 1: A comparison of invited CTUs' response status and four key characteristics

| CTU characteristic | | Answered survey, N (%) | Did not answer survey, N (%) | Chi-square (p value) | Eligible for survey, N (%) |
|---|---|---|---|---|---|
| Registration status | Full | 39 (91) | 6 (86) | 0.2 (p = 0.7) | 36 (95) |
| | Provisional | 4 (9) | 1 (14) | | 2 (5) |
| Cancer trials | Yes | 30 (70) | 4 (57) | 0.4 (p = 0.5) | 28 (74) |
| | No | 13 (30) | 3 (43) | | 10 (26) |
| 24 hour randomisation | Yes | 38 (88) | 4 (57) | 4.4 (p = 0.04) | 33 (87) |
| | No | 5 (12) | 3 (43) | | 5 (13) |
| International trials | Yes | 29 (67) | 6 (86) | 1.0 (p = 0.3) | 25 (66) |
| | No | 14 (33) | 1 (14) | | 13 (34) |
In total, 38 CTUs completed at least one survey question; their characteristics are described in the last column of Table 1. The questionnaire took a median of 19 minutes to complete (interquartile range 8.5 to 64.0 minutes). As some CTUs did not answer every question, the actual number answering each question is given. Many CTUs considered their typical phase III randomised CTIMP trial to have 101-1000 patients and 11-49 sites (Table 2).
Table 2: Number of participants and sites for phase III randomised CTIMP trials run by included CTUs

| Number of sites | 1-100 patients | 101-1000 patients | 1001-2499 patients | 2500+ patients | No answer given | Total |
|---|---|---|---|---|---|---|
| 1-10 | 1 | 4 | 0 | 0 | 0 | 5 |
| 11-49 | 2 | 14 | 5 | 0 | 1 | 22 |
| 50+ | 0 | 5 | 3 | 3 | 0 | 11 |
| Total | 3 | 23 | 8 | 3 | 1 | 38 |
Most CTUs (28/38, 74%) had some non-UK sites. An assessment of risk informed the monitoring approach of all CTUs, although for one CTU this was only sometimes the case and for four CTUs the assessment of risk was done by the sponsor.
For one CTU, the sponsor had responsibility for all monitoring, so it could not complete the sections on central and on-site monitoring. Thirty-four of the remaining 37 CTUs completed the questionnaire.
Central monitoring
Almost all CTUs use centrally available data to evaluate site performance (34/37, 92%) with two further CTUs (total 36/37, 97%) using a central monitoring process to guide, target or supplement site visits. One sixth (6/36, 17%) reported never using a centralised monitoring process to replace on-site visits, while two reported always doing so (2/36, 6%).
Over half reported running central monitoring processes at least once per month on each trial (20/35, 57%) with only one (3%) running them just annually.
For more than half of CTUs (19/36, 53%), central monitoring is not explicitly programmed; standard reports may be used, but a monitoring report is not automatically produced. Of the remainder, 5 (14%) use the same monitoring code for all of their trials, 4 (11%) choose pre-written modules with some bespoke programming, and 8 (22%) write bespoke code for each trial.
For no CTU is the assessment of triggers indicating that a site should be visited defined solely by software; there is always human input in choosing which sites to visit. Figure 1 shows the factors likely to trigger an on-site monitoring visit.
Figure 1: Graph showing the frequency of factors likely to trigger an on-site monitoring visit.
Figure 1 Legend: CTUs could choose multiple options. (No: number, pt: participant)
On-site monitoring
All 34 CTUs responding to the on-site questions performed on-site monitoring at least sometimes, with most CTUs finding the on-site visit to take one day (27/34, 79%) and needing one person (27/34, 79%). This person was often a trial manager or dedicated monitor (18/34, 53%) but in some cases could be a member of the CTU’s Quality Assurance team, the Chief Investigator or a member of staff from a contract research organisation (CRO) (Additional file 3).
Most CTUs used formal triggers to decide whether or not to conduct an on-site monitoring visit (31/34, 91%). Of these, only one (1/31, 3%) used triggers alone to choose whether to conduct an on-site visit, with the remainder also conducting some on-site visits after fixed time periods, based on the number of patients recruited, because of a trial event (e.g. independent data monitoring committee (IDMC)), or for a mixture of these reasons.
The stated reasons behind the frequency of on-site visits are given in Figure 2.
Figure 2: Graph showing the reasons for the frequency of on-site monitoring visits. CTUs could choose multiple options.
The pre-defined analysis of risks, the study design and the monitoring plan were each listed by more than 20/34 (59%) of CTUs as reasons behind the frequency of on-site visits. We asked how much on-site source data verification (SDV) was done for various classifications of data. Eight CTUs commented that the question was too difficult to answer as a unit-wide policy, given the variability of SDV even within phase III randomised CTIMP trials; the question was attempted by 24 CTUs (Table 3). One CTU (1/24, 4%) never did any SDV.
Table 3: How much Source Data Verification (SDV) was done for differing classifications of data

| Data classification | 100% | 60% | 50% | 30% | 20% | 15% | 10% | 5% | 0% | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| All data | 0 | 0 | 2 | 1 | 3 | 1 | 5 | 1 | 2 | 15 |
| Consent | **19** | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 22 |
| Eligibility criteria | **13** | 0 | 2 | 1 | 0 | 0 | 1 | 1 | 1 | 19 |
| Primary endpoint reports | **13** | 0 | 0 | 1 | 0 | 0 | 3 | 1 | 0 | 18 |
| Secondary endpoint reports | 4 | 1 | 1 | 1 | 1 | 0 | 6 | 1 | 2 | 17 |
| SAE: serious adverse event reports | **14** | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 19 |
| AE: non-serious adverse event reports | 4 | 0 | 0 | 1 | 1 | 0 | 7 | 1 | 3 | 17 |
| Selected priority data | 8 | 0 | 0 | 1 | 0 | 0 | 3 | 1 | 5 | 18 |

Table 3 Legend: Columns represent the %SDV that CTUs gave in response to the question. Bold font shows where there appears to be a consensus, i.e. where more than two thirds of the respondents gave the same answer.
Many CTUs reported doing 100% SDV of consent, eligibility criteria, primary endpoint reports and serious adverse events at any given visit.
Other activities achieved during on-site visits are detailed in Table 4.
Table 4: Other activities achieved during on-site visits

| Activity | Always | Frequently | Occasionally | Never | N/A | Not sure | Total |
|---|---|---|---|---|---|---|---|
| Assess staff's understanding of study procedures | **12** | **14** | 1 | 0 | 0 | 0 | 27 |
| Assess the ability of staff to explain the study to participants | 4 | 4 | 12 | 10 | 0 | 2 | 32 |
| Assess the adequacy and timeliness of additional information provided to participants | 3 | 5 | 15 | 7 | 1 | 1 | 32 |
| Assess informed consent updates/modifications | **20** | **9** | 1 | 1 | 1 | 0 | 32 |
| Assess regulatory documents and communications | **12** | **13** | 6 | 0 | 2 | 0 | 33 |
| Assess the security of study data and documentation | 10 | 11 | 9 | 2 | 0 | 1 | 33 |
| Check the site file is complete | **15** | **16** | 2 | 0 | 0 | 0 | 33 |
| Check adherence to GDPR | 7 | 11 | 7 | 6 | 0 | 1 | 32 |

Table 4 Legend: Bold font shows where more than two thirds of the respondents always or frequently did an activity.
Many CTUs frequently or always assessed site staff's understanding of study procedures (26/27, 96%), informed consent updates/modifications (29/32, 91%) and regulatory documents and communications (25/33, 76%), and checked that the site file was complete (31/33, 94%).
Other results
Table 5 shows which aspect of monitoring the CTUs would most like to change. We gave three options plus an 'other' category.
Table 5: Aspects of monitoring the CTUs would most like to change

| Aspect of monitoring CTU would most like to change | Frequency |
|---|---|
| Optimise central monitoring | 27 |
| Stop or reduce SDV | 5 |
| Stop or reduce the number of on-site visits | 1 |
| Other: have funding for more on-site monitoring visits | 1 |
| Total | 34 |
The majority of CTUs would most like to optimise central monitoring (27/34, 79%).