Data
We used anonymised data on daily referrals received by DVA specialists from general practices in two boroughs, referred to as borough B and borough C in line with our previous work [15,19]. IRIS has been implemented in both boroughs since 2013, but service was disrupted in each for a period of time (Table 1). In borough B the disruption occurred because IRIS funding temporarily stopped, and general practitioners were advised to refer women affected by DVA to a different, longstanding DVA service provider. In borough C the disruption occurred because staff left and were not replaced, although funding remained in place. For each borough, we included data from female patients aged 16 and above registered at any general practice in that borough. As part of the IRIS intervention, women affected by DVA were identified by their general practitioner and offered a referral to the named IRIS advocate-educator; referrals also included self-referrals to the IRIS advocate-educators by women who may have seen IRIS publicity material. The primary outcome was the number of daily referrals received by the DVA service provider from each of the 36 and 37 general practices in boroughs B and C respectively, over 48 months (March 2013-April 2017) in borough B and 42 months (October 2013-April 2017) in borough C (details in Table 1).
IRIS service provision was disrupted for six months in borough B and three months in borough C. The key time points (the start of data collection, the start of IRIS implementation, the start and end of the IRIS service disruption, and the end of data collection) are listed in Table 1 and labelled in Figures 1(a)-(b).
Statistical analysis
Our outcome of interest was the number of daily referrals received by the DVA service provider from general practices, with the rate per 10,000 patients calculated as the number of referrals divided by the number of registered patients, multiplied by 10,000. We modelled this outcome separately for each borough, testing different regression models (negative binomial, mixed-effects negative binomial or mixed-effects Poisson models; details in Appendix A). Practice size was included in the model as an offset term, and the model allowed for differences in referral rates between GP practices via a random intercept for GP practice. Since the daily number of referrals contained a large proportion of zeroes, we also assessed whether a zero-inflated mixed-effects negative binomial model or a zero-inflated mixed-effects Poisson model improved the fit to the data. We compared models using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), choosing the best-fitting model as the one with the smallest values of these quantities (details in Table S1 of Appendix A).
Table 1 shows, separately for each borough, the start and end dates of data collection, the period of IRIS service disruption, and the average referral rate before, during, and after the service interruption. Exploratory analysis showed that the referral rate during the IRIS implementation period was not constant over time, even outside the period of interruption. We therefore modelled the post-implementation trend in the referral rate as a non-linear function of time, which allowed us to derive a model-based estimate of the referral rate over the whole study period. By adding an indicator variable for days falling within the disruption period, we could estimate the change in the referral rate attributable to the interruption.
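A minimal sketch of how such a disruption indicator could be constructed, using pandas with entirely hypothetical disruption dates (the actual dates for each borough are in Table 1):

```python
import pandas as pd

# Daily study calendar; the disruption window below is hypothetical,
# chosen only to illustrate the indicator construction (see Table 1).
days = pd.date_range("2013-03-01", "2017-04-30", freq="D")
df = pd.DataFrame({"date": days})
disruption_start = pd.Timestamp("2015-06-01")
disruption_end = pd.Timestamp("2015-11-30")
df["disrupted"] = df["date"].between(disruption_start, disruption_end).astype(int)
```

The resulting 0/1 column then enters the regression alongside the time transformations, so its coefficient captures the disruption-period difference in the referral rate.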
We used fractional polynomials, with two time transformations as well as an indicator variable for the disruption period, to identify the optimal transformations of time for our models separately for each borough (Model 1 in Appendix A, with details of the transformations in Tables S3 and S4). For graphical display, we smoothed the observed average daily referral rate over all practices using a moving average with uniform weights (101 lagged and forward terms in borough B and 45 in borough C).
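The uniform-weight moving average used for graphical display can be sketched as follows (numpy, on simulated data); the window sizes mirror those stated above, interpreted here as k lagged plus k forward terms around each day.

```python
import numpy as np

def uniform_moving_average(series, k):
    """Moving average with uniform weights over k lagged terms,
    the current value, and k forward terms (window length 2*k + 1)."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    # mode="same" keeps the output aligned with the input; values near
    # the edges average over a partially zero-padded window.
    return np.convolve(series, kernel, mode="same")

# Simulated noisy daily rate, for illustration only
rng = np.random.default_rng(0)
daily_rate = 1.0 + 0.5 * np.sin(np.linspace(0, 6, 400)) + rng.normal(0, 0.3, 400)
smoothed_b = uniform_moving_average(daily_rate, 101)  # borough B window
smoothed_c = uniform_moving_average(daily_rate, 45)   # borough C window
```

The zero-padded edges mean the first and last k smoothed values are pulled towards zero; in practice one would trim or use a shrinking window at the boundaries.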
Sensitivity analyses
To investigate the robustness of our model fit and account for different ways of modelling temporal fluctuations in the referral rate, we conducted a sensitivity analysis for each borough by fitting both a simpler and a more complex model than Model 1. The simpler model (Model 2 in Appendix A) assumes that the referral rate is constant over time, other than during the disruption period, and estimates and tests the simple difference in the average referral rate between the implementation and disruption periods, while controlling for between-practice differences in the baseline referral rate. Model 2 included two predictors: one time transformation, entering as the random intercept for time, and the indicator variable for the disruption period (see Appendix A for details). In contrast, the more complex model (Model 3 in Appendix A) included five predictors within the mixed-effects negative binomial model: four time transformations as well as the indicator variable for the disruption period. Allowing the fractional polynomials a higher number of terms permits a closer fit of the modelled referral rate to the observed referral rate over time (see Appendix A for details).
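For readers unfamiliar with fractional polynomials, the conventional candidate power set and transformations (following the Royston-Altman convention, in which a power of 0 denotes the logarithm) can be sketched as:

```python
import numpy as np
from itertools import combinations_with_replacement

FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)  # conventional candidate powers

def fp_transform(t, p):
    """Fractional-polynomial transform of positive time t (power 0 = log)."""
    t = np.asarray(t, dtype=float)
    return np.log(t) if p == 0 else t ** p

# An FP2 model (two time transformations, as in Model 1) is chosen by
# searching over all pairs of powers; repeated powers conventionally add
# an extra log factor, which is omitted here for brevity.
fp2_candidates = list(combinations_with_replacement(FP_POWERS, 2))  # 36 pairs
```

A model with four transformations (as in Model 3) searches a correspondingly larger space, trading parsimony for a closer fit to the observed trend.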
For each of Models 1-3, and for both boroughs B and C, we calculated incidence rate ratios (IRRs), their 95% confidence intervals and p-values, quantifying the magnitude and statistical significance of the impact of the IRIS service disruption (details in Tables S3 and S4 in Appendix A). To assess the robustness of the results, we also calculated bootstrapped standard errors with 500 replications. All analyses were done in Stata version 15.1.
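The IRR computation, and a generic nonparametric bootstrap of a standard error, can be sketched as follows; the helper names and the example coefficient are hypothetical (the paper's estimates come from Stata's built-in routines).

```python
import numpy as np

def irr_with_ci(beta, se, z=1.96):
    """Exponentiate a log-scale coefficient into an IRR with 95% CI."""
    return np.exp(beta), np.exp(beta - z * se), np.exp(beta + z * se)

def bootstrap_se(statistic, data, n_boot=500, seed=1):
    """Nonparametric bootstrap standard error with n_boot replications."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [statistic(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return float(np.std(reps, ddof=1))

# Example: a hypothetical disruption coefficient of -0.7 on the log scale
irr, lo, hi = irr_with_ci(-0.7, 0.15)  # IRR of about 0.50, i.e. a halved rate
```

In the actual analysis the bootstrap resamples the fitted regression, not a raw vector as above; the sketch only illustrates the 500-replication resampling logic.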