Study design
In this hybrid effectiveness-implementation study [14], a parallel-group, assessor-blind, multi-centre cRCT will be conducted, in which 24 clinical units (as the unit of randomization) at eight sites in four European countries are randomly allocated using an unbalanced 2:1 ratio to one of two conditions: (a) the experimental condition, in which participants receive the DMMH and other implementation strategies in addition to treatment as usual (TAU), or (b) the control condition, in which participants are provided with TAU. Outcome data in service users and clinicians will be collected by assessors masked to the random allocation of clinical units at four time points: baseline (t0), 2-month (t1), 6-month (t2) and 12-month post-baseline (t3). The cRCT will follow Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines [18] and relevant extensions [19, 20]. Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) [21] and Template for Intervention Description and Replication (TIDieR) [22] checklists are available as additional files 6 and 7. The primary outcome will be patient-reported service engagement assessed with the SAQ at 2-month post-baseline, a measure of service users’ experience of, and engagement with, their treatment and service [23]. The cRCT will be combined with a process and economic evaluation to provide in-depth insights into in-vivo context-mechanism-outcome configurations as well as the economic costs of implementing the DMMH in routine care.
Study population and clusters
Service users and clinicians comprise the study population in the cRCT, process and economic evaluation in the four European countries. They will be recruited from three clinical units (clusters) in each of the eight clinical sites and, thus, a total of 24 units (clusters). Due to differences in mental health care systems across the four European countries, clinical units (clusters) will vary in structure, service type, and size, which will enhance the generalizability of findings and allow implementation to be examined under various conditions. Clinical units (as cluster and unit of randomization) will be community mental health teams (Scotland), tracks (Germany, at the Central Institute of Mental Health Mannheim (CIMH)), and inpatient, outpatient and community-based services (Germany, at Psychiatric Centre Nordbaden; Belgium; and the Slovak Republic). A clinical unit will typically comprise a multidisciplinary team (i.e., psychiatrists, psychologists, nurses, social workers, occupational therapists). In each of the eight clinical sites, 54 service users will be recruited from three clinical units (i.e., 18 service users per unit), and thus a total of 432 service users (n = 288 in the experimental condition, n = 144 in the control condition). In addition, an estimated 100 clinicians of these service users will be recruited from the 24 clusters across all sites over a period of 6 months (see Fig. 1). Recruitment and consent procedures as well as eligibility criteria are described in more detail in additional file 1.
[Figure 1]
Experimental and control condition
Control condition: treatment as usual (TAU)
Participants in the control condition will be provided with TAU (i.e., continue to receive all the treatment they received prior to the start of the study). This will include good standard care delivered according to local and national service guidelines and protocols by their general practitioner, psychiatrist and other mental health professionals. Service contacts will be assessed for the duration of the trial using the Client Service Receipt Inventory (CSRI) [24] to monitor variation in delivery of, and engagement with, mental health services and digital technology.
Experimental condition: DMMH + additional implementation strategies + TAU
The DMMH and additional implementation strategies supporting its use will be provided in clinical units allocated to the experimental condition in addition to TAU. The DMMH reflects the core strategy for implementing ESM-based monitoring, reporting, and feedback in routine care and consists of (1) the MoMent App and (2) the MoMent Management Console (see above), which are hosted on the Therapy Designer platform by movisens GmbH (Karlsruhe, Germany). In addition, technological implementation strategies, implementation strategies for clinicians and service users, and organisational implementation strategies will be delivered to facilitate the use of ESM-based monitoring, reporting and feedback in routine care via the DMMH. Both the DMMH and the additional implementation strategies are described in detail in additional files 2 and 6.
The DMMH and additional implementation strategies will be delivered over a 6-month period. This begins with an initial 2-month period of focused delivery, in which service users will be asked to use ESM-based monitoring via the MoMent App, and both service users and clinicians will be provided with detailed numeric and visual reporting and feedback via the MoMent Dashboard (in addition to basic visual feedback via the MoMent App) for at least four weeks. In the remainder of this 6-month period, service users and clinicians will continue to have access to the DMMH and additional implementation strategies. After the end of this period, there will be a 6-month maintenance period, in which service users and clinicians still have access to the DMMH, but additional implementation strategies for service users and clinicians requiring active support by the research team will be discontinued.
Outcome measures
Following the RE-AIM framework [16], the evaluation in this hybrid effectiveness-implementation trial will focus on a range of outcomes relating to Reach, Effectiveness, Adoption, Implementation, and Maintenance of the DMMH, covering service user experience, implementation outcomes, and mental health outcomes. The primary outcome has been selected for investigating the Effectiveness of implementing the DMMH. The primary outcome will be patient-reported service engagement assessed with the SAQ total score [23] at 2-month post-baseline, a measure of service users’ experience of, and engagement with, their treatment and service. It captures a proximal effect of DMMH implementation on service users’ interaction with mental health services and, hence, reflects a primary indicator of implementation success. All primary and secondary outcomes will be assessed at baseline (t0), 2-month (t1), 6-month (t2) and 12-month post-baseline (t3). Secondary outcome data collected using ESM will follow the protocol from previous experience sampling studies [1]. Please see Fig. 2 (based on SPIRIT statement) and additional file 3 for details of all measures used to examine reach, effectiveness, adoption, implementation, and maintenance. The assessment of safety based on the EU Medical Device Regulation (MDR 2017/745) is described in additional file 4.
[Figure 2]
Process evaluation
A process evaluation will be performed using semi-structured interviews with service users, clinicians and managers/system administrators during or at the end of the initial 6-month period. The interviews will take a realist evaluation approach [17], combined with the Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework [25], to establish what works, for whom, in what circumstances, in what respects, to what extent, and why. This implies that configurations of contextual factors, mechanisms of implementation, and outcomes of the implementation and intervention are explored across all levels of agents within the intervention and its implementation (i.e., individual participants, clinicians, managers and system administrators, and socio-economic and contextual factors that may impact their intentionality, behaviour and decision making at different stages of the intervention). This will also allow us to examine how service users and clinicians appropriate the DMMH to serve their particular needs. Initial programme theories will be developed based on initial semi-structured interviews. The overarching programme theory and accompanying context-mechanism-outcome configurations will be tested, through iterative data collection, among intervention users (individual interviews with participants who have completed the DMMH) as well as those who deliver the intervention (i.e., clinicians) and those providing the context of intervention delivery (i.e., managers/system administrators). We will also explore unexpected consequences (positive or negative) on service users and health care professionals, such as impacts on clinical teams and organizations. Taken together, this will allow us to identify key aspects of successful and effective implementation of the DMMH in routine clinical pathways and treatment settings.
Randomization and blinding
A validated and concealed randomization procedure will be applied independently of the research team, allocating clinical units to the experimental and control condition using an unbalanced 2:1 ratio, stratified by the eight clinical sites, while preventing contamination (cross-exposure to the experimental condition). The unbalanced 2:1 allocation ratio allows for a more detailed investigation of implementation aspects and protects against attrition. The concealed randomization procedure will include an option to allocate additional clinical units at each of the eight clinical sites if recruitment rates are lower than expected for some clinical units.
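For illustration, the stratified 2:1 allocation logic can be sketched as follows. This is a minimal sketch only: the trial itself uses a validated, concealed procedure run independently of the research team, and the function name, site/unit labels, and seed below are hypothetical.

```python
import random

def allocate_units(site_units, seed):
    """Stratified 2:1 allocation: within each site (stratum), two of every
    three clinical units go to the experimental condition, one to control."""
    rng = random.Random(seed)
    allocation = {}
    for site, units in site_units.items():
        shuffled = list(units)
        rng.shuffle(shuffled)
        n_experimental = (2 * len(shuffled)) // 3  # 2:1 ratio per stratum
        for unit in shuffled[:n_experimental]:
            allocation[unit] = "experimental"
        for unit in shuffled[n_experimental:]:
            allocation[unit] = "control"
    return allocation

# Eight sites with three clinical units each (labels are placeholders)
sites = {f"site_{s}": [f"site_{s}_unit_{u}" for u in range(3)] for s in range(8)}
allocation = allocate_units(sites, seed=2022)
```

Stratifying by site guarantees the 2:1 ratio within every site, so the overall split of 16 experimental to 8 control units holds regardless of the shuffle.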
After random allocation of clinical units, clinicians in the experimental condition will be informed about allocation status. This will be done through an independent researcher and not the outcome assessors, who will be blind to allocation status for assessments at baseline, 2-month, 6-month and 12-month post-baseline. Further, an independent contact person, who will not be involved in any assessments, will be available to service users and clinicians for questions regarding the recruitment and assessment procedure. The trial cannot be fully “blind” because clinicians and service users cannot be masked to the allocation of clinical units to the experimental or control condition. However, outcome assessors will be blinded to allocation status when assessing eligibility, baseline scores and outcomes at baseline, 2-month, 6-month and 12-month post-baseline. Any data specific to the experimental condition (e.g., on clinical feasibility) will be stored in a separate database. Any breaks in masking will be documented in the trial master file, and another blinded assessor will be allocated to repeat the assessment and complete the next set of assessments where possible. To maintain the overall quality and legitimacy of the trial, code breaks will only be performed in exceptional circumstances when knowledge of the treatment allocation is absolutely essential for the further management of the service user.
Given outcome data will be collected using ESM at baseline, 2-month, 6-month and 12-month post-baseline, and ESM forms a key part of the DMMH (as the experimental manipulation of the cRCT), we will control for any potential confounding of ESM outcome data collection by randomly allocating service users to either participation in collecting outcome data (on momentary quality of life, social functioning, and mental ill-health) using ESM or no participation in ESM outcome data collection. This secondary randomization will use a 1:1 ratio in a validated and concealed procedure that will be applied independently of the research team. Notably, this randomization does not address any of our study aims on reach, effectiveness, adoption, implementation, and maintenance but will control for (and investigate in sensitivity analyses) the potential effect of ESM data collection (by including a covariate on random allocation to participation in ESM outcome data collection in our statistical models for testing hypotheses on primary and secondary outcomes). The period of additional ESM data collection is independent of the use of the DMMH and other implementation strategies (as the experimental condition of the cRCT). Please see additional file 3 for further detail.
Sample size calculation
We will test the primary hypothesis of the effect of the experimental vs. control condition (i.e., DMMH + additional implementation strategies + TAU vs. TAU) on service engagement as primary outcome (measured with the patient-reported SAQ total score as the dependent variable). We will use a fixed effects regression model controlling for unit effects, with a dummy variable for the condition (DMMH + additional implementation strategies + TAU vs. TAU), the SAQ engagement total score at baseline, and a dummy variable coding for the random allocation to ESM data collection at baseline. Ignoring, in a first step, the correlation between values taken from the same cluster, the total number of service users required to detect an effect of size d = 0.4 (i.e., an effect size slightly lower than reported for service engagement in a previous study investigating the effects of an app-based mobile mental health solution [26]), with power 1 − β = 0.80 at an alpha level of 0.05 and a sample size ratio of 2 (experimental condition) : 1 (control condition), is computed to be N0 = 201 for the null hypothesis that the difference between the experimental and control condition in terms of the mean SAQ score at 2-month (t1) post-baseline is zero, versus the alternative hypothesis that there is a difference (non-directional alternative). The statistical test will employ a fixed effects linear regression model with a variable representing treatment vs. control as the focal coefficient. The coefficient test will be performed at significance level α = 0.05, with the SAQ score at baseline as control variable and controlling for unit clustering. If all 24 units to be randomized have an average size of n0 = 201/24 ≈ 8.38, and assuming an intraclass correlation coefficient of ICC = 0.05 [27], then N0 has to be increased to account for the effect of clustering by a design effect of DEFF = 1 + 7.38 × ICC = 1.37.
After appropriate upward rounding, this yields a total sample size of N = 288 service users, of whom 16 × 12 = 192 will be randomly assigned to the experimental condition (DMMH + additional implementation strategies + TAU) and 8 × 12 = 96 to the control condition (TAU). Further, based on previous research by IMMERSE partners and a meta-analysis investigating attrition in smartphone-based interventions [28], we expect an attrition of 35.5% from inclusion to 2-month post-baseline. For each of the eight clinical sites, this implies that a minimum of 54 service users will be recruited from three clusters (i.e., n = 18 service users per cluster), and thus a total sample of 432 service users across all sites at baseline (i.e., n = 16 × 18 = 288 in the experimental condition, n = 8 × 18 = 144 in the control condition), which allows for a 35.5% attrition rate while retaining the power to detect a medium effect size of d = 0.4 (with a power of 0.80, ICC = 0.05, and α = 0.05). We will allow for an increase in the recruitment target at all sites to up to n = 108 per site (i.e., 36 service users per cluster) in order to compensate for delays in recruitment that may lead to overall under-recruitment across sites.
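The clustering adjustment described above amounts to straightforward arithmetic, sketched below in Python. The unadjusted N0 = 201 is taken from the protocol’s power calculation rather than recomputed here, and the variable names are illustrative only.

```python
import math

n0_total = 201   # unadjusted N for d = 0.4, power 0.80, alpha 0.05, 2:1 ratio
clusters = 24    # 24 clinical units across eight sites
icc = 0.05       # assumed intraclass correlation coefficient [27]

avg_cluster_size = n0_total / clusters            # ~ 8.38 service users per unit
deff = 1 + (avg_cluster_size - 1) * icc           # design effect ~ 1.37
n_adjusted = n0_total * deff                      # ~ 275 before rounding

# Round up to equal cluster sizes while preserving the 2:1 allocation
per_cluster = math.ceil(n_adjusted / clusters)    # 12 service users per cluster
n_total = per_cluster * clusters                  # 288 in total
n_experimental = 16 * per_cluster                 # 192 (16 experimental clusters)
n_control = 8 * per_cluster                       # 96 (8 control clusters)
```

Rounding up to a multiple of 24 keeps every cluster at the same target size, which is what yields N = 288 rather than the raw design-effect-inflated figure.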
Statistical analysis
Statistical analysis will be performed according to the intention-to-treat principle based on a pre-registration and statistical analysis plan (SAP) published on the OSF [15]. Please see the published SAP [15] for further details on the statistical analysis. The primary hypothesis of the effect of condition (i.e., DMMH + additional implementation strategies + TAU vs. TAU) on service engagement as primary outcome (measured with the patient-rated SAQ total score as the dependent variable) will be tested using a fixed effects regression model controlling for unit effects, with a dummy variable for the condition (DMMH + additional implementation strategies + TAU vs. TAU), the SAQ engagement total score at baseline, and a dummy variable coding for the random allocation to ESM outcome data collection. The focal coefficient of the model is the coefficient for the dummy variable for condition, tested via t-test for the null hypothesis of no difference between the two conditions against the two-sided alternative hypothesis that there is a difference at 2-month post-baseline. The experimental condition will be interpreted as having a statistically significant effect in the hypothesised direction if the estimated coefficient indicates a higher SAQ score (i.e., more service engagement) for this condition and the associated t-test is significant at α < .05. The primary endpoint analysis will be based on observed data using Full Information Maximum Likelihood estimation for the linear regression model. As minimal missing data are expected in baseline and structural variables in the statistical model for the primary endpoint, we will evaluate in sensitivity analyses whether more robust approaches are needed and would then report results based on multiple imputation (please see the SAP [15] for further details).
We will test the secondary hypotheses of the effect of condition (DMMH + additional implementation strategies + TAU vs. TAU) on personal recovery, self-management, shared decision making, personal therapy goal attainment, social functioning, social participation, quality of life, and symptom improvement/severity as secondary outcomes using a mixed model (restricted maximum likelihood estimation). First, we will use α < 0.05 to indicate statistical significance, i.e., evidence against the null hypothesis of no average difference between the two conditions across the three follow-up time points (2-month, 6-month and 12-month post-baseline), tested against the two-sided alternative hypothesis that there is a difference; second, for each secondary outcome, we will use linear contrasts at each of the three follow-up time points to investigate whether there is a difference between the conditions at that time point (adjusted nominal level α/3 each). The covariates included in the model in addition to the condition indicator will be the respective secondary outcome score at baseline, time (as a three-level factor), the baseline × time interaction, the condition × time interaction, and a dummy variable coding for the random allocation to ESM data collection at baseline. In addition to p-values, 95% confidence intervals for the time-specific treatment effects will be calculated. Clustering of measures within clinical units (and of repeated measures within participants) will be taken into account by allowing residuals within clinical units (and participants) to be correlated with an unstructured variance-covariance matrix. As ESM data have a multilevel structure, multiple ESM observations (level 1) will be treated as nested within time points (i.e., 2-month, 6-month, and 12-month post-baseline) (level 2), and time points will be treated as nested within participants (level 3).
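The per-contrast adjustment for the three time-specific contrasts is a simple Bonferroni-type division of the family-wise level, as the following sketch shows (the p-values are purely hypothetical):

```python
alpha_family = 0.05
n_contrasts = 3                                    # 2-, 6- and 12-month contrasts
alpha_per_contrast = alpha_family / n_contrasts    # adjusted nominal level ~ 0.0167

def flag_significant(p_values, alpha=alpha_per_contrast):
    """Return, per contrast, whether its p-value falls below the adjusted level."""
    return [p < alpha for p in p_values]

# Hypothetical p-values for the three time-specific contrasts of one outcome
flags = flag_significant([0.010, 0.020, 0.400])
```

Under this adjustment, a contrast with p = 0.02 would not be flagged even though it falls below the unadjusted 0.05 level.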
Additional analyses will investigate between-site effects (via condition-site interactions) and associations with unit-level characteristics. Another set of additional analyses will investigate a potential ESM method effect on primary and secondary outcomes due to ESM outcome data collection alone (based on the within-trial randomisation of participation in ESM outcome data collection).
Economic evaluation
The economic evaluation serves the dual objective of providing information on the costs and cost structure of delivering the DMMH and other implementation strategies under different health care systems and of establishing the intervention’s value for money. As such, it will include a micro-costing study as well as cost-effectiveness and cost-utility analyses conducted based on data collected as part of the cRCT. Specifically, we will adopt an activity-based approach to costing and estimate the economic costs of all key intervention activities, including administration of the DMMH by clinicians and delivery of implementation strategies during the implementation phase. We will use the EQ-5D-5L [29] to assess quality-adjusted life years (QALYs) and the CSRI [24] to assess use of health services, social care, informal care, and production losses as a basis for the economic evaluation. The cost-effectiveness analysis will combine cost data with patient-reported service engagement using the SAQ total score at 2-month post-baseline as the primary outcome of the cRCT. Incremental cost-effectiveness ratios (ICERs) will be calculated as a measure of the incremental cost incurred by the DMMH and implementation strategies relative to their incremental benefits (based on the SAQ total score at 2-month post-baseline) in the experimental vs. control condition. The cost-utility analysis will combine cost data with QALYs derived from the EQ-5D-5L and relate the incremental cost incurred by the DMMH and implementation strategies to QALYs gained at 2-month, 6-month and 12-month post-baseline in the experimental vs. control condition. In addition, we will investigate the potential cost savings that are likely to be generated by the expected reduction in the use of care services monitored by the CSRI [24].
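The ICER computation itself is simple: the difference in mean costs divided by the difference in mean effects between the two conditions. A minimal Python sketch follows; the cost and effect figures are purely illustrative and do not anticipate trial results.

```python
def icer(cost_exp, cost_ctrl, effect_exp, effect_ctrl):
    """Incremental cost-effectiveness ratio: incremental cost per
    incremental unit of effect (e.g., SAQ points or QALYs gained)."""
    delta_cost = cost_exp - cost_ctrl
    delta_effect = effect_exp - effect_ctrl
    if delta_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return delta_cost / delta_effect

# Illustrative figures only: mean per-patient costs (EUR) and QALYs per condition
ratio = icer(cost_exp=1500.0, cost_ctrl=1100.0, effect_exp=0.80, effect_ctrl=0.75)
```

Here an extra EUR 400 per patient buying 0.05 additional QALYs would imply roughly EUR 8,000 per QALY gained; interpretation additionally depends on the quadrant of the cost-effectiveness plane when increments can be negative.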
Regulatory requirements and research governance
Regulatory requirements of the EU MDR 2017/745, relevant DIN EN ISO norms and IEC standards need to be addressed for successful implementation of ESM-based monitoring, reporting and feedback via the DMMH in mental health care practice. The current study is carried out as an ‘Other Clinical Investigation’ according to § 82 of the EU MDR 2017/745 and its national implementation in Belgium, Germany, and the Slovak Republic, relevant national legislation in Scotland and the UK, DIN EN ISO 14155 and associated DIN EN ISO norms and IEC standards. CIMH is the sponsor of this ‘Other Clinical Investigation’, which forms part of Work Package 7 (WP7) “Implementation Strategies, Processes, Outcomes and Costs” of the EU IMMERSE consortium, with the sponsor of this EU consortium being KU Leuven. The ‘Other Clinical Investigation’ has received ethics approval by IECs and, where required by national legislation, formal notification of (BfArM, Germany) or approval by (FAGG, Belgium; ŠUKL, Slovak Republic) the relevant regulatory authorities was obtained (Belgium, EUDAMED No. CIV-22-08-040547-SM01; Germany, DMIDS No. DE-22-00013961; Scotland, Ref. No. 22-WS-0125; Slovak Republic, EUDAMED No. CIV-SK-22-08-040547). Amendments to the study protocol will be submitted to the relevant IEC and regulatory authorities, then communicated to all relevant parties (DMEC, TSC, the sponsor of the clinical investigation (CIMH), funder, and collaborating centres), and will be updated in the clinical trial registry. The trial has been prospectively registered with the ISRCTN registry (ISRCTN15109760, registration date: 03/08/2022). Deviations from the study protocol are monitored across all sites and managed centrally by the sponsor of the clinical investigation (CIMH). The handling of data will be in compliance with the European General Data Protection Regulation (GDPR) and relevant DIN EN ISO norms and IEC standards.
The Therapy Designer and movisensXS (for ESM outcome data collection) platforms are hosted by movisens GmbH. The research database, analysis and compute area are hosted by the data management team at Friedrich-Alexander-University Erlangen-Nuremberg. All outcome data collected will be checked for quality on an ongoing basis, archived and integrated for analysis by the data management team. Access to the locked trial data set will be provided by the data management team to the trial statistician and investigators only after completion of data collection, checking/cleaning as well as publication of the SAP on the OSF. Further details on research governance are reported in additional file 5.