Methods are described according to SPIRIT guidelines.(33) Completed SPIRIT checklists for the two trials are provided as Additional Files 1 and 2.
Study setting
The two trials will be conducted in six Government-run secondary schools in New Delhi, India. The schools were purposively selected in consultation with the Department of Education, Government of New Delhi, to focus on relatively under-served, low-income communities. Of the six schools, three are boys’ schools, two are girls’ schools and one is co-educational. As of August 2018, there were 172 divisions in grades 9-12 with a total student population of 8448 (ranging from 1050 to 1632 per school; mean = 1408, SD = 225), including 4694 (56%) boys and 3754 (44%) girls.
Participants
Figure 1 summarizes the participant timelines and flows for both trials in a combined CONSORT flowchart.(34, 35)
Embedded recruitment trial. Seventy classes (corresponding to 3448 registered students) will participate in the embedded recruitment trial. These classes will be selected at random using computer-generated random numbers, stratified by school and grade, drawing from a pool of 118 eligible classes (excluding 54 classes that had received sensitization during earlier pilot work in these schools). A small block size of 2 will be used to ensure balance between the arms, as the number of classes within each grade at individual schools is relatively small. In the rare instance that a selected class has been dissolved or merged with another class, the next class in the random list will be included to replace the unavailable class. Each class will switch over from the control to the intervention arm at 4-week intervals (excluding holidays and exam breaks), over 2 steps. During the first 4-week period, only school-level sensitization activities will be implemented. In the next 4-week period (first step), 35 randomly selected classes will receive the classroom sensitization intervention. In the final 4-week period (second step), the remaining 35 classes will receive the classroom sensitization intervention. Schedules for sensitization in the allocated classes will be shared with the schools in advance to ensure access.
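The block-of-2 step allocation described above can be sketched as follows. This is an illustrative sketch only: the class identifiers and seed are hypothetical, and the actual list will be produced from computer-generated random numbers within school-by-grade strata.

```python
import random

def assign_steps(selected_classes, seed=2018):
    """Assign each selected class to step 1 or step 2 of the stepped wedge.

    Allocations are made in blocks of size 2, so every consecutive pair of
    classes contributes exactly one class to each step, keeping the two
    sequences balanced. IDs and seed are illustrative placeholders.
    """
    rng = random.Random(seed)
    steps = {}
    for i in range(0, len(selected_classes), 2):
        block = list(selected_classes[i:i + 2])
        rng.shuffle(block)  # within each block of 2, order of steps is random
        for cls, step in zip(block, (1, 2)):
            steps[cls] = step
    return steps
```

With 70 selected classes, the block size of 2 guarantees exactly 35 classes cross over at each step, which is why it is preferred when the number of classes per school-grade stratum is small.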
Host trial. The host treatment trial will recruit participants originating from the 70 classes sampled in the embedded recruitment trial, as well as participants drawn from other classes as needed. The precise schedule of recruitment activities in the remaining 102 classes will be calibrated according to referral patterns and caseload capacity for intervention providers in the various schools. Referrals to the host trial can be generated through self-referrals (either by presenting oneself to the counsellor or by completing a self-referral form and posting it in a drop-box), teacher referrals or from others like friends, siblings and parents. All referred adolescents will be followed up by a researcher and screened for eligibility to enrol in the host trial (Table 1).
Consenting participants (see section on consent procedures below) will be enrolled by researchers and randomized to the intervention or the control arm after the baseline outcome assessments are completed. The researchers will escort the participants randomized to the intervention arm to meet the counsellor. The randomization list will be developed by an independent statistician (HW), applying stratification by school (and gender for the co-educational school) using randomly sized blocks of four or six. The randomization code will be concealed using sequentially numbered opaque sealed envelopes to maximize allocation concealment(36). Errors in randomization will be recorded and reported.
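As an illustration of the block randomization described above, the sketch below generates an allocation list for a single stratum using randomly sized blocks of four or six, each balanced between arms. The labels and seed are placeholders; the real list will be prepared by the independent statistician.

```python
import random

def make_randomization_list(n, seed=42):
    """Sketch of a randomization list for one stratum (school, or school x
    gender in the co-educational school).

    Allocations come in randomly sized blocks of 4 or 6, each containing
    equal numbers of intervention ('I') and control ('C') assignments, so
    the arms remain closely balanced throughout recruitment. Labels and
    seed are illustrative, not the trial's actual scheme.
    """
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n:
        block_size = rng.choice([4, 6])
        block = ["I", "C"] * (block_size // 2)  # balanced block
        rng.shuffle(block)                      # random order within block
        allocations.extend(block)
    return allocations[:n]
```

Random block sizes make the next allocation harder to predict than fixed blocks, which complements the sealed-envelope concealment.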
Sample size and power calculations
Embedded recruitment trial. We based our power calculation on a within-period comparison for a stepped wedge design (58) using Stata package “clustersampsi” (37). For a comparison of referral rates (proportions) between the intervention and control arms, we considered a feasible sample of 70 classes (average class size of 50 students), and an estimated intra-cluster correlation coefficient of 0.124 (calculated from referral data obtained from 11 classes included in pilot work). Based on our pilot data, the trial will have 92% power to detect a difference between referral rates of 5% in the control arm and 15% in the intervention arm, at a significance level of 0.05.
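A simplified back-of-envelope check of this calculation inflates the variance of a two-proportion comparison by the usual design effect 1 + (m − 1) × ICC. This is not the clustersampsi within-period method, so the resulting power differs somewhat from the 92% reported above; the sketch is offered only to show the roles of the cluster size and ICC.

```python
from math import sqrt
from statistics import NormalDist

def cluster_power(p_control, p_interv, clusters_per_arm, cluster_size,
                  icc, alpha=0.05):
    """Approximate power for comparing two proportions between arms of a
    cluster-randomized comparison.

    The effective sample size per arm is the raw sample size deflated by
    the design effect 1 + (m - 1) * ICC. This is a simplified check, not
    the within-period method used in the protocol.
    """
    deff = 1 + (cluster_size - 1) * icc
    n_eff = clusters_per_arm * cluster_size / deff  # effective n per arm
    se = sqrt((p_control * (1 - p_control)
               + p_interv * (1 - p_interv)) / n_eff)
    z = abs(p_interv - p_control) / se - NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(z)
```

With 35 classes of 50 students per arm and an ICC of 0.124, the design effect is roughly 7.1, reducing 1750 students per arm to an effective sample of about 247.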
Host trial. Sample size estimations were produced for two co-primary outcomes: mental health symptoms (SDQ Total Difficulties score) and idiographic problems (YTP score). We based the estimations on two data sources. First, we obtained uncontrolled effect sizes (ES=difference in means/SD) for both co-primary outcomes from a group of 52 adolescents who received the problem-solving intervention during pilot work in the six secondary schools in New Delhi. Among these students, all of whom met the same baseline eligibility criteria as intended for the current trial, the mean SDQ Total Difficulties scores changed from 23.4 (SD 3.4) at baseline to 16.1 (SD 5.9) at the end of the intervention (ES=1.4). The mean YTP scores for the same group changed from 5.6 (SD 2.0) at baseline to 2.9 (SD 2.6) at the end of the intervention (ES=0.9). Second, we obtained a paired effect size on the SDQ Total Difficulties score from another cohort of 47 adolescents participating in a later phase of piloting, including 29 students who received the problem-solving intervention and 18 waitlisted controls (ES=1.03). YTP data were unavailable for this second cohort. Because effect sizes in trials are often smaller than in pilot studies, we conservatively hypothesized that our intervention will be associated with an ES=0.5 on both co-primary outcomes, to be detected with 90% power. We assumed a 1:1 allocation ratio of individual participants within each of the six schools, loss to follow up of 15% over 6 weeks (based on piloting), and a Bonferroni correction to adjust for multiple primary outcomes. Based on these assumptions, we will need to recruit N=240 participants in total. This sample size provides 80% power to detect an ES of 0.44.
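The headline figure can be approximated with the standard two-sample normal-approximation formula, using a Bonferroni-adjusted alpha of 0.025 for the two co-primary outcomes and inflating for 15% attrition. The sketch below reproduces a total close to the protocol's N=240; the exact figure reflects additional rounding.

```python
from math import ceil
from statistics import NormalDist

def total_sample_size(es=0.5, power=0.90, alpha=0.05,
                      n_outcomes=2, attrition=0.15):
    """Approximate total recruitment target for a two-arm trial.

    Uses the normal-approximation formula n/arm = 2 * (z_a + z_b)^2 / ES^2,
    with alpha Bonferroni-corrected for the number of co-primary outcomes,
    then inflates for expected loss to follow-up. A simplified check, not
    the protocol's exact calculation.
    """
    nd = NormalDist()
    a = alpha / n_outcomes                        # Bonferroni correction
    z = nd.inv_cdf(1 - a / 2) + nd.inv_cdf(power)
    n_per_arm = ceil(2 * (z / es) ** 2)           # analysed per arm
    return ceil(2 * n_per_arm / (1 - attrition))  # recruit extra for dropout
```

Under these assumptions the formula gives about 100 analysed participants per arm, inflating to roughly 236 recruited in total, consistent with the rounded target of 240.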
Interventions
Embedded recruitment trial
Intervention arm. The intervention arm will comprise a single 30-minute classroom session intended to improve understanding about signs and symptoms of mental health problems, raise awareness about the school counselling service, and generate demand for these services. The session is delivered to each individual classroom separately (approximately 50 students per classroom) by the same counsellor responsible for the problem-solving intervention in the host trial at that school. The counsellor will be assisted by a researcher who has additional responsibilities for processing referrals and conducting eligibility assessments. The classroom session will start with a short animated video (link to video) which provides age-appropriate information about types, causes, impacts and ways of coping with common mental health problems. The video is followed by a guided group discussion, structured around a standardized script that builds on the topics covered in the video. In case of technical difficulties that may prevent the video from being shown, a flipchart based on still illustrations from the video will be used.
At the end of the session, students will be handed a self-referral form which includes normalizing information and question-based prompts to assist with self-identification of mental health problems. Interested students can approach the facilitators immediately after the session with self-referral forms, or else deposit the forms discreetly in a secure drop-box located outside or near the counsellor’s usual room. The counsellors and researchers delivering the classroom sensitization sessions will be provided with a structured manual and complete a one-day office-based training. Training will be conducted by master’s level psychologists (prospective supervisors) and comprise lectures, demonstrations and role-plays. The training will be followed by a period of supervised field practice, when the counsellors and researchers are required to complete at least two classroom sessions independently, under observation from supervisors. Quality of intervention delivery will be assessed on a checklist of observable procedures which have been distilled from the intervention manual. Each procedure will be rated on a three-point Likert scale (not done, partially done, fully done). Refresher training will be conducted once before the trial begins.
Control arm. The control arm will comprise whole-school sensitization activities. The supervisor will meet the Principal of each school individually to inform them about planned counselling and research activities and to seek their cooperation for the same. This meeting will also provide structured information about common mental health problems faced by adolescents, and address any concerns related to planned procedures and resource demands. Teachers will be invited to participate in separate group sensitization meetings (up to 30 teachers at a time). A standardized script will mirror the topics covered in the meetings with the school Principals, but with additional emphasis placed on referral procedures (e.g. use of referral forms) for the host trial. Up to three meetings will be held in each school to maximize coverage of teaching staff. The meeting will be conducted by the same counsellor and researcher pairing responsible for delivering the classroom intervention. Posters will be put up in each school at locations with assured visibility such as noticeboards or common corridors, in addition to signage on the drop-box, which will remind students (and teachers) of the counselling service.
Host trial
Intervention arm. A problem-solving intervention will be delivered to individual students across 4-5 face-to-face sessions spread over three weeks. Each session will last for up to 30 minutes (aligned with the usual duration of school periods) and will be delivered in the local language (Hindi). The sessions will be conducted on school premises, in private rooms or, where private rooms are not available, behind screens and curtains. Session 1 will focus on fostering engagement, understanding the participant’s difficulties, and introducing the structure and process of the treatment. Over the next three sessions, the participant will be helped to learn and apply a structured problem-solving strategy involving three steps, each with its own specific goals (following the acronym “POD”): (1) to identify and prioritize distressing/impairing problems (“Problem Identification”); (2) to generate and select coping options for modifying the identified problem directly (problem-focused strategies) and/or to modify the associated stress response (emotion-focused strategies) (“Option Generation”); and (3) to implement and evaluate the outcome of this strategy (“Do it”). The intervention may be concluded after four sessions or else extended to a fifth session, depending on the adolescent’s preferences and logistical barriers to treatment completion such as exam breaks and holidays. The concluding session will focus on consolidating learning and generalizing problem-solving skills across different contexts. With permission, all sessions will be audio-recorded for office-based quality and fidelity assessments. Adolescents will be encouraged to practice problem-solving skills between the sessions, aided by a set of three “POD booklets” which explain problem-solving using illustrated vignettes and describe corresponding between-session practice exercises. 
Each booklet covers one of the steps of problem-solving and they will be distributed sequentially over the first three treatment sessions. At the end of treatment, the adolescents will be handed a full-color POD poster that summarizes the three steps of problem-solving.
Each school will have one or two counsellors, depending on demand. The counsellors will be Hindi-speaking college graduates aged 18 years or above, with no formal training or qualifications related to psychotherapy or mental health. They will be recruited through online job portals commonly used in the NGO/public sector in India. Selection will be based on reasoning capacity (assessed by written test) and interpersonal skills (assessed by structured role-plays and interview). Selected candidates will receive a structured manual and complete one week of classroom-based training involving a combination of lectures, demonstrations and role-plays. This will be followed by a 6-week period of field training in which counsellors will carry out casework (with at least four cases) under the supervision of psychologists. Trainees’ performance will be evaluated using structured role-plays at the end of classroom-based training, as well as supervisors’ ratings of audio-recorded treatment sessions.
Counsellors will participate in weekly peer group supervision meetings, based on an approach tested in the PREMIUM trials, where it was found to be an acceptable, effective and scalable supervision model for lay counsellors in low-resource settings(38). Each 2-hour meeting will be facilitated by one of the counsellors in rotation and overseen by a supervisor. Counsellors will review and discuss one or two audio-recorded sessions in each meeting. Audio-recordings will be rated by all group members using a therapy quality rating scale that incorporates elements from two established scales(39, 40) and assesses skills specific to problem-solving as well as non-specific therapeutic skills (e.g. empathic understanding). Recurrent skills deficits noted by supervisors will be addressed through supplementary training workshops held on a monthly basis. The supervision schedule will ensure a representative selection of audio-recorded sessions, with the intention that all counsellors receive equal opportunities to discuss their cases. In addition, supervisors will undertake weekly telephone calls (20-30 minutes) with each counsellor in order to monitor the progress of their caseload, and identify and manage risks. The counsellors will be able to initiate ad hoc calls if immediate help is needed with any case.
Control arm. There are no mental health services in the participating schools. A standardized control arm was therefore devised. Participants allocated to this arm will receive the same printed problem-solving materials used in the intervention arm but without any counsellor contact. Immediately following random allocation to this condition, a researcher will provide a set of POD booklets and explain their purpose and contents using a standardized script. Students will be encouraged to read through the booklets in sequence, and complete the specified practice exercises. No further guidance will be provided.
Screening and outcome measures
Embedded recruitment trial. The primary outcome (referral rate) will be collated from referral logs maintained by researchers in each school. Referral data will be aggregated over each 4-week calendar period. Secondary outcomes pertaining to the eligibility and clinical characteristics of students referred to the host trial will be derived from the Strengths and Difficulties Questionnaire (SDQ)(41, 42) (see below).
Host trial. All screening and outcome assessments will be undertaken using standardized self-report measures that have been translated into Hindi. Clinical eligibility criteria (i.e. severity, chronicity and impacts of mental health symptoms) will be assessed using the adolescent-reported form of the SDQ and associated Impact Supplement. The same screening data will also serve as the baseline SDQ/Impact Supplement outcomes for eligible participants who are subsequently enrolled in the trial; baseline assessments for other outcome measures will be completed as soon as possible after completing consent procedures (ideally within 2 working days). The adolescent-reported SDQ/Impact Supplement will be repeated at 6 and 12 weeks post-randomization, along with the parent-reported SDQ/Impact Supplement, and adolescent-reported Youth Top Problems (YTP),(43) Perceived Stress Scale-4 (PSS-4)(44-46) and Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) (47). These measures are described in Table 2. The SDQ will also serve as the basis for assessing remission at both end-points, defined as falling below cut-offs for eligibility on both the SDQ Total Difficulties score (< 19 for boys and < 20 for girls) and Impact score (< 2).
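The remission rule stated above amounts to a simple conjunction of the two eligibility cut-offs, sketched here for clarity; the `gender` labels are illustrative, not the coding used in the trial dataset.

```python
def in_remission(total_difficulties, impact, gender):
    """Remission at a follow-up end-point, as defined in the protocol:
    below the gender-specific eligibility cut-off on the SDQ Total
    Difficulties score (< 19 for boys, < 20 for girls) AND below the
    cut-off on the Impact score (< 2). Gender labels are illustrative.
    """
    cutoff = 19 if gender == "boy" else 20
    return total_difficulties < cutoff and impact < 2
```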
Process measures
Process data on enrollment, randomization and assessment procedures in both trials will be obtained from researcher-completed record forms. These will be collated to obtain assent/consent rates of adolescents and parents (and reasons for missing assent/consent); randomization rates (and reasons for randomization errors); completion rates of baseline and follow-up outcome assessments (and reasons for non-completion); and time lags between intended and completed assessments (and reasons for deviating from targets). In addition, motivations for help-seeking and expectancies for the school counselling program will be explored at the time of eligibility assessment through a brief qualitative interview with a sub-sample of referred students. Similarly, the experiences and expectancies of ineligible students will be explored with a sub-sample of this group; assent/consent to use this interview data in the research will be obtained as part of the consent process for the embedded recruitment trial.
Intervention processes will be assessed using additional data sources. In the embedded recruitment trial, counsellor-completed record forms will provide data on key participation indicators including attendance rates and duration for all teacher meetings and classroom sensitization sessions, in addition to the number of posters and drop-boxes installed in the schools.
In the intervention arm of the host trial, counsellor-completed session record forms will be used to obtain process data on duration, spacing and frequency of attended sessions (and reasons for non-attendance); and intervention uptake and completion rates (and reasons for pre-treatment and mid-treatment drop-out). Participants’ adherence to treatment and potential engagement challenges will be assessed using checklists within the same record forms, indicating whether or not the student completed practice exercises at home, used the POD booklets at home, brought the POD booklets to the session, and demonstrated understanding of POD booklets and session content. Summative and sustained use of POD booklets will be assessed in each arm of the trial at 6- and 12-week follow-up assessments using a brief adolescent-reported measure that asks about estimated frequency of home use and perceived helpfulness of POD booklets in the preceding 6 weeks. Service satisfaction data will also be obtained from participants in each trial arm at 12 weeks using the Client Satisfaction Questionnaire-8 (CSQ-8)(55). Supplementary questions will elicit open-ended written feedback on the most helpful aspects of treatment and suggested modifications.
Intervention fidelity will be assessed in both trials using independent ratings of audio-recorded sessions. For the classroom sensitization intervention, 20% of all recordings will be selected at random and rated by a psychologist who is not directly involved with supervision of the intervention providers. A similar approach will be taken to fidelity ratings for the problem-solving intervention, for which 10% of all audio-recorded sessions will be rated independently. Reliability of the independent raters will be established initially by comparison with intervention quality ratings from supervisors (see above).
Blinding
Embedded recruitment trial. The researchers who co-facilitate the classroom sensitization sessions will also record referrals and conduct the host trial eligibility assessments. Blinding of the outcome assessors is therefore not possible.
Host trial. Baseline and outcome assessments will be conducted by separate teams of researchers. All trial investigators, apart from the data manager (BB), will be blind to allocation status until the trial arms are revealed in the presence of both the Trial Steering Committee and the Data Safety and Monitoring Committee. However, unblinding of individual participants will be undertaken in the event of a serious adverse event, if requested by the Data Safety and Monitoring Committee (DSMC).
Data collection, management and analysis
Data collection. There will be a seamless flow of adolescents from the embedded recruitment trial to the host trial. The schedules for enrollment, interventions and assessments are summarized in separate SPIRIT diagrams for the embedded recruitment trial (Figure 2) and host trial (Figure 3). A team of school-based researchers will process the referrals, undertake eligibility assessments for the host trial (within a target of <=3 working days from the date of referral) and obtain adolescent assent/consent (within the same day if possible). A separate team of community-based researchers will visit parents/guardians (within a target of <=2 working days after confirming an adolescent’s eligibility) to obtain consent and complete baseline outcome assessments (within the same day if possible). The school-based research team will complete baseline outcome assessments with adolescents once all consent procedures are completed (within a target of <=2 working days). All assessment procedures should therefore be completed within 7 working days from the date of referral.
The community-based research team (blinded to allocation) will complete follow-up assessments at 6 and 12 weeks post-randomization. Assessments will take place at participants’ homes or other convenient locations, within a maximum period of 7 calendar days from the due-date. For each scheduled contact, researchers will make up to four approaches.
Process data from researchers’ logs and counsellors’ session records will be captured on paper forms. All other measures, except for the YTP (which rates idiographic problems and could not be readily converted to a digital format), will be administered via a tablet computer.
Data management. Data will be collected digitally using the customized STAR software program(56), and will be remotely uploaded as comma-separated values (CSV) files on a server which is compliant with Good Clinical Practice (including date and time stamps for original data entry, and an audit trail documenting any subsequent changes). All paper-based data will be entered manually in SQL Epi-info forms and linked by participant ID with digitally collected data. Range and consistency checks will be performed at weekly intervals, with all inconsistencies and corrections logged to maintain an audit trail. All data will be anonymized and backed-up on external hard disks on a daily basis. All session audio-recordings will be linked with the participant ID and stored in a separate, secure, password-protected folder. A separate password-protected file linking names and participant IDs and the random allocation code will be maintained securely by the data manager and not accessed until the unblinding of the trial. All data will be shared in an encrypted form in password-protected files and through secure electronic transfer, when necessary.
Data analysis. Quantitative analysis will be conducted using STATA (version 15). A detailed analysis plan will be agreed with the Data Safety and Monitoring Committee towards the end of the trial and before any analysis is undertaken. Findings will be reported as per CONSORT guidelines(35) for the host trial, and the CONSORT extension for reporting of stepped-wedge cluster randomized trials for the embedded recruitment trial(34).
Embedded recruitment trial. The baseline characteristics of the participating 70 classes, including class size and gender composition, will be described and assessed for any systematic differences across the trial arms. Analysis for the primary outcome will be based on cluster-level summaries (i.e. the number of students referred to the host trial as a proportion of the total class size). Analysis will be based on the ‘within period’ comparison method (46). This estimates the intervention effect by comparing intervention and control conditions in a given period using cluster‐level data corresponding to exposure. Similarly, the proportions of referrals meeting the host trial eligibility criteria, referrals by source, and severity and patterns of SDQ scores will be assessed for differences across the two arms. All randomized classes will be included in the analysis and analysed in the arm to which they were initially assigned, irrespective of receipt of the intervention. Missing data will be handled by multiple imputation. All analyses for primary and secondary outcomes will be repeated for the sub-group of students who self-refer, as that is the only mechanism addressed by both the school-level and the classroom sensitization activities. No interim analysis will be undertaken.
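The cluster-level primary analysis can be illustrated with a minimal sketch: for one calendar period, each class contributes a single referral proportion, and arm means are compared on these cluster-level summaries. The inputs below are synthetic; the actual analysis will follow the cited within-period method.

```python
from math import sqrt
from statistics import mean, stdev

def within_period_difference(referrals, class_sizes, arm):
    """Compare arms within one calendar period using cluster-level summaries.

    `referrals` maps class ID -> number of students referred in the period,
    `class_sizes` maps class ID -> registered students, and `arm` maps class
    ID -> 'I' (intervention) or 'C' (control). Returns the difference in mean
    cluster-level referral proportions (I minus C) and its standard error.
    Illustrative only; not the full clustersampsi/within-period machinery.
    """
    props = {"I": [], "C": []}
    for cls in referrals:
        props[arm[cls]].append(referrals[cls] / class_sizes[cls])
    diff = mean(props["I"]) - mean(props["C"])
    se = sqrt(stdev(props["I"]) ** 2 / len(props["I"])
              + stdev(props["C"]) ** 2 / len(props["C"]))
    return diff, se
```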
Host trial. The trial flowchart will include the number of students referred, screened, eligible, randomised and analysed for the primary outcome at the 6- and 12-week endpoints respectively. The number refusing or excluded (with reasons), actively withdrawing, and passively lost to follow-up will be shown by arm. These will be summarised by means (standard deviation), medians (interquartile range) or numbers and proportions as appropriate by key relevant subgroups (defined by age, gender and baseline outcome score). For continuous outcomes, histograms within each arm will be plotted to assess normality and whether transformation is required.
The primary analyses will be on an intention-to-treat basis at the 6-week end-point, adjusted for baseline values of the outcome measure, school (as a fixed effect in the analysis) to allow for within-school clustering, counsellor variation (as a random effect), and variables for which randomization did not achieve reasonable balance between the arms at baseline, or those associated with missing outcome data (57). Analyses of outcomes will be conducted using linear mixed-effects regression models for continuous outcomes with normally-distributed errors (e.g. SDQ Total Difficulties score) and generalized (logistic) mixed-effects regression models for binary outcomes (e.g. remission rate). Intervention effects will be presented as adjusted mean differences and effect sizes (ES), defined as standardized mean differences. We will use 95% confidence intervals (CIs) for continuous outcomes, and adjusted odds ratios with 95% CIs for binary outcomes. Additionally, treatment effects for students who receive fewer sessions than prescribed will be estimated using the Complier Average Causal Effect structural equation model(58). Given that we have two follow-up time points (6 and 12 weeks), the analyses will be conducted and interpreted separately for each of these end-points, as well as for repeated measurements. The repeated measures analysis will include an interaction effect between arm and time to allow for differential effects at the two end-points. Multiple imputation will be used to deal with missing data. No interim analyses of outcomes will be undertaken.
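For reference, the effect size (standardized mean difference) reported alongside adjusted mean differences divides the difference in arm means by the pooled standard deviation. The sketch below shows the plain, unadjusted version of this calculation; the trial's estimates will come from the adjusted mixed-effects models.

```python
from math import sqrt
from statistics import mean, stdev

def standardized_mean_difference(intervention, control):
    """Unadjusted standardized mean difference (Cohen's d):
    difference in arm means divided by the pooled standard deviation.
    A plain sketch of the ES definition, not the trial's adjusted estimate.
    """
    n1, n2 = len(intervention), len(control)
    pooled_sd = sqrt(((n1 - 1) * stdev(intervention) ** 2
                      + (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
    return (mean(intervention) - mean(control)) / pooled_sd
```

A negative value favours the intervention for symptom scores such as the SDQ Total Difficulties, where lower scores indicate improvement.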
We will explore potential moderators of intervention effects, with respect to a priori defined modifiers (i.e. age, gender, chronicity of mental health difficulties, severity of mental health difficulties). We will fit relevant interaction terms and test for heterogeneity of intervention effects in regression models. A mediation analysis will be conducted to examine whether the theoretically-driven a priori factor (perceived stress at 6 weeks) mediates the effects of the intervention on primary outcomes (i.e. mental health symptoms and idiographic problems) at 12 weeks.
Process evaluation. We will undertake descriptive statistical analysis of quantitative process data in order to explore the differential implementation of intervention procedures. In addition, thematic analysis will be used to code and organise qualitative interview data on treatment expectancies (assessed prior to enrolment in the host trial) and qualitative written feedback on treatment satisfaction (assessed at 12-week follow-up in the host trial). Findings from the various data sources will be triangulated and used to develop explanatory hypotheses about potential differences in intervention delivery and participation across schools, subgroups of participants and providers. This process evaluation analysis will be completed before unblinding of the trial results. Once both analyses are complete, process evaluation findings will be used to facilitate interpretation of the main trial results. The trial statisticians may conduct further analyses to test hypotheses generated from integration of the process evaluation and trial outcome data.
Cost analysis. The costs associated with the introduction of the experimental and control arm interventions will be estimated by adding the personnel costs for counsellors and supervisors, together with fixed costs of training (venue/per diem), furniture and supplies. All costs will be reported in Indian Rupees and then converted to US Dollars at the average daily exchange rate over the preceding 12 months.
Trial governance
Monitoring and governance for both trials will be provided by a Trial Management Group (TMG), Trial Steering Committee (TSC) and Data Safety and Monitoring Committee (DSMC). The TSC and DSMC are independent of the funders and comprise lead investigators and independent domain experts. The TMG and TSC will review trial process indicators (e.g. rates of screening, eligibility, consent, outcome assessments, adverse events) fortnightly and quarterly respectively. The DSMC will receive reports of serious adverse events (as per criteria below). Any trial protocol amendments will be agreed and formulated in conjunction with the TSC and DSMC, and submitted to relevant Institutional Review Boards for approval.