Checklist Development
In the initial developmental phase, existing evaluation frameworks and reporting checklists, focusing on less linear processes than traditional randomized controlled trials (RCTs), offered valuable input. An “iterative phased approach that harnesses qualitative and quantitative methods” (Campbell et al., 2000, p. 696) guided the development of our envisioned checklist, whose purpose is to “assess fidelity and quality of implementation, clarify causal mechanisms and identify contextual factors associated with variation in outcomes” (Craig et al., 2008, p. 34; Medical Research Council, 2000) and to evaluate “more complex, less advantageous settings” (Glasgow et al., 1999, p. 1322). This existing knowledge ultimately defined our final checklist’s ‘backbone’, comprising four categories: context and preparation; description of intervention; execution and delivery; and mechanisms of impact (Table 1).
Apart from frameworks for the evaluation of worksite-based programs, including those that distinguish facilitating and impeding factors (Fleuren et al., 2004; Wierenga et al., 2016), we sought input from team training interventions targeting healthcare providers (Weaver et al., 2010; Zhu et al., 2015; Marlow et al., 2017). We consulted several frameworks and guidelines describing how best to report interventions, for example the CReDECI 2 guidelines (Criteria for Reporting the Development and Evaluation of Complex Interventions in healthcare), which build on the aforementioned Medical Research Council framework (Campbell et al., 2000; Möhler et al., 2015). Since our main objective was the reporting of interventions and their implementation, we also searched for resources focusing on specific study designs. In addition to the aforementioned sources, we reviewed a broad range of reporting guidelines and frameworks (and their more recent extensions) to find missing components or elements, such as: TIDieR for reporting interventions (Hoffmann et al., 2014); the CONSORT and STROBE extensions for simulation-based research (Moher et al., 2010a; Cheng et al., 2016); StaRI for reporting the implementation of CIs (Pinnock et al., 2015; Pinnock et al., 2017); PARIHS for evaluating the implementation of evidence into practice (Rycroft-Malone et al., 2008; Kitson, 2008; Ward et al., 2017); RE-AIM/PRISM for evaluating an implementation (Glasgow et al., 1999; Glasgow et al., 2019); SQUIRE for reporting quality improvement (Goodman et al., 2016); and CReDECI for reporting the development and evaluation of complex interventions (Möhler et al., 2013; Möhler et al., 2015). These efforts mainly contributed to textual adaptations and alterations, as well as some changes to the composition and ordering of the elements, but did not provide significant additional topics for our checklist.
During the final developmental stages, we formulated components by combining the terminology used in the studied frameworks. Several papers described the prolific growth of terminology in implementation evaluation science and presented guidelines or ‘meta-frameworks’ for them (Damschroder et al., 2009; Bragge, 2017; Pfadenhauer et al., 2017). Therefore, we attempted to use the definitions and terminology provided by these guidelines as often as possible in the Reporting Complex Multidisciplinary Healthcare Teamwork Training (ReCoMuTe) checklist. However, more specific domain-related terminology (i.e., CTTIs) was also included in our final version (Table 1).
Table 1. Categories, components and elements of the ‘Reporting Complex Multidisciplinary Healthcare Teamwork Training’ (ReCoMuTe) checklist
Category
|
Key components
|
Elements to describe when reporting
|
I - Context and Preparation
|
C1 Needs & barrier assessment
|
§ Assessment informing (tailored) design, deployment and evaluation
§ Clear and aligned (e.g. organizational) strategies and justifications
§ Specification of local problems/challenges
|
C2 Engagement & endorsement
|
§ Motivation, engagement and readiness of the organization and participants (activities and assessments related to, for example: sense-of-urgency; sense-making; shared understanding; coalition formation; resistance reduction; incentives)
§ Endorsement and support (e.g., management; executives; medical staff)
§ Clear communication and information (e.g., on program objectives)
|
C3 Contextualization
|
§ Context optimization (e.g., appreciative enquiry with envisioned participants; organizational culture change)
§ Anticipation of simultaneous (possible conflicting) interventions
§ Intervention adaptations (e.g., design, content, planning)
§ Feasibility assessment of and pilot testing the conceptual program
|
C4 Organization
|
§ Structure and roles of the implementation within the organization and among its members
|
II - Description of Intervention
|
D1 Objectives, content, planning & participants
|
§ Objectives and outcomes*
§ Participants (e.g., characteristics; selection; recruitment; enrolment); team(s) (e.g., authority differentiation; temporal stability; tasks and skills differentiation)
§ Content and materials*
§ Detailed planning activities* (e.g., timing; duration; location; frequency; timeline visualization)
§ Required resources (e.g., time; finances; materials; location)*
§ Anticipated causal relation objectives vis-à-vis content and activities*
§ Metrics on progress and outcomes monitoring*
§ Theories and evidence (e.g., effectiveness) underpinning the rationale for the design, deployment and evaluation
|
D2 Facilitation
|
§ Planned facilitation strategies, tactics and processes (during and outside planned sessions)
§ Main facilitator’s (e.g., master trainer) characteristics (e.g., background; position; selection; gender)
§ Additional facilitation (e.g., individual leadership coaching)
§ Informal facilitation (e.g., support to and from champions)
§ Resistance/‘change fatigue’ reduction
§ Implementation communication (e.g. progress; milestones; updates; materials, such as: pens, badges, (online) nudges)
|
D3 Sustainability
|
§ Sustainment strategies and activity (e.g., integration in organizational quality and educational cycles; periodical evaluation)
§ Post-intervention activities (e.g., refresher or newcomers’ training)
|
III - Execution and Delivery
|
E1 Fidelity / adaptability
|
§ ‘Reach’; dose delivered/ received
§ Details of delivery activities (temporal; including (un)planned or -intended)
§ Intervention vis-à-vis context interactions, including (longitudinal) accounts of: (a) (Psychological and other types of) fidelity to intended delivery; (b) adaptations to intervention; (c) changes in context (including those imparted by the intervention)
§ Contextual detail (e.g., duration; unintended experiences)
|
E2 Learning, training & transfer
|
§ Used training and educational strategies, tactics and methods (e.g., simulation/ didactic sessions; multi-source feedback/ self-reflection; uni-/multidisciplinary approach)
§ Reception and acceptance of activities (e.g., experiences; responses; satisfaction; feedback)
|
E3 Faculty
|
§ Trainers, coaches, guest lecturers, etc. (e.g., profession; background; (hierarchical) position; motivation; selection; tasks; experiences)
|
IV - Mechanisms of Impact
|
M1 Evaluation & analysis
|
§ Methodology for (process) evaluation (including theoretical rationale)
§ Descriptions and/or selective narratives reflecting topics relating change management to organization behavioral and other social structures (What happened?)
§ Impact mechanisms (e.g., anticipated versus unexpected)
§ Mediating (expected and unexpected) factors, pathways and consequences
§ Strengths and limitations of intervention’s components
§ Lessons learned
§ Possible (causal) explanations of activities’ impact on outcomes
§ Description of the control group and conditions (if applicable)
|
(*at various (organizational) levels)
Face Validity Testing
Our initial search resulted in 1,100 potential abstracts (Figure 3). After removing duplicates (n = 347), the remaining abstracts (n = 753) were screened independently by two researchers (RW and WK), who then discussed any disagreements. When criteria were not described fully in the title and abstract, or when the researchers doubted the nature of the quality improvement, the article was included for full-text analysis. One article (Motley & Dolansky, 2015) in this literature review could not be retrieved. Snowballing reviews did not result in new records. The full texts of the papers (n = 135) were then assessed for eligibility by two researchers (RW and WK), and any disagreements were discussed. A third reviewer (CW) was available for the latter process but was not needed. The entire literature-selection process produced a total of 27 articles for this review (Table 3).
Figure 3. Flowchart of the literature-selection process
Study characteristics
The majority of the included studies originated from U.S. healthcare facilities (n = 24) or U.S.-based organizations such as the U.S. military (n = 1; see Table 2). Most studies were conducted in acute or surgical clinical settings (n = 18). Six studies described TS implementation across a hospital system or healthcare region. Additionally, about half of the included studies were conducted in university hospital settings (n = 14), while the remaining studies were performed in mental health, rural hospital, military, or general hospital settings. Two publications did not report the studied TS setting.
Fifteen studies used a pre-post intervention study design. Other designs included longitudinal (n = 6), mixed methods, cohort, cluster-design experimental, and cross-sectional comparison combined with a pre-post intervention design. One study did not report its research design. Surveys were the most frequently used data collection method (n = 22). Only four studies used qualitative data collection methods, such as interviews, focus groups, or anecdotal evidence. Observational data were used in nine of the 27 studies.
Five studies applied all of Kirkpatrick’s four levels of team training assessment as evaluation methods to measure intervention effects (Kirkpatrick, 1994; Table 2). Three articles measured three levels, nine measured two levels, and eight measured one level. Two papers did not report any level of training evaluation.
Quality of reporting
An assessment of how extensively the included publications applied the eleven components of the ReCoMuTe checklist revealed that a majority of the studies scored low on most of them (Table 3). The most extensively reported component was ‘objectives, content, planning & participants’ (D1), from the checklist’s second category (description of intervention): more than half (52%) of the studies achieved moderate or high scores on it (Table 3). All the other components were, on average, reported insufficiently, scoring nil or minimum.
Applying the four-level ratings to the E2 (‘Learning, training & transfer’) and E3 (‘Faculty’) components was difficult due to absent or meager reporting details. Therefore, we decided to rate these components with a two-tier scoring system: ‘reported’ or ‘not reported’ (Table 3). Consequently, the summed ratings for these two components are less comparable than the others.
The Thomas and Galla (2013) study scored highest on almost all the checklist’s components, followed by four others (Stead et al., 2009; Turner, 2012; Brodsky et al., 2013; Jones et al., 2013b).
Table 2. Study characteristics of the included (n=27) studies reporting on TS implementation, including their application of Kirkpatrick’s levels of training evaluation
Study
|
Country
|
Hospital type
|
Setting
|
Study design
|
Data collection method
|
Kirkpatrick’s levels of training evaluation
|
|
|
|
|
|
|
Level 1: reaction
|
Level 2: learning
|
Level 3: transfer
|
Level 4: results
|
Stead et al. (2009)
|
Australia
|
Mental health clinic
|
Hospital wide
|
Pre-post training
|
Surveys/observation
|
●
|
●
|
●
|
●
|
Capella et al. (2010)
|
USA
|
Academic
|
Trauma Center
|
Pre-post training
|
Observations/clinical outcome data
|
○
|
○
|
○
|
○
|
Deering et al. (2011)
|
USA/Iraq
|
US Military Healthcare System
|
Multiple Combat Support Hospitals
|
Pre-post training
|
Clinical outcome data
|
○
|
○
|
○
|
●
|
Forse et al. (2011)
|
USA
|
Academic
|
OR
|
Pre-post training
|
Surveys/clinical outcome data
|
●
|
○
|
○
|
●
|
Mayer et al. (2011)
|
USA
|
Academic
|
Pediatric & surgical ICU
|
Longitudinal
|
Surveys/interviews/ observations/clinical outcome data
|
○
|
●
|
●
|
●
|
Mahoney et al. (2012)
|
USA
|
Mental health
|
Hospital wide
|
Pre-post training
|
Surveys
|
○
|
●
|
○
|
○
|
Turner (2012)
|
USA
|
Academic center
|
ED
|
Not reported
|
Anecdotal
|
○
|
○
|
○
|
○
|
Brodsky et al. (2013)
|
USA
|
Academic center
|
NICU
|
Pre-post training
|
Surveys
|
●
|
●
|
●
|
○
|
Jones et al. (2013A)
|
USA
|
Multiple hospitals
|
ED
|
Pre-post training
|
Surveys
|
○
|
○
|
○
|
●
|
Jones et al. (2013B)
|
USA
|
Multiple critical access hospitals
|
Hospital wide
|
Cross-sect., comparison; pre-post training
|
Surveys
|
○
|
○
|
●
|
●
|
Sawyer et al. (2013)
|
USA
|
Army Medical Center
|
NICU
|
Pre-post training
|
Surveys/observation
|
○
|
●
|
●
|
○
|
Sheppard et al. (2013)
|
USA
|
Non-profit hospital
|
Hospital wide
|
Pre-post training
|
Observations
|
○
|
○
|
●
|
●
|
Thomas & Galla (2013)
|
USA
|
Non-profit hospital
|
Hospital wide
|
Pre-post training
|
Surveys
|
●
|
●
|
●
|
●
|
Klipfel et al. (2014)
|
USA
|
Academic center
|
Urology surgery
|
Pre-post training
|
Surveys
|
●
|
●
|
○
|
○
|
Spiva et al. (2014)
|
USA
|
Unknown
|
Acute care
|
Longitudinal
|
Surveys/observation
|
○
|
●
|
●
|
●
|
Amaya-Arias et al. (2015)
|
Colombia
|
Unknown
|
OR
|
Pre-post training
|
Surveys
|
●
|
○
|
●
|
○
|
Beitlich (2015)
|
USA
|
Rural clinics
|
L&D and NICU
|
Pre-post training
|
Surveys
|
○
|
○
|
○
|
●
|
Fischer et al. (2015)
|
USA
|
Tertiary military center
|
OR
|
Cohort study
|
Surveys
|
○
|
○
|
○
|
●
|
Gupta et al. (2015)
|
USA
|
Academic center
|
Interventional ultrasound
|
Pre-post training
|
Surveys
|
○
|
●
|
○
|
○
|
Scotten et al. (2015)
|
USA
|
Academic center
|
Pediatric
|
Longitudinal
|
Surveys
|
○
|
●
|
○
|
●
|
Sonesh et al. (2015)
|
USA
|
Academic center
|
Obstetrics
|
Mixed methods
|
Surveys/observations
|
●
|
●
|
●
|
●
|
Treadwell et al. (2015)
|
USA
|
Primary care
|
Primary care
|
Cluster design experimental
|
Surveys
|
○
|
●
|
○
|
○
|
Gaston et al. (2016)
|
USA
|
Academic center
|
Oncology
|
Mixed methods
|
Surveys/focus groups
|
●
|
●
|
●
|
●
|
Lisbon et al. (2016)
|
USA
|
Academic center
|
ED
|
Longitudinal
|
Surveys
|
○
|
●
|
○
|
●
|
Rhee et al. (2016)
|
USA
|
Academic center
|
Peri-OR
|
Longitudinal
|
Observations
|
○
|
○
|
●
|
○
|
Wong et al. (2016)
|
USA
|
Academic center
|
ED
|
Longitudinal
|
Surveys
|
○
|
●
|
○
|
●
|
Peters et al. (2017)
|
USA
|
Academic center
|
ED
|
Pre-post training
|
Surveys/observations/clinical outcome data
|
●
|
●
|
●
|
●
|
|
|
|
|
|
|
9
(33%)
|
16 (60%)
|
13 (48%)
|
17 (62%)
|
Legend to Table 2
Not reported (○); reported (●)
Percentages represent the number of papers using the Kirkpatrick level for training evaluation compared with the total (n=27).
OR, Operating Room; ICU, Intensive Care Unit; ED, Emergency Department; NICU, Neonatal Intensive Care Unit; L&D, Labor and Delivery.
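The percentage figures in the bottom row of Table 2 follow from dividing each level’s count of studies by the 27 included studies. A minimal illustrative sketch (the function name and the example counts are ours, assuming simple rounding):

```python
# Illustrative sketch, not taken from the paper: deriving a per-level
# percentage from the count of studies reporting a Kirkpatrick level.
N_STUDIES = 27  # total number of included studies

def level_percentage(n_reporting: int, total: int = N_STUDIES) -> int:
    """Share of studies applying a given Kirkpatrick level, as a rounded percent."""
    return round(100 * n_reporting / total)

# Example: 9 of 27 studies reporting Level 1 corresponds to 33%.
print(level_percentage(9))   # -> 33
print(level_percentage(13))  # -> 48
```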
Table 3. Scoring of the publications on TeamSTEPPS implementation (n=27) with the ReCoMuTe checklist components, including summated scores
Author
year
Code
|
1
|
2
|
3
|
4
|
5
|
6
|
7
|
8
|
9
|
10
|
11
|
12
|
13
|
14
|
15
|
16
|
17
|
18
|
19
|
20
|
21
|
22
|
23
|
24
|
25
|
26
|
27
|
|
|
Stead et al. (2009)
|
Capella et al. (2010)
|
Deering et al. (2011)
|
Forse et al. (2011)
|
Mayer et al. (2011)
|
Mahoney et al. (2012)
|
Turner (2012)
|
Brodsky et al. (2013)
|
Jones et al. (2013A)
|
Jones et al. (2013B)
|
Sawyer et al. (2013)
|
Sheppard et al. (2013)
|
Thomas & Galla (2013)
|
Klipfel et al. (2014)
|
Spiva et al. (2014)
|
Amaya-Arias et al. (2015)
|
Beitlich (2015)
|
Fischer et al. (2015)
|
Gupta et al. (2015)
|
Scotten et al. (2015)
|
Sonesh et al. (2015)
|
Treadwell et al. (2015)
|
Gaston et al. (2016)
|
Lisbon et al. (2016)
|
Rhee et al. (2016)
|
Wong et al. (2016)
|
Peters et al. (2017)
|
Low score (0 or 1)
|
High score (2 or 3)
|
C1
|
●
|
○
|
○
|
●
|
●
|
●
|
○
|
●●
|
○
|
●●
|
○
|
○
|
●●
|
●●●
|
○
|
○
|
○
|
○
|
●●
|
●
|
●●●
|
○
|
○
|
○
|
○
|
●●●
|
●
|
70%
|
30%
|
C2
|
●●
|
●
|
●
|
●
|
●
|
●●●
|
●●
|
●●
|
●
|
●●
|
○
|
●●
|
●●●
|
●
|
●
|
●
|
○
|
○
|
●
|
●
|
○
|
○
|
○
|
○
|
●●
|
●●●
|
○
|
67%
|
33%
|
C3
|
●●
|
●
|
○
|
○
|
●●
|
●
|
●●●
|
○
|
●
|
○
|
○
|
●
|
●●●
|
○
|
○
|
○
|
○
|
○
|
○
|
○
|
●
|
○
|
○
|
○
|
○
|
●
|
○
|
85%
|
15%
|
C4
|
●
|
●
|
●
|
●●
|
●
|
●●●
|
●
|
●●
|
○
|
○
|
○
|
●●
|
●●●
|
●
|
●●
|
○
|
●●
|
●
|
●●
|
●
|
○
|
●
|
●
|
○
|
●●
|
○
|
●
|
63%
|
37%
|
D1
|
●●
|
●●
|
●●●
|
●
|
●●
|
●●
|
●
|
●●●
|
○
|
●●
|
●●●
|
●
|
●●●
|
●●●
|
●
|
●●
|
○
|
○
|
○
|
○
|
○
|
●●
|
○
|
○
|
○
|
○
|
●●●
|
48%
|
52%
|
D2
|
●●●
|
○
|
●
|
○
|
●
|
○
|
○
|
●
|
○
|
●
|
●
|
●
|
●●●
|
●
|
○
|
●
|
○
|
○
|
○
|
○
|
●
|
●
|
●●
|
○
|
●●
|
●
|
○
|
85%
|
15%
|
D3
|
●
|
●
|
○
|
○
|
○
|
●●●
|
●●
|
●●
|
○
|
●●●
|
●
|
○
|
●●●
|
●
|
○
|
○
|
●
|
○
|
○
|
●●
|
○
|
○
|
●
|
○
|
●●●
|
●●●
|
●
|
70%
|
30%
|
E1
|
○
|
○
|
●●
|
○
|
●
|
○
|
●●●
|
●
|
●
|
●
|
●●
|
○
|
●●
|
●
|
○
|
●●●
|
○
|
○
|
○
|
●
|
●
|
●
|
○
|
●
|
○
|
●
|
○
|
81%
|
19%
|
E2 *
|
□
|
■
|
□
|
□
|
□
|
■
|
□
|
■
|
□
|
□
|
□
|
■
|
■
|
■
|
■
|
□
|
■
|
□
|
□
|
□
|
■
|
□
|
■
|
■
|
□
|
■
|
■
|
52%
|
48%
|
E3 *
|
■
|
■
|
■
|
■
|
■
|
■
|
■
|
■
|
■
|
■
|
□
|
■
|
■
|
■
|
■
|
□
|
□
|
■
|
□
|
□
|
■
|
■
|
■
|
■
|
■
|
■
|
□
|
22%
|
78%
|
M1
|
●
|
●
|
●
|
●●●
|
○
|
○
|
●●
|
●●●
|
○
|
●●●
|
●●
|
●
|
●●●
|
●●
|
○
|
○
|
●
|
●
|
●
|
○
|
●
|
●
|
●
|
○
|
○
|
●
|
●
|
74%
|
26%
|
Low score
(0 or 1)
|
44%
|
89%
|
78%
|
78%
|
78%
|
56%
|
44%
|
33%
|
89%
|
44%
|
67%
|
78%
|
0%
|
67%
|
89%
|
78%
|
89%
|
100%
|
78%
|
89%
|
89%
|
89%
|
89%
|
89%
|
56%
|
67%
|
89%
|
|
|
High score
(2 or 3)
|
56%
|
11%
|
22%
|
22%
|
22%
|
44%
|
56%
|
67%
|
11%
|
56%
|
33%
|
22%
|
100%
|
33%
|
11%
|
22%
|
11%
|
0%
|
22%
|
11%
|
11%
|
11%
|
11%
|
11%
|
44%
|
33%
|
11%
|
|
|
Legend to Table 3
Four-level rating on ‘reporting level’: ‘nil’ (○), ‘minimum’ (●), ‘moderate’ (●●), ‘high’ (●●●).
*Components E2 (Learning, training & transfer) and E3 (Faculty) scored with two-level rating: ‘not reported’ (□), ‘reported’ (■).
Low score percentages represent total scores on reporting levels ‘nil’ (○) or ‘minimum’ (●); high score percentages represent total scores on reporting levels ‘moderate’ (●●), ‘high’ (●●●).
Cells with summated percentage scores > 50 are accentuated in bold.
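The summated low/high percentages in Table 3 can be reproduced by mapping the symbol ratings to scores 0–3 and counting scores of 0–1 as ‘low’ and 2–3 as ‘high’. A minimal illustrative sketch (the function name and the example row are ours, assuming simple rounding):

```python
# Illustrative sketch, not taken from the paper: converting the symbol
# ratings used in Table 3 into summated low/high score percentages.
RATING_SCORE = {'○': 0, '●': 1, '●●': 2, '●●●': 3}

def low_high_percentages(ratings):
    """Return (low%, high%): low = score 0 or 1, high = score 2 or 3."""
    scores = [RATING_SCORE[r] for r in ratings]
    low = sum(s <= 1 for s in scores)
    high = len(scores) - low
    return round(100 * low / len(scores)), round(100 * high / len(scores))

# Hypothetical row of ratings for one checklist component across ten studies.
example_row = ['●', '○', '●●', '●●●', '●', '○', '●●', '●', '○', '●']
print(low_high_percentages(example_row))  # -> (70, 30)
```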
Final version
During both the checklist’s development and its validity evaluation, we checked for overlap between elements and for well-balanced appropriateness, clarity, and conciseness of wording, while ensuring that no element was left out and the meaning of the descriptions remained intact. While validity testing helped to make the elements’ descriptions more concise, it did not reveal new categories, components, or elements for our checklist, suggesting completeness in our design for assessing reports on the implementation of TS or similar CTTIs.
I - Context & preparation (C1-4)
The first category of the checklist (see Table 1) describes the context and preparatory activities before actual deployment of an intervention. The environment or setting in which the intervention is implemented and deployed is referred to as ‘context’ (Pfadenhauer et al., 2017; Booth et al., 2019). The checklist sets out to address four questions: Did the implementers assess the needs and requirements? (C1) Is there support from leadership as well as from participants? (C2) How is the training intervention contextualized? (C3) How and by whom is the intervention organized? (C4)
The local infrastructure, availability of resources, knowledge, and experience can vary significantly between healthcare organizations, subsequently affecting implementation potential and execution (Kitson et al., 2008). Moreover, the significance of barrier assessments to inform anticipatory sustainability efforts and the involvement of decision-makers and management have been discussed in previous studies (Kemper et al., 2014; Moffatt-Bruce et al., 2017). Adapting interventions to contextual settings has been described as a tactic to create, for example, shared ownership of improvement among staff (Haerkens et al., 2018).
II – Description of intervention (D1-3)
The second category provides an overview of the elements pertaining to the CI characteristics, including its participants, facilitation, and sustainability efforts. The following questions are typically addressed: What are the intervention’s objectives, content, and planning? (D1) Which team(s) and participants are involved and what characterizes them? How is facilitation planned and who will facilitate? (D2) What are the sustainability strategies and activities? (D3). This category also comprises planned and anticipated support outside the training sessions and sustainability activities.
Detailed descriptions of the interventions’ objectives, practicalities (e.g., content, planning, resourcing), as well as their theoretical rationale and related strategies and tactics are imperative for scientific meta-analysis and the replicability of studies (Moher et al., 2010; Golub & Fontanarosa, 2015; Cheng et al., 2016). Additionally, successful intervention implementation is contingent upon facilitation, scaffolding its roll-out and supporting participants’ engagement and learning. Likewise, the characteristics of the involved team(s) are essential, such as task types, the differentiation of authority across the team(s), and their stability over time (Hollenbeck et al., 2012; Wildman et al., 2012). Thus, effective reporting comprises detailed descriptions of the strategies, tactics, and processes used by an individual (i.e., facilitator) to help others improve their knowledge, skills, or attitudes and thereby improve the intervention’s likelihood of success, including specifications of the facilitator’s qualifications and subject matter expertise (Kitson et al., 1998; Salas et al., 2002). Furthermore, contextual information can be relevant since, for example, small facilities can lack implementation strength, which has to be addressed and anticipated (e.g., with more appropriate facilitation) (Kitson et al., 2008).
Various frameworks also emphasize the relevance of describing sustainment, or maintenance activities, since team training effects at various levels (i.e., reactions, learning, transfer, and outcome) often diminish over time (Arthur et al., 1998; Wierenga et al., 2014). Reports on TS implementation note a decline in training effects within six months to a year (Forse et al., 2011; Thomas & Galla, 2013). Strategies such as regularly planned competency refresher training can help mitigate such declines (Sonesh et al., 2015). Further, concurrent workplace-based coaching and mentoring of staff, as well as post-training sustainability activities, can be applied (Morey et al., 2002; Marshall & Manus, 2007).
Strategies and activities focusing on sustaining the long-term effects of healthcare team training implementation have been suggested to be an underexplored part of implementation and research (Lee et al., 2017). Since “team training is not a one-day or single-session event” (Gillespie et al., 2010, p. 655), the sustainment phase of TS and similar interventions is imperative (AHRQ, 2014). However, only 15 of the 27 articles reviewed here reported such sustainment activities.
III – Execution and delivery (E1-3)
The third category provides components and elements that facilitate reporting effectively and objectively on what occurred during the intervention implementation, including related educational and pedagogical perspectives pertaining to training activities. We thus address the following questions: What and how much was done—planned and unplanned? What were the deviations from the planned implementation (and why)? (E1) How was it done and received? (E2) Who delivered the intervention? (E3)
Our difficulties with rating the included studies’ reports of components E2 and E3 might have arisen because we selected studies in which a TS master trainer was present. TS master trainers follow a standardized training session of (a minimum of) two and a half days, and their roles and tasks are well described in the TS documentation, which possibly kept the researchers from reporting extensively on this issue. Additionally, we realized that the checklist’s E3 (‘Faculty’) component overlapped somewhat with the D2 (‘Facilitation’) elements. While comprehensive multidisciplinary team training curricula, such as TS, are on the rise, healthcare organizations are increasingly applying customized ‘self-made’ approaches, using a variety of trainers, coaches, and other faculty members (Gross et al., 2019). We suggest that, given the essential role of these individuals in teaching, coaching, and assisting others, reports should describe their roles in detail, using both components.
IV – Mechanisms of impact (M1)
The fourth category of the ReCoMuTe checklist provides a set of elements to assist in detailed reporting on impact evaluation. These help to assess possible causal mechanisms and relationships between actual training and facilitatory activities vis-à-vis effects, or lack thereof. Such evaluations of interventions and detailed analyses of either facilitating or impeding mediators, unexpected pathways, and consequences help to answer: Why and how did the change (not) happen the way it happened?
Explaining the intervention’s mechanisms of impact can encompass dynamics that interrelate or overlap across the checklist’s categories and their components. Reports that observed or narrated facilitating and impeding factors as part of a process evaluation provide essential information for, for example, further successful replication. Additionally, an initial inventory of determinants provides authors with a practicable instrument for more complex assessments of what happened. Explicating in detail the ‘why’ of the observed and measured effects, as well as the often unexpected and tacit dynamics imparted by the implementation efforts, requires authors’ reports to draw on a convergence of viewpoints, including change management and implementation science. The table resulting from our thematic analysis of the 27 included publications on TS implementation provides an exemplary multi-study overview of determinants (Table 4).
Moderators
Facilitating and impeding factors are moderators that intentionally or unintentionally regulate or determine an implementation process and/or its outcomes (Fleuren et al., 2004). Such factors can be described by their characteristics, which can be classified into five categories: (1) socio-political context (e.g., other interfering interventions; regulations; professionalism); (2) organization (e.g., culture; leadership; resources); (3) facilitator/implementer (e.g., skills; background; profession); (4) intervention program (e.g., timing; content; complexity); and (5) participants (e.g., participation; attitude; profession; prior experiences) (Wierenga et al., 2012; Wierenga et al., 2013). Our thematic coding of the factors reported in the included studies as ‘facilitating’ or ‘impeding’ revealed that various, mostly facilitating, factors were reported (Table 4).
Table 4. Factors reported as (a) facilitating or (b) impeding TeamSTEPPS implementation in the 27 publications
Categories1
|
Subcategories
|
Facilitating factors
|
Impeding factors
|
1. Socio-political context
|
|
· Hospital-wide central meetings to share best practices [12]2
|
· Other interfering safety interventions [16]
|
2. Organization
|
Culture
|
· Open communication and mutual respect [5]
|
· Work environment did not support training, learning, or transfer [5]
|
Leadership
|
· Senior physician leadership met with other physicians [15]
· Support by leadership [1, 2, 4, 5, 6, 7, 8, 12, 13]
· Change team recognized a need for change [2]
· Change ownership was taken up and driven by local change team [8]
|
· Difficult to convince leadership [13]
|
Resources
|
· Plenty of time [10]
· Infrastructure was already present [4, 27]
· Overtime training hours were budgeted [12]
|
· Lack of resources (sufficient finances, training time) [4, 8, 10, 23, 26]
|
Turnover
|
· Low staff turnover [4, 12]
|
· High staff turnover [10]
|
Job position of the implementer
|
· Trainers from different disciplines [6, 8, 13, 19]
· Physicians trained by other physicians [13]
|
|
Collaboration/interaction
|
· Allowing participants to contribute their own ideas/opinions [7, 8, 14]
· Creation of video vignettes and scenarios achieved a high level of staff resonance and buy-in [6]
|
|
3. Facilitator
|
Degree of rewards
|
· Encourage use of learned skills by handing out small aids [5, 27]
|
|
Preparation
|
· Participating in questionnaires about patient safety [10]
· Providing information before training [5]
|
|
4. Intervention program
|
Sustainment
|
· Encouraging use of learned skills by team leaders [5]
· Additional learning opportunities (to classroom training) [10]
· Repetition of training during iterative simulations and pre-shift briefs [17]
· Coaching of staff in desired teamwork behavior [1]
· Success stories narrated by staff (‘storytelling’) [18]
|
|
Complexity
|
|
· Lack of guidance on training deployment [2]
|
Timing of intervention activities
|
· Free from clinical duties to enable participation [1, 10, 19, 25]
· Trainings scheduled at convenient times, minimizing work interruptions [1, 14]
· Debriefings during lunch [12]
· Sessions scheduled during department meetings to improve attendance [15]
|
· Difficult for physicians to attend training while
on duty [25]
|
‘Fit’ of intervention
|
· Teaching principles specifically targeted at adult learners [8]
· Linking concepts of training to practice/clinical experiences [9]
|
|
5. Participants
|
Participation
|
· Multidisciplinary participation [5, 8, 13, 24]
· Physician participation [13]
· Voluntary enrolment [1, 27]
|
· Lack of multidisciplinary participation [15, 16, 21]
· Lack of physician participation [9, 12, 26]
· Physicians had to be trained with an abbreviated program [4, 12]
|
1 The categories in this table are based on Wierenga et al. (2013).
2 The numbers in square brackets correspond to the numbers of the articles in Table 3.