Between August 2016 and August 2017, we observed three simulation sessions per program, for a total of 21 sessions (30 hours). Between March 2019 and September 2019, we interviewed two facilitators and/or program developers from each program, except Program E, where we interviewed only one program developer because the program had shut down in the period between observations and interviews. Altogether, we interviewed 13 program developers and facilitators. All programs were held in situ and structured their simulations around patient care scenarios depicting clinical emergencies, with the aim of allowing team members to practice patient management and interprofessional teamwork in the simulation setting.
Application of Principles
Table 3 summarizes our framework matrix, demonstrating the application of IPSE principles (codes) across IPSE programs (cases). We found that all 12 principles we identified were applicable in the context of IPSE and were endorsed by interviewees. However, we noted considerable variation in the application of the 12 principles across the seven programs: some principles were applied by most programs (e.g., “active learning”, “psychological safety”, “feedback during debriefing”), whereas others were rarely applied (e.g., “interprofessional competency-based assessment”, “repeated and distributed practice”). We also noted that some programs applied most principles (programs A, B, C, E), whereas others applied fewer (programs D, F, G). None of the programs fully applied all principles. Instead, they often applied principles partially, that is, consistently but not to their full extent. For example, “institutional support” was partially applied: all programs were recognized by their institution and participants’ attendance was encouraged, but the programs often lacked sufficient resources.
Table 3. Application Matrix of Principles of Interprofessional Simulation-Based Education at Seven Programs
Principles: 1 = Equitable Distribution; 2 = Active Learning; 3 = Interprofessional Competency-Based Learning Objectives; 4 = Interprofessional Competency-Based Assessment; 5 = Psychological Safety; 6 = Repeated and Distributed Practice; 7 = Attention to Differences and Hierarchy; 8 = Feedback during Debriefing; 9 = Sociological Fidelity; 10 = Program Evaluation; 11 = Train Facilitators; 12 = Institutional Support

| Program | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| A | Partially | Fully | Partially | Not | Partially | Fully | Partially | Fully | Partially | Partially | Fully | Partially |
| B | Fully | Fully | Fully | Fully | Partially | Fully | Partially | Fully | Fully | Partially | Not | Partially |
| C | Partially | Fully | Fully | Not | Fully | Not | Fully | Fully | Fully | Partially | Fully | Partially |
| D | Partially | Partially | Fully | Not | Fully | Not | Fully | Fully | Partially | Not | Not | Partially |
| E* | Fully | Partially | Fully | Not | Fully | Not | Partially | Fully | Fully | Partially | Partially | Partially |
| F | Not | Fully | Not | Not | Partially | Fully | Not | Fully | Partially | Fully | Partially | Partially |
| G | Partially | Partially | Partially | Not | Partially | Not | Not | Partially | Fully | Fully | Partially | Partially |

Cell values: Fully = Fully Applied; Partially = Partially Applied; Not = Not Applied.
*Incomplete data with only 1 interview
Interviewees often emphasized the distinction between what they considered an ideal IPSE program, in which the principles would be fully applied, and what they were able to do in their own programs. One interviewee compared how facilitators in the program would implement “equitable distribution” in an ideal situation with what tended to happen during simulation sessions:
“So, in the ideal world, there’s co-facilitation between the physician and the nurse. I think every single session that I’ve been at, the physician typically takes, opens up the conversation. But, ideally, the nurse facilitators actually take on a big piece.” (Program A, Interview 1)
For programs B and F, we identified a lack of congruence between observation and interview data. In our observations, some principles were not fully applied (e.g., “psychological safety” for program B; and “equitable distribution”, “interprofessional competency-based learning objectives”, “attention to differences and hierarchy”, and “sociological fidelity” for program F). However, interviewees reported aiming to apply these principles in their programs.
Affordances and Barriers to IPSE
Data from interviews with program facilitators and developers helped us understand some of the affordances that supported sustainable IPSE programs, as well as some of the barriers encountered. These data also helped us understand why some principles were more easily applicable than others in some programs. We describe these affordances and barriers below.
Interprofessional “Buy-in”: Participant, Facilitator, and Institutional
Interviewees emphasized that getting people at all levels of training and from all professional backgrounds to believe in the value of the IPSE sessions was an important factor in the success of programs. This buy-in needed to come from participants, facilitators, and institutions. Buy-in had the potential to grow over time as people’s experiences and interactions with the IPSE programs increased. Institutional buy-in led to the allotment of more resources, including money, space, and time, thus increasing “institutional support”.
“Once [participants are] there…they're very engaged…that wasn't always the case. When we first started this out they would all sort of stand against the wall and be like, ‘I'm not doing anything, this is scary’ … But, that has changed. People have really started to realize, ‘this is important and I can learn something here and I want to participate.’” (Program A, Interview 1)
Interviewees discussed how choosing the right type and level of fidelity, or realism, was important for achieving participant buy-in. In particular, they noted that “sociological fidelity” – the extent to which the simulation mimics how people interact in real life (19) – led to increased buy-in from participants. Interviewees described attempting to achieve sociological fidelity by having everyone participate in their usual role and by developing scenarios that resembled the real patient cases participants would encounter in clinical care. Interviewees also highlighted the importance of equipment fidelity in achieving buy-in, especially from participants. They noted that having simulation equipment that looks, feels, and responds the way it would in the clinical setting was important.
“If the monitor doesn't look realistic people really lose their ability to understand what's going on. So we work very hard to make sure that our monitors are in place of the actual patient monitor that would be there, that they look, in terms of color and sound, as realistic as possible, that the equipment that they use is all real equipment, kept in the right location…that adds to the learner's perspective of realism in ways that are more meaningful.” (Program G, Interview 1)
Resources: Money, Time and Space
Interviewees cited resources such as money, time and physical space as important to the success of IPSE programs. In addition to increasing buy-in, these resources enabled the application of principles such as “program evaluation”, “interprofessional competency-based assessment,” and “repeated and distributed practice.”
“I think to do it more frequently would probably require additional support. For now, I think we are able to maintain what we have… I think there is interest from everyone, and most people who come say that we should do it more frequently. I think the challenges are to try to schedule a time that's convenient for everyone.” (Program B, Interview 1)
“That funding kind of waxes and wanes, so right now we don't have that much funding… So right now we're in a little bit of a coasting phase where we're just keeping the sims going, but we're not really trying to improve the program or make any curricular changes. But we were able to do quite a bit of curriculum development and quality improvement within the simulation program. Previously over the last like two to three years where we had a half time patient safety coordinator who was really instrumental in that.” (Program C, Interview 1)
As these quotes show, more resources potentially enabled facilitators to run more sessions, thus allowing participants to attend more simulation sessions over time (“repeated and distributed practice”). More resources also enabled programs such as Program C to hire someone to evaluate teams during simulated sessions (“interprofessional competency-based assessment”) and to analyze the program’s impact (“program evaluation”) with the goal of improving its usefulness. Yet, we found that most programs operated with limited resources and sometimes depended solely on the commitment of facilitators.
“It’s one of the things people find a lot of value in, at least by word of mouth. It was something that we definitely wanted to continue. The residents […], I think they find usefulness in it and we did too so we kept it on the schedule because it's pretty important. […] If we weren't to do it [run the program], I don't know that anyone would put up much of a fuss but it's something that people find a lot of value in. But we do have a lot of ownership of it to make it actually happen.” (Program D, Interview 2)
Lack of Outcome Measures
In addition to limited resources, interviewees mentioned that their limited knowledge of instruments to assess team performance in simulation acted as a barrier to applying the principle of “interprofessional competency-based assessment.”
“I don't think that we've had a good tool. And, then it's also simply bandwidth; who's going to do it, how do you record it, what do you do with the information, what's really the purpose of the assessment? … I haven't come up with a non-labor-intensive way to do it and we're already so crunched right, for time... So, how do we fit it all in and then, what's the cost benefit analysis of doing an assessment.” (Program A, Interview 1)
Furthermore, interviewees viewed the small-scale nature of their programs as a barrier to undertaking “program evaluation.” All the programs involved in our study focused on a small number of professions in one hospital department, which limited the number of sessions and participants per year.
“I don't think there is enough N for that [evaluating the program], and then, additionally, to attribute it to a single course versus other medical center initiatives, I think, would be also difficult. I suppose you could look at codes that happened prior to 2010 and codes that have happened subsequently, after that, but we don't have any of that information currently.” (Program B, Interview 1)
“[F]rom the standpoint of having the desired effect and again, our numbers aren't big enough to measure impact but there's other literature out there like from larger systems like in Massachusetts, they pulled three or four different birthing hospitals that started simulation programs and it did show an improvement in outcomes.” (Program C, Interview 1)
Power Discrepancies
The last major factor that we identified as influencing the application of principles in IPSE programs was power. Interviewees described how multiple forms of power discrepancies influenced the programs and sometimes prevented the full application of the principles “attention to differences and hierarchy,” “equitable distribution,” and “psychological safety”.
“I've never heard a nurse participant speak up and give feedback about the sim when a trauma attending is there or when trauma nursing leadership is there. The nurses are very quiet. I don't hear interns asking questions. It seems a lot more constrained when there are high level administrators there.” (Program G, Interview 2)
Experience also created power discrepancies among simulation participants, preventing participants from giving feedback to people whom they considered to have more experience:
“And so I think it's definitely awkward to critique one of your colleagues that's an experienced ICU nurse as well. And most of them have more ICU experience than I do, or a couple of them do.” (Program B, Interview 2)
Interviewees also noted that power discrepancies between professional groups influenced interactions between facilitators and participants, as well as among participants. Some programs sought to mitigate this by involving facilitators from multiple professions at each session (e.g., Program A had two physician and two nurse facilitators at each session), applying the principle of “equitable distribution” to both participants and facilitators. Other programs chose to use facilitators from only one professional group (e.g., Program C was facilitated by midwives).
“Physicians are not, in my experience, super well-suited to facilitate simulations because there's an already imposed hierarchy that comes into play when a physician is running the code, and I feel like it does, to me, dampen that group participation when there's a physician” (Program C, Interview 1)