High-fidelity simulation (HFS) is a term that healthcare educationalists are well-acquainted with, although this approach to teaching and learning may represent something quite different across the professional communities it serves. The wider literature highlights ambiguity surrounding the definition of HFS [1, 2, 3], and there is a strong argument that the term is misunderstood [4]. This is perhaps because HFS can be designed and delivered in many different ways; however, the amalgamation of a believable scenario, immersive environment(s), realistic props and Standardised Patients (SPs) (and/or mannequins) complete with moulage collectively represents its hallmark traits. Nevertheless, the universal goal of HFS is the creation of engaging and authentic learning experiences [5] which directly mirror situations faced in professional practice [6, 7].
Creating immersive environments with heightened levels of realism is time-consuming and frequently costly [8, 9], and even the highest quality exercises possess limitations [10]. HFS cannot replace ‘real-world’ practice learning, although learning attainments in the emergency environment will always be ringfenced by the fact that clinical need and patient safety remain the priority [11]. The emergency environment can therefore never provide safe spaces for learners to make mistakes without consequence, whereas HFS does have scope to provide this. When HFS is utilised effectively, important gaps between theory and practice can be bridged [12], whilst evoking strong emotional responses capable of cultivating enhanced levels of emotional preparedness and mental resilience [9].
Pedagogic stance, instructor experience and resource availability are major influences on simulation design and delivery, but at any level, ascertaining the most effective fidelity and modality is a fundamental consideration [4, 13]. Simulation can be described as a continuum ranging from low to high fidelity [25], and if we accept that low-fidelity simulation builds knowledge, mid-fidelity develops competence and high-fidelity augments performance [4], learning can be more effectively scaffolded. Careful consideration of student needs, the method of assessment and the intended end-point will help enable delivery of quality outputs which are safe, enjoyable and in keeping with the desired learning objectives [14]. However, when considering the level of fidelity, more is not necessarily better [4], because learning is not proportional to the level of realism provided [5].
There are various types of fidelity which can be included in HFS [15], and the three primary types are physical, conceptual and psychological [16]. Each should correspond with a different aspect of authenticity and represent a control measure for preventing impediments to learning [17]. Physical fidelity describes the degree to which a simulated environment reflects what a participant sees, hears, feels or smells [4, 19, 20], and conceptual fidelity describes the depth of scripting and planning of the scenario, so that it accurately reflects the way the scene would present in real-world practice [7]. Finally, psychological fidelity reflects the emotional responses and behaviours elicited from participants as if the simulation were in fact real [1, 21]. These three types of fidelity can complement and overlap, yet they can also detract from one another if not effectively balanced [18]. To help contextualise this notion, an educator could heighten the physical fidelity of a scenario by adding realistic background noises to a scene (such as a crowd of people shouting or screaming); however, this stimulus might raise an individual’s stress levels beyond their threshold and thereby hinder learning. Intuitively, we may think that the more realistic an environment, the more it will enhance learning, but without first providing the learner with the requisite skillset to navigate specific stimuli, exposure will in fact serve as an unhelpful obstacle [22].
Orchestrating satisfactory harmony between these three fidelities will mentally signpost learners into spaces where ‘real’ learning can occur. This will typically be outside their comfort zone, but not so far as to harm confidence, clinical ability or wellbeing. When fidelity is robustly considered, participants are able to suspend disbelief and accept the simulation as if it were in fact real [1, 3, 5, 8, 23, 45]. Creating exercises with sufficient depth to reach this disposition is routinely challenging, but lapses in physical fidelity are preferable to lapses in conceptual fidelity [24]. Learners need a scenario that makes sense, and if educators deliver something which feels true-to-life, learners are better able to accept the artificial aspects of the physical environment being showcased [3]. The common denominator for lapses in physical fidelity (particularly in higher education settings) will typically be that a classroom or skills lab is being utilised, which inherently leads to losses in environmental realism. However, if the scenario feels genuine, learners have the opportunity to apply knowledge and problem-solving skills through a heuristic and experimental approach. Recruiting subject matter experts to help develop and review the proposed scenario and to perform pilot tests is considered best practice for securing conceptual fidelity [24]. It is also beneficial to deliver simulation as an integrated component of the standard curriculum, rather than as an extraordinary event [26].
It has been suggested that two further types of fidelity should be considered: functional fidelity and sociological fidelity. Functional fidelity refers to the dynamic interaction between participants and the assigned task, a notion considered important when teaching technical or psychomotor skills [4, 22]. The more precise the skill or procedure being performed, the higher the level of functional fidelity required [15]. Sociological fidelity relates to multi-disciplinary simulation and corresponds directly with the level of interactive realism between different groups of learners within a simulation [27, 28]. For example, if a road traffic collision were being simulated, paramedics, fire-fighters and police officers would need to interact at the scene. For the associated subtleties of these interactions to be authentically delivered, thereby heightening the level of sociological fidelity, input from educators attached to all three professional disciplines would be required.
HFS is almost always an expensive and resource-intensive undertaking [9, 29], and the true value of an exercise cannot be evaluated without the inclusion of a robust debrief. Healthcare educators have recognised the essential role of debriefing in simulation-based learning [50] in helping transform experiences into learning through reflection [51, 52, 53, 54]. Congregating learners immediately after the conclusion of a simulation to identify “things that went well” and “things that did not go so well”, and to outline potential areas to enhance future practice, helps cement this learning process [30, 31, 32]. The depth, exercise complexity and immersive realism inherent in HFS therefore require careful navigation during the debriefing process, because the emotional responses invoked can closely resemble those experienced in real-world practice, even though no actual harm to people, wildlife, property or possessions has occurred. Whilst a variety of tools exist to debrief learners following simulation-based learning activities, little consensus exists to support the use of a specific model or approach. Some tools may also not extrapolate well to large-scale exercises, interprofessional working, or activities simultaneously spanning multiple geographical environments.
In real-world practice, ‘Hot Debriefing’ (HoD) describes a structured team-based discussion [38] following serious or unexpected incidents [33, 34]; it is typically conducted by operational team leaders immediately after an incident to support colleagues, uphold professional standards and discharge a duty of care [35]. HoD stems from a humanistic philosophy [40] and the paradigm that when humans are exposed to trauma, they instinctively desire to establish whether those around them are okay. HoD has been shown to support the psychological wellbeing of healthcare professionals and promote learning [46, 48, 62] by facilitating the sharing of situational awareness, mitigating cognitive biases and promoting reflective practices. Yet despite these benefits, the wider literature indicates HoD is infrequently undertaken in clinical practice [38].
Exposure to negative experiences within the emergency environment can seriously impact healthcare workers, giving rise to moral injury or burnout [46]. Mental health conditions such as Acute Stress Disorder (ASD) and Post-Traumatic Stress Disorder (PTSD) are at record highs within the emergency services [42, 43], and undertaking HoD may help protect service personnel against, and in some cases prevent, the development of mental health conditions. In serious emergencies or major incidents, responders will arrive at different time intervals and be assigned a range of different tasks. Each clinician is therefore unlikely to be exposed to the full spectrum of communications, decisions or proceedings, which prevents comprehension of a definitive incident timeline. This intrinsic disconnect creates emotive processing challenges as individuals attempt to unpick, rationalise and comprehend a lived experience. The human brain rarely stores lived experiences as accurate accounts; instead, a distressing incident will be reconstructed as a biased representation, tainted by personal knowledge, world views and occasionally events which never actually happened [64]. The human brain also stores memory sequences in reverse order [64]; encouraging staff to ask ‘who’, ‘what’, ‘why’, ‘where’ and ‘when’ questions during HoD may therefore mitigate this recognised phenomenon by facilitating discussions which cultivate more factually accurate incident accounts. Those not provided with the opportunity to debrief may consequently be predisposed to processing through falsification [64], experiencing recall bias [62], or developing tension and heightened anxiety [35, 36, 37, 38].
Nightmares, flashbacks, emotional outbursts, digestive disturbances, difficulty sleeping and a state of sustained restlessness and hyper-arousal are just some of the symptoms a responder can endure following exposure to traumatic incidents [42, 43]. This can be a debilitating ordeal, and the symptoms are often progressive in nature, emerging once the responder’s initial state of heightened adrenaline has subsided, the ‘threat’ has been extinguished and normal life has resumed [44]. This process typically manifests at 48–72 hours post incident, and the symptoms displayed should be considered normal reactions to an abnormal event [63]. It is important to recognise that clinical debriefing (i.e. HoD) differs from critical incident stress debriefing [36], which is a psychological intervention aimed at reducing post-traumatic stress. Whilst HoD is not a therapy, its value should not be underestimated, as it has scope to address unanswered questions at an early stage and help responders make sense of traumatic incidents [48, 49, 60]. However, the quality, duration and impact of HoD can vary significantly; heterogeneity between individual responders and the confounding variables unique to every emergency call make authenticating the reliability and validity of HoD challenging. As a result, limited evidence exists to guide developments within this domain.
The “TAKE STOCK” model for HoD is widely utilised in professions spanning the breadth of the emergency medicine world [38] (Fig. 1), is advocated by the Royal College of Emergency Medicine (RCEM) [39] and is also frequently utilised in paramedic practice.
In larger-scale emergencies or major incidents, a Cold Debrief may also be conducted, typically one month post incident. At this stage it is anticipated that emotions will be ‘cold’, and whilst a structure similar to that used in the Hot Debrief may still be utilised, the primary objectives of the Cold Debrief are to (a) evaluate the reflective practices undertaken, (b) identify the key lessons an organisation has learnt and (c) ascertain whether wider changes to future practice are required. If we accept that HFS will produce a similar disposition and exploration of the same salient points, it is plausible to contemplate extrapolating real-world debriefing strategies into HFS to further enhance training and emergency service preparedness.
Cold Debriefing (CoD) is not a concept associated with Simulation-Based Learning (SBL), likely because reviewing outcomes in such depth is not routinely required, especially following smaller classroom-based exercises utilising a low- or mid-fidelity approach. However, CoD could cultivate significant benefits for educational institutions undertaking large-scale HFS, whilst being of great value to learners and stakeholders alike. Offering this subsequent opportunity for everyone to come together and reflect provides scope to instigate teaching and learning developments and would be advantageous for developing joint-working approaches. As part of a strategy for improvement in SBL, a new CoD tool was fashioned by the author, titled STOCK TAKE (Fig. 2). The tool incorporates elements of TAKE STOCK, but is loaded with alternative questions to instigate evaluation of ‘what do we actually have’ at this point in time. The “STOCK” aspect provides a structure to guide reflective practices, and the “TAKE” element facilitates opportunities for educators/university leaders to appraise the exercise’s key successes, areas for improvement, gained assets, future opportunities and sustainability.
This newly fashioned CoD tool (Fig. 2) supports the concepts underpinning Kolb’s ‘Experiential Learning Cycle’ [75], perhaps the most influential and widely cited model in the history of reflective practice [76]. Despite this, learners frequently struggle to advance their clinical practice because they are left with unanswered questions pertaining to their “concrete experience”. As part of a strategy for improvement, ‘TAKE STOCK’, a Q&A session and ‘STOCK TAKE’ were amalgamated to create a novel three-step approach to HFS debriefing. The approach consists of (1) a Hot Debrief using TAKE STOCK immediately post HFS, (2) a Q&A session on completion of the module to address unanswered clinical questions, assess knowledge retention and evaluate student experience, and (3) a Cold Debrief using STOCK TAKE 2–4 weeks later. A proof-of-concept study was then undertaken which aimed to (a) evaluate the overall effectiveness of this newly fashioned three-step approach to HFS debriefing, (b) assess knowledge gains and experience and (c) develop future teaching and learning practices.