We present the development of the Healthcare Team Observation for Patient Safety (HTOPS) platform and process chronologically over three stages; the timeline can be seen in Fig. 1. The first stage combines the first two cycles of learning as this was an exploratory phase. The work was refined over these three years, and we reflect on key learning points that fed into the development and refinement of the system.
Stages of the Observation Tool Development
Stage One (2016–2017)
The adaptation of the aviation observation process to identify patient safety concerns started with a Special Study Module (SSM) in 2016 and 2017 for final year medical students (n = 11 in each cohort; total n = 22). We started with aviation terminology, namely ‘Threats’ observed in the working environment and ‘Errors’, i.e. perceived noncompliance with rules/policy/guidelines. In discussion with a clinical team (a senior nurse and consultants from a local hospital), a set of possible healthcare Threats and Errors was agreed through a brainstorming exercise. These included possible ‘Threats’ relating to human factors, technology and building/environment. The ‘Error’ list included noncompliance with rules relating to prescribing, ordering investigations and their interpretation, patient and practitioner communication, etc. We gave a code number to each possible Threat and Error. The students were asked to complete observations in a range of clinical areas: two operating theatres (orthopaedic and urology), outpatient fracture clinics, antenatal wards and clinics, and medical wards. Students were given training on patient safety, observation techniques and the coding system, with a template listing the Threat and Error codes to record what they observed during a session (morning or afternoon) (Table 1). The students spent six days observing holistic clinical practice, moving between their allocated ward, theatre or clinic.
Evaluation Outcomes
In 2016, students recorded a large number of observations and we analysed a subset of the data (21 scripts) to check the process (Table 2). We found that students confused Threat and Error in 13 instances. We refined the paper recording sheet to enable students to write more narrative to justify their findings. In 2017, we analysed all the outcomes from the student observations (n = 373 observations); 43 were illegible and were withdrawn, leaving 330 for full analysis of observed care practice on wards, clinics and theatres, of which 22 were errors (Table 3). Students continued to have difficulty in differentiating between Threats and Errors amid the complexity of everyday clinical practice, and found the codes cumbersome. However, they were able to step back and observe care delivery in real time, noticing a plethora of concerns relating both to sloppy practice (e.g. hand hygiene) and to systems issues (e.g. those caused by poor geography). All students reported they had advanced their understanding of patient safety. The observations revealed that the students’ lack of familiarity with the setting helped them identify features that seemed inappropriate, whereas the practitioners around them had normalised these practices. Some reported errors were incorrect and misleading, reflecting student unfamiliarity with speciality-specific safe practice.
The evaluation in 2016 and 2017 led us to the conclusion that the categories of Threat and Error were too simplistic to capture the complexity of the healthcare environment and care delivery (Table 4).
We revised our categorisation framework to reflect clusters of themes identified in our analysis of the real-time student observations. These were termed Tags, as follows (Table 5):
Tag 1) Human Influences: The interactions amongst humans; ‘what I do when I am with others’ and other aspects of healthcare delivery. This includes the way in which one acts or conducts oneself professionally with patients and staff and individuals’ physical actions performed incorrectly or not completed.
Tag 2) Work Environment: Relating to the physical layout/style and content within the building.
Tag 3) Systems: Things or parts that function together; the way humans interact with the environment, including the staffing level required to manage the clinical area adequately.
To help identify the level of concern, each Tag was awarded a weighted Scale from 1 (a little concern) to 5 (a great deal). In addition, the Tag could relate to an individual (A = Alone) or to practitioners working together in a team (T = Team). At this stage we left these senior students to allocate the weight of concern following their patient safety training, which explored never events and serious incidents.
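The recording scheme described above can be sketched as a simple data model. This is a hypothetical illustration only; the class and field names are our own and are not part of the HTOPS system:

```python
from dataclasses import dataclass
from enum import Enum

class Tag(Enum):
    """The three observation categories described in Stage One."""
    HUMAN_INFLUENCES = 1
    WORK_ENVIRONMENT = 2
    SYSTEMS = 3

class Actor(Enum):
    """Whether the Tag relates to an individual or a team."""
    ALONE = "A"   # one practitioner observed
    TEAM = "T"    # practitioners working together

@dataclass
class Observation:
    tag: Tag
    scale: int        # weighted Scale: 1 (a little concern) to 5 (a great deal)
    actor: Actor
    description: str  # free-text narrative justifying the finding

    def __post_init__(self) -> None:
        # Reject scores outside the 1-5 weighted Scale
        if not 1 <= self.scale <= 5:
            raise ValueError("scale must be between 1 and 5")
```

One design point the sketch makes concrete is that the weighted Scale is validated at the point of recording, so an out-of-range score cannot enter the dataset.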
Stage Two (2018–2019)
In 2018, seven students used the revised paper recording system and worked in pairs in clinical areas. Of the 638 recorded Tags, 123 duplicates (students recording the same observation) were removed. At this time a new electronic database for recording the data was completed and the remaining 515 safety concerns were transferred to the electronic system. These recordings contained 170 scores rated as ‘1’ (low concern); 206 as ‘2’; 107 as ‘3’; 27 as ‘4’; and 5 rated as serious, ‘5’ (a great deal). The 5 serious Tags were all Human Influences (Tag 1):
- Complacency – Action: anaesthetic drug not labelled during a spinal epidural
- Confidentiality: computer system open with patient results for everyone to see
- Action: sharps not disposed of correctly during a procedure
- Team Functioning: incorrect documentation on the surgical whiteboard of needles used during surgery
- Team Functioning: change in surgical list led to preparation in theatre for the wrong patient
The concerns from this analysis revealed that it was hard for students to rate the severity of patient safety concerns on a 5-point scale. For this reason it was decided to reduce the scale to two points. The steering group reflected on the student feedback and realised that students were also verbally reporting seeing positive, excellent behaviour, which the recording system did not allow them to record. It was therefore agreed to capture everything students were seeing, including observations of good practice, resulting in a weighted scale with two negative (−1 and −2) and two positive (+1 and +2) points for the new app (Table 6 – App design).
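The revised scale can be expressed as a small validation rule. Again, this is a sketch under our own assumptions, not the app’s actual code; note in particular that the scale has no neutral zero point:

```python
# Allowed weights in the revised scale: two negative and two positive, no zero.
VALID_WEIGHTS = {-2, -1, +1, +2}

def classify(weight: int) -> str:
    """Map a recorded weight to its polarity: concern or good practice."""
    if weight not in VALID_WEIGHTS:
        raise ValueError(f"weight must be one of {sorted(VALID_WEIGHTS)}")
    return "concern" if weight < 0 else "good practice"
```

Excluding zero forces the observer to commit to a judgement either way, which mirrors the design decision to record good practice explicitly rather than only its absence.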
Sub-group 2018/2019 - new App
A total of six final year medical students worked with the new app using iPads, again observing in a range of clinical areas (theatres, clinics and wards) in an acute city hospital. Two students used and tested the app in December 2018 and, working individually, made 28 observations in two half-days. The remaining four final year medical students were trained to use the new app in June 2019. They each spent three half-days and made a total of 72 recordings. Together these totalled 100 observations, of which 68 were negative and 32 were positive, highlighting good practice. The majority again related to Human Influences. The app presents these outcomes in a variety of ways (Fig. 2).
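A summary of the kind the app might present can be sketched as a simple tally by Tag and polarity. This is an illustrative aggregation of hypothetical records, not the app’s actual reporting logic:

```python
from collections import Counter

def summarise(observations):
    """Tally (tag, polarity) pairs, as a dashboard summary might.

    `observations` is an iterable of (tag_name, weight) pairs, where weight
    is a value on the revised scale (-2, -1, +1, +2).
    """
    return Counter(
        (tag, "positive" if weight > 0 else "negative")
        for tag, weight in observations
    )

# Hypothetical sample records for illustration only
sample = [
    ("Human Influences", -1),
    ("Human Influences", +2),
    ("Systems", -2),
]
```

Calling `summarise(sample)` yields counts per category and polarity, from which breakdowns such as “68 negative, 32 positive” can be read off directly.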
Evaluation of Stakeholder Perspectives
Stakeholder perceptions were gained from five final year medical student ‘observers’ and 11 clinical staff members ‘observed’. These were doctors of various grades, scrub nurses, Advanced Nurse Practitioners and nurse ward managers. The data are presented as themes and extracts (Table 7).
The value of the observation method for learning was confirmed by both the students who were observing practice and the observed practitioners. Front-line practitioners perceived the value of the recordings to enhance individual and team learning in clinical practice. For the clinical teams, the work was perceived as a supplement to existing data, such as safer surgery theatre checklists and clinical audits, because it could record a wider range of habitual practices and take account of environmental factors. It was felt that the observation process captured both good and poor practice which teams required for implementing appropriate improvements. The observed practitioners referred to ‘a climate of negativity’ around patient safety and praised the data for allowing the recording of positive clinical practice to provide both balance and an accurate representation of everyday practice. This was something they felt was lost with other recording practices which focused solely on poor practice. Shared learning across clinical areas was discussed as advantageous, particularly the ability to learn from areas showing excellence (Table 7).
Senior medical students perceived this as a good method for student learning on patient safety, as it forced them to see the totality of practice. As observers making the recordings, they recognised that this process helped them to reflect on how to take an active role within a clinical team. Several assistantship students who had qualified by the time they were interviewed described how their observations had fed into their plans for improving their practice.
Acceptability and impact of the observation process was discussed by front-line staff and students. The majority of practitioners were happy to be observed and confirmed that being observed was acceptable. Students felt equally comfortable observing staff of all grades. There were some concerns, mainly from non-medical practitioners, who spoke about feeling additional pressure and being suspicious of being watched (Table 7). At the start of the observations, practitioners being observed displayed a kind of Hawthorne effect: their practice followed protocols and their behaviour was perceived as attentive, showing very good practice. However, all staff quickly returned to practising unaware of being observed because they were busy. The medical students sensed the tensions and described adopting a friendly persona to gain practitioners’ confidence and assert their position as helpful observers. Some students eased tensions by reminding observed clinical staff that they were looking not only at what staff did as individuals but also at the systems and the environment where they were working.
Observed staff commented on the way the students introduced the work prior to commencing their observations. Some staff had not been informed, or had received only limited information, that the observations were taking place as a patient safety exercise, and therefore felt threatened. In these situations students had to re-explain the project’s purpose and the anonymity of the process to ease concerns. Some students defused tensions by offering informal feedback afterwards with everyone on the observed team. This, too, appeared to allay concerns. The acceptability of the observations greatly increased with clarity about the reasons behind the process.
The anonymity of observations was a strong theme. All staff were aware that observations were anonymised and valued this, but some were not convinced this was the way forward. In these instances practitioners wanted the observer to draw attention to malpractice, either by overstepping the line of ‘observer’ and intervening in the situation in real time, or by having permission to report the action(s) after the event. Such an approach would, of course, mean that observations had the potential to have negative consequences for individual staff members, rather than being used to identify higher-level trends across departments. It was also suggested that observations featuring good practice might support doctors’ training portfolios by providing specific, objective examples of their work.
The use of the observation data post-collection revealed that staff wished to receive information from the observations as soon as possible after data collection, both for immediate learning and because of shift patterns and staff rotation. Those requesting personal feedback on their individual performance also asked for this immediately after the observation session. Daily briefings conducted by teams were signalled as a place for rapid feedback to be shared; staff felt that any required changes were then more likely to be implemented. This was contrasted with the delayed distribution of Trust information in the form of emails and bulletins. Monthly team meetings were mentioned as a means of reinforcing information given during daily briefings, as well as the appropriate environment for reflecting on data trends over time.
The mechanism for recording observations revealed a strong preference for the use of the electronic recording device. Students and staff who had experienced both paper and app recordings commented on their preferences. Visually, being seen with a clipboard was described as off-putting. In contrast, observers and observed staff overwhelmingly favoured the electronic device as these were now familiar to patients and clinicians within clinical environments and, thus, both acceptable and inconspicuous.