In our study, we compared three interactive examination designs, all based on active learning pedagogy, in terms of nursing students' engagement and preparedness, their learning achievement, and instructional aspects. The results indicate that the campus-based design with an active learning classroom was the design that students appreciated most. In particular, that design involved using a student response system during the QI lecture and an online qualifying quiz to prepare students sufficiently for the examination. A similar design conducted online, by contrast, received the lowest overall rating from students. Nevertheless, those students, especially ones with a fear of public speaking, largely praised the online design for allowing them to participate actively by means of written communication (i.e., digital chat).
Most of the significant differences, largely concerning students' experiences with actively engaging during the examination, emerged between Cohorts 2 and 3. Differences between the cohorts regarding their experiences with learning achievement, however, were less pronounced. Those findings suggest that the interactive campus-based examination particularly contributed to facilitating engagement, which upholds a fundamental aspect of active learning, namely the excitement emphasized since the beginning of the concept's development (3).
Students’ preparation and fair assessment
Of all participants, students in Cohort 2 seemed to be the most prepared and to have the best group-work dynamics, which their written comments corroborate. In that cohort and Cohort 3, using a qualifying online quiz seemed appropriate, which supports past findings that digital learning methods (e.g., quizzes and online simulations) are valuable for stimulating nursing students' reflections and can promote self-correction (36). In online learning environments, it is also important to consider both individual and social learning achievements (37). By using individual quizzes along with group work during the examination, we could assess students' individual and collaborative performance.
To make assessments fair, it was important to have two examiners compare and discuss students' performance. For that purpose, having 20 students divided into four or five groups allowed intimate lecturer–student interaction and, in turn, discouraged a documented problematic behavior in group work: freeloading (14). Compared with written examinations, the interactive assessments also allowed us to ask students follow-up questions that could reveal fundamental gaps in their understanding or else clarify and expand on it, which facilitated fair assessment of the depth and breadth of their knowledge. Thus, in line with previous studies, we believe that the design for Cohort 2, which integrated aspects of digital-based learning (e.g., digital quizzes) with face-to-face activities, cultivated an optimal learning style that sufficiently prepared students for the examination (36).
A particularly positive component, the close lecturer–student interaction during the interactive examination, enabled teachers to discern whether, and if so how, the examination captured the aspects of students' learning achievement that they wanted to assess. That direct feedback, together with students' evaluations, indicated what needed to be revised in the course's structure. The examination thus also functioned as a pedagogical evaluation, one that would be difficult to undertake with a more summative format (e.g., a written examination). This is an advantage of formative assessment: because content saturation is a well-known concern in nursing education, it is important that nursing programs implement educational methods with the potential to examine students and evaluate courses effectively (15).
Despite significant differences in students' experiences with the course layouts, the proportion of students who passed the examination (approximately 10% in each cohort) did not differ significantly, even though students in Cohorts 2 and 3 had to pass a digital qualifying quiz before attending the examination. One explanation may be that although the examination's contents and objectives were identical across cohorts, the conditions differed (e.g., Cohorts 2 and 3 had one hour more than Cohort 1).
Students’ involvement and active participation
Many students in Cohort 3 appreciated the digital chat function, which allowed them to actively participate without speaking aloud to the entire group. Research has shown that in online education, communication strategies should be built into the design and that educators cannot assume the same conditions online as in face-to-face settings; when communication strategies are poorly defined or inappropriately applied, both synchronous and asynchronous learning are restricted (16). Some of the students' comments reflected that difference, either by applauding the function for allowing written feedback and comments or by criticizing such parallel communication as distracting.
Research has also revealed that exposure to virtual environments can boost students' confidence and enable them to face audiences of any size (38), which especially benefits students with a fear of public speaking. Such confidence is crucial for nursing students, who generally need to practice taking active roles in their profession; engaging in QI work is an important skill for enhancing the quality of care and patient safety (39). Nurses should also be able to safeguard the interests of vulnerable patients during care planning processes and in meetings with interprofessional teams (40, 41). Added to that, registered nurses, who are typically expected to be clinical leaders for nursing aides and assistants, need to be confident and professional in their interpersonal communication, which will most likely occur face-to-face (42). In that light, while written communication enabled certain students to achieve the learning objective of active participation during the examination, the chat function also allowed students, especially shy ones, to engage in spoken interaction, since lecturers could address their written comments and ask them to elaborate.
Overall, the students' evaluations indicate that the examination's active focus on the process, commonly expressed as "doing" in the comments, contributed to their learning. It is precisely when students engage in activities (e.g., designing a project and presenting it) that they gain opportunities for higher-order thinking and that deep learning and retention are most likely to occur (43). We also believe that such doing was reflected in both the product (i.e., the QI project) and the process of working in groups, the latter of which students characterized as motivating, arguably because we instructed them to ground their projects in well-defined clinical contexts. Group work is often most motivating when students perceive it to have a meaningful, real-world context and/or implications (14). In that light, students should base their QI projects on problems experienced during clinical placements or clinical work. As a result, not only the content but also the processes underlying the assessment of group-work skills (e.g., collaboration and negotiation) and other employability skills can be perceived as authentic (44).
Strengths and limitations
Among our study's multiple strengths and limitations, a chief strength was the high response rates in Cohorts 1 and 2, most likely because those students used a paper-based questionnaire. As a result, we had groups of participants that were representative in terms of their age and gender distribution. In Cohort 3, in which the questionnaire was delivered online, the response rate was significantly lower, which is consistent with findings that online questionnaires tend to have lower response rates than their paper-based versions (45). The results from Cohort 3 are thus prone to selection bias, which limits their generalizability and comparability.
Another limitation was that the evaluative questionnaire was not psychometrically tested beyond face validity. However, we used the same questionnaire, with the same questions and layout, in all cohorts; only the mode of delivery for Cohort 3 was online. Those conditions likely enabled a reliable comparison of the cohorts.
At the same time, following the GRAMMS guidelines was a strength of our study. It allowed us not only to report our research adequately, so that readers can critically appraise the study, but also to aid systematic reviewers in identifying key methodological issues in the literature.
Last, a potential limitation of mixed-methods designs is the possibility of contradictory findings, for example, when different data do not fully support each other (19). In our study, however, no contradictions in the data were apparent; in fact, the two data sources (i.e., numeric and free-text responses) complemented each other well. For example, regarding experiences with preparedness, students in Cohort 3 had the lowest numerical scores even though they, unlike students in Cohort 1, had taken the qualifying digital quiz. The written comments revealed that many experiences of being poorly prepared related to the digital mode of delivery, not to the content that the quiz had targeted. The written comments thus supported a nuanced understanding of the differences in numerical ratings.
Implications and future research
Our results generally show that interactive examinations are feasible and appreciated by nursing students in their final year of study. Above all, to conduct such examinations successfully, educators need to focus on students’ preparation. To that end, they should consider using the strategies described in this article, which prioritized incorporating different digital technologies as resources and administering digital quizzes to complement students’ group work.
Several implications can be drawn from the experiences captured by our study. First, using individual response technology during lectures is one way to engage students (29). We used Mentimeter, but several other brands and products (e.g., clickers) are also available (46). Research has shown that such approaches allow students to provide input without fear of public embarrassment or of being sidelined by more outspoken students (46, 47). That type of active participation and interaction is particularly important in preparing for interactive examinations, which often require students to participate actively in order to pass.
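To illustrate the anonymity that makes such tools effective, the following minimal sketch shows the tallying logic behind an audience response system. It is a hypothetical stand-in for a product such as Mentimeter, not its actual implementation or API; responses are collected without any student identifier.

```python
from collections import Counter

def tally_anonymous_responses(responses):
    """Count votes per option; no student identifiers are stored."""
    return Counter(responses)

# Hypothetical live poll during a QI lecture: each submission is just
# the chosen option, so no answer can be traced back to a student.
options = ["A", "B", "C", "D"]
incoming = ["B", "A", "B", "D", "B", "C"]  # anonymous submissions
counts = tally_anonymous_responses(incoming)
for option in options:
    print(f"{option}: {counts.get(option, 0)} votes")
```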
Second, the online quiz contained questions that clearly reflected the course objectives, all aimed at guiding students in navigating the learning objectives. Students described that guidance as having supported their preparation, and other research has indeed shown that students can consult quizzes to direct their independent learning (48). Beyond that, because the quiz consisted mostly of multiple-choice questions and was self-graded, it was time-efficient for us as lecturers.
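As an illustration of why self-grading saves lecturer time, here is a minimal sketch of an auto-graded multiple-choice quiz. The answer key, question labels, and qualifying threshold are hypothetical, not those of the course described here.

```python
# Hypothetical answer key and qualifying threshold.
ANSWER_KEY = {"q1": "b", "q2": "d", "q3": "a"}
PASS_THRESHOLD = 0.67

def grade(submission):
    """Return (score, qualified) for one student's answers."""
    correct = sum(submission.get(q) == a for q, a in ANSWER_KEY.items())
    score = correct / len(ANSWER_KEY)
    return score, score >= PASS_THRESHOLD

# Usage: grading happens instantly, with no lecturer involvement.
score, qualified = grade({"q1": "b", "q2": "c", "q3": "a"})
print(f"Score: {score:.0%}, qualified to attend the examination: {qualified}")
```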
Third, educators have a range of videoconferencing platforms to choose from, including Zoom, Skype, and Microsoft Teams. When using such platforms, it is important to consider so-called "netiquette" by, for example, establishing clear guidelines for behavior and providing information about technical requirements. In our study, the links to meetings on Zoom were available only via the student learning platform, which only enrolled students can access. We also enabled the "Waiting room" function to admit students one at a time, which allowed us to confirm their identities before they entered the meeting. On many videoconferencing platforms, it is also possible to create public links but restrict access with passwords distributed internally. Those security measures are important given research revealing students' concerns with the integrity of online examinations (49).
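For educators who script such setups, the sketch below shows how a scheduled Zoom meeting with the "Waiting room" enabled and an internally distributed passcode might be created through Zoom's v2 REST API. The token, topic, and passcode are placeholders, and the field names reflect our reading of the API documentation rather than the exact configuration used in our study; verify them against Zoom's current documentation before use.

```python
import requests

ZOOM_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # placeholder credential

# Meeting payload: the passcode restricts entry, and the waiting room
# lets the examiner admit students one at a time to confirm identities.
meeting = {
    "topic": "Interactive QI examination",  # placeholder topic
    "type": 2,                              # 2 = scheduled meeting
    "password": "internal-only",            # passcode shared internally
    "settings": {
        "waiting_room": True,
        "join_before_host": False,
    },
}

resp = requests.post(
    "https://api.zoom.us/v2/users/me/meetings",
    headers={"Authorization": f"Bearer {ZOOM_TOKEN}"},
    json=meeting,
)
resp.raise_for_status()

# Share the join URL only via the closed student learning platform.
print("Join URL:", resp.json()["join_url"])
```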
Fourth, significantly more students in Cohort 3 had watched the online QI lecture than students in Cohorts 1 and 2 had attended the campus-based lectures. That result suggests that implementing a blended learning model (e.g., using both campus- and online-based educational activities in courses) might facilitate student attendance and engagement and, in turn, better support students' preparation before examinations.
In light of those implications, future studies should evaluate educators' experiences with interactive examinations conducted online. Research has shown that educators face several challenges in transitioning from in-class lessons to online ones, including a lack of technological support and the need for professional development (50). A recent review has additionally indicated that few studies concerning digital technologies in higher education have evaluated interpersonal communication and collaborative learning from students' perspectives (51). Thus, as we intended in our study, future research should also consider those experiential, interpersonal aspects of online education.