Study design and subjects
This was a prospective observational study involving two cohorts of fourth-year medical students (the 2016–17 and 2017–18 intakes) at the University of Geneva's Faculty of Medicine, Switzerland, participating in eight-week pediatric clerkship rotations of about 30 students each.
According to a 2009 decision by the cantonal Ethics Committee of Geneva and the Teaching Committee Office of the University of Geneva's Faculty of Medicine, research projects in medical education that deal with existing anonymized data and are designed to evaluate the quality of undergraduate or postgraduate educational programs are exempt from full review by the cantonal Ethics Committee of Geneva. All data were fully anonymized after merging and consolidation, and all files and records were stored on local institutional data servers.
Learning activities
Traditional learning
Medical students participate in several supervised clinical activities during their pediatrics clerkship and attend a standardized program of traditional case-based seminars held for each clerkship rotation.
Online e-learning course format
Geneva Children's Hospital has set up an educational website that allows medical students to achieve various learning objectives in pediatrics through individual learning using case-based scenarios. These deal with the most common situations faced in general pediatrics, neonatology, pediatric orthopedics, and pediatric surgery. Most of the learning objectives are covered exclusively in e-learning modules available on the educational website, which is available 24/7.
E-learning modules use the Moodle™ platform (Moodle Pty Ltd, West Perth WA 6872, Australia). Each learning activity is structured as follows:
- An introduction including statements of the specific learning objectives;
- A topic covered using a step-by-step approach, including one or several case-based vignettes, with question–answer–feedback sections alternating with theoretical learning content;
- A quiz section enabling students to review the learning content. Key features of diagnosis and management are tested using multiple-choice questions; several attempts per quiz are allowed, and students can review their attempts throughout their clerkship. Quiz settings also let students see the correct answers and their scores, and receive feedback immediately after each attempt.
Pediatric exam evaluation method
Students take a written exam once, after they have completed their clerkship: students who did their clerkship between January and April (two groups) take the May exam; those who did their clerkship between June and December (three groups) take the January exam. The latter students thus have a more extensive overall clinical and theoretical background since they have completed clerkships in other fields in the meantime.
Exams are online and use either the CAMPUS (for computers) or tEXAM (for tablets) software provided by the Umbrella Consortium for Assessment Networks (UCAN, Heidelberg, Germany). Exam content is designed to test clinical reasoning skills and theoretical knowledge. Students must deal with the step-by-step management of several clinical situations presenting different common pediatric complaints. Supplemental patient information, given sequentially, allows them to move towards case resolution.[16]
Experimental questions were identical in both the May and January exam sessions in order to have a reasonable means of comparing the students. A retrospective study showed that our students were more likely to get higher scores in exams taken in the second session (January) than in the first (May). This difference may be explained by the fact that medical students taking the exam in May have less overall clinical experience.[17]
Comparing e-learning and traditional seminars
To compare the e-learning and lecture formats, two e-learning modules (on constipation and gastroesophageal reflux) were taught exclusively using traditional seminars (the 2016–17 academic intake) and exclusively using e-learning modules (the 2017–18 academic intake). Seminars were given by the authors of the e-learning modules so as to maximize similarities between the learning content in the seminars and e-learning modules. Students from the 2016–17 academic intake could not access any parts of the constipation and gastroesophageal reflux e-learning modules (introduction section, case-based vignette, or quiz) (Figure 1). We named the questions relating to the two subjects the teaching format experiment questions (TFEQ), and we analyzed and compared the TFEQ scores for these two academic intake groups.
Factors influencing scores in subjects taught using e-learning
Questions on subjects taught exclusively using e-learning (seven modules; Figure 1) were named the e-learning experiment questions (EEQs). The following factors were considered: exam scores (excluding TFEQs and EEQs), the number of quizzes taken, quiz scores, sex, and the cohort effect.
Access to every e-learning module made by individual students using their institutional login was documented. We considered the number of attempts each individual made with each quiz, as well as the highest score they obtained on each quiz.
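This per-student aggregation can be sketched as follows; the record layout, identifiers, and scores below are invented for illustration and are not the study's actual Moodle log format:

```python
from collections import defaultdict

# Hypothetical access-log records: (student_id, quiz_id, attempt_score)
attempts = [
    ("s01", "constipation", 60),
    ("s01", "constipation", 85),
    ("s01", "reflux", 70),
    ("s02", "reflux", 90),
]

def summarize_attempts(records):
    """Reduce raw attempt records to, per student and quiz,
    the number of attempts and the highest score obtained."""
    summary = defaultdict(lambda: {"n_attempts": 0, "best_score": 0})
    for student, quiz, score in records:
        entry = summary[(student, quiz)]
        entry["n_attempts"] += 1
        entry["best_score"] = max(entry["best_score"], score)
    return dict(summary)

summary = summarize_attempts(attempts)
print(summary[("s01", "constipation")])  # {'n_attempts': 2, 'best_score': 85}
```

Keeping only the highest score per quiz mirrors the convention used in the regression analysis below, where repeated attempts at the same quiz are represented by the best result.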
Each exam session also included 34–36 other items (i.e., non-experimental) covering a range of related topics. These items were different in each exam session. The topics tested using these items were taught in seminars, problem-based learning tutorials, and/or e-learning chapters. The corresponding scores could thus be used as controlling factors for estimating each student's level.
Evaluation of student satisfaction
All the students were invited to evaluate teaching activities (on knowledge acquisition, the clarity of learning objectives, achieving learning objectives, curriculum adequacy, teacher preparedness, e-learning, and traditional lecture or tutorial learning activities) at the end of their eight-week pediatrics clerkship. Four-point Likert scale items were used in all our institutional surveys.
Analysis
Scores (0–100) were calculated as the points scored divided by the maximum points achievable, expressed as a percentage. Data were summarized using means, medians, and interquartile ranges (IQRs), and Student's t-tests were used to compare the two groups' scores.
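The group comparison can be illustrated with a pooled two-sample Student's t statistic, computed here in plain Python as a stand-in for the study's R analysis; the scores below are invented for illustration:

```python
import math
from statistics import mean, variance

def students_t(group_a, group_b):
    """Pooled two-sample Student's t statistic (equal variances assumed)."""
    n1, n2 = len(group_a), len(group_b)
    # Pooled variance across the two groups
    s2 = ((n1 - 1) * variance(group_a) + (n2 - 1) * variance(group_b)) / (n1 + n2 - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(s2 * (1 / n1 + 1 / n2))

# Hypothetical TFEQ percentage scores for the two intakes
seminar_scores = [80, 70, 90]    # e.g., 2016-17 intake
elearning_scores = [60, 50, 70]  # e.g., 2017-18 intake
t = students_t(seminar_scores, elearning_scores)  # about 2.45
```

The resulting statistic is referred to a t distribution with n1 + n2 − 2 degrees of freedom to obtain a p-value (in practice, a library routine such as R's `t.test` handles this directly).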
We used multivariate linear regressions to investigate associations between EEQ scores and all the exam's other item scores, the number of quizzes taken, the mean scores obtained for the different quizzes (if a quiz was attempted several times, we considered the highest score), sex, and a cohort effect. All analyses were performed using R, version 3.6.3 (R Foundation for Statistical Computing, Vienna, Austria).
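A minimal sketch of such a regression, fitted here by ordinary least squares in Python/NumPy rather than R, with invented per-student data and assumed predictor columns (other-item exam score, number of quizzes taken, mean best quiz score); categorical factors such as sex and cohort would enter as additional 0/1 indicator columns:

```python
import numpy as np

# Hypothetical per-student predictors (assumed columns, not the study's data):
# [other-item exam score, number of quizzes taken, mean best quiz score]
X = np.array([
    [62.0, 5, 70.0],
    [75.0, 7, 82.0],
    [58.0, 4, 66.0],
    [81.0, 9, 88.0],
    [69.0, 6, 75.0],
])
y = np.array([60.0, 78.0, 55.0, 85.0, 72.0])  # EEQ scores (invented)

# Ordinary least squares with an explicit intercept column
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
pred = X1 @ coef
residuals = y - pred
```

The fitted coefficients play the same role as those reported by R's `lm`; with an intercept included, the residuals sum to zero, which is a quick sanity check on the fit.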