Research design
This study integrates a quasi-experimental design with quantitative triangulation. The quasi-experimental design allowed the effects of the interventions to be examined in intact (natural) groups, while quantitative triangulation combined multiple quantitative techniques and analysis methods to improve the accuracy and scope of the data obtained. This combined approach strengthened the internal validity of the research, enhanced the generalizability and reliability of the findings, and supported detailed, multi-faceted answers to complex research questions (Creswell, 2014; Shadish et al., 2002; Tashakkori & Teddlie, 2003). The integration also aimed to minimize the limitations of each approach by merging the structural discipline of the quasi-experimental design with the methodological richness of quantitative triangulation. An overview of this methodology and its implementation is presented in Table 1.
Table 1. Quasi-Experimental Design: Comparative Analysis of Educational Approaches in Medical Training*
| Groups | | Pre-experimental | Experimental process | Post-experimental |
| --- | --- | --- | --- | --- |
| Control | CG | Pretest (AT) | Traditional Educational Environment | Posttest (AT + CBCTT) |
| Experimental | EG1 | Pretest (AT) | Case-Based Learning Approach | Posttest (AT + CBCTT) |
| Experimental | EG2 | Pretest (AT) | Case-Based Learning Approach Supported by Concept Mapping | Posttest (AT + CBCTT) |

*Note. AT: Achievement Test; CBCTT: Case-Based Critical Thinking Test. Independent variable: Educational Environment Approaches (Traditional, CBL, CBL with CMs); Dependent variable: Achievement. The groups evaluated in this study are the Control Group (CG), the CBL Experimental Group (EG1), and the CBL Supported by CMs Experimental Group (EG2).
In addition to the teaching programs applied to the students, a two-stage evaluation process was followed to determine the effectiveness of the CBL approach and the CBL approach supported by CMs. In the first stage, an achievement test was administered in a pretest/posttest format to measure comprehension of theoretical knowledge: the test developed by the researchers was administered twice, eight weeks apart, and changes in students' academic achievement were examined in detail. In the second stage, a purpose-built test based on a new case, independent of the cases studied in class, was administered. This stage assessed not only the students' theoretical knowledge but also how they applied that knowledge to new and uncertain situations. This multi-layered evaluation allowed the effectiveness of the learning strategies to be analyzed from multiple angles and was intended to increase the generalizability and reliability of the results.
Study group
Participants were medical school students selected through purposive sampling, specifically criterion sampling. Care was taken to meet the criteria established for the study's objectives, and several factors guided the selection of the study groups: continuity of the experimental procedures, ease of access to participants, similar pre-course knowledge levels, and voluntary participation. Accordingly, students from three different groups enrolled in an elective course on Security and Ethics in Medical Informatics, taught by the same instructor, formed the study groups. A pre-test was administered to the three predetermined groups to check the homogeneity and normality of their baseline knowledge distribution (a sketch of such checks follows the list below). Based on the test results, the students were assigned to the following groups:
- Control Group (CG): This group received no intervention and continued with traditional educational methods.
- Case-Based Learning Experimental Group (EG1): This group was educated with CBL methods.
- Case-Based Learning Supported by Concept Mapping Experimental Group (EG2): This group received education with both CBL methods and concept mapping techniques.
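The paper does not name the statistical procedures behind these baseline checks; the sketch below assumes conventional choices (Shapiro-Wilk for normality, Levene's test for variance homogeneity, and a one-way ANOVA on pre-test means) applied to hypothetical score arrays, not the study's actual data.

```python
# Minimal sketch of baseline-equivalence checks on hypothetical pre-test
# scores. The specific tests are assumptions: Shapiro-Wilk (normality) and
# Levene's test (variance homogeneity) are conventional for this purpose.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
cg = rng.normal(50, 10, 27)   # hypothetical pre-test scores, Control Group
eg1 = rng.normal(50, 10, 26)  # hypothetical scores, CBL group (EG1)
eg2 = rng.normal(50, 10, 26)  # hypothetical scores, CBL + CM group (EG2)

# Normality within each group (Shapiro-Wilk)
for name, scores in [("CG", cg), ("EG1", eg1), ("EG2", eg2)]:
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# Homogeneity of variances across groups (Levene's test)
stat, p = stats.levene(cg, eg1, eg2)
print(f"Levene: W={stat:.3f}, p={p:.3f}")

# Baseline equivalence of means (one-way ANOVA on pre-test scores)
f, p = stats.f_oneway(cg, eg1, eg2)
print(f"Pre-test ANOVA: F={f:.3f}, p={p:.3f}")
```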
Table 2. Gender Distribution of the Study Group
| Study groups | | Female (N) | Female (%) | Male (N) | Male (%) | Total (N) | Total (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Control Group | CG | 14 | 51.9 | 13 | 48.1 | 27 | 34.2 |
| Experimental Groups | EG1 | 13 | 50.0 | 13 | 50.0 | 26 | 32.9 |
| | EG2 | 14 | 53.8 | 12 | 46.2 | 26 | 32.9 |
| Total | | 41 | 51.9 | 38 | 48.1 | 79 | 100.0 |
As detailed in Table 2, the study included a total of 79 students: 27 in the Control Group (CG; 14 female, 13 male), 26 in the Case-Based Learning Experimental Group (EG1; 13 female, 13 male), and 26 in the Case-Based Learning Supported by Concept Mapping Experimental Group (EG2; 14 female, 12 male). All students completed the post-test administrations, so the study concluded without attrition.
Educational methods and standardization
This study was conducted with the approval of the Çanakkale Onsekiz Mart University Ethics Commission (authorization number 2023-YÖNP-0614). All participating students received face-to-face instruction from the same lecturer, who followed a standardized curriculum throughout the research.
Creation and implementation of the achievement tests
This study was conducted within the scope of the "Security and Ethics in Medical Informatics" course, which focuses on developing skills related to information security and ethical issues in the use of computing technologies in medicine. For this elective course in the medical education curriculum, field experts prepared a 40-question test designed to measure each learning outcome with at least three questions. Before a preliminary trial, the clarity, understandability, and alignment of the questions with the course outcomes were reviewed by three faculty members specializing in medical informatics, ethics, and information security, together with an academician specializing in assessment and evaluation. The clarity and understandability of these multiple-choice questions were assessed from an interdisciplinary perspective and confirmed to meet academic standards. The questions, each written to have a single correct answer aligned with the course outcomes, underwent item analysis after the experts confirmed content validity. The item analysis involved 93 students at the same educational level from the university where the study was conducted and two other universities. Each multiple-choice question offered five options; correct answers were scored as 1 point and incorrect or unanswered questions as 0. The internal consistency of the test scores was examined with the Kuder-Richardson 20 (KR-20) reliability coefficient, and the relationship between item scores and the total test score was calculated using item-total correlation. Items with an item-total correlation of .30 or higher are considered to discriminate well between individuals, and a reliability coefficient of .70 or higher is generally regarded as sufficient (Fraenkel & Wallen, 2011; Nunnally & Bernstein, 1994). The analyses yielded item-total correlations ranging from .32 to .65, a mean discrimination index of .431, and a KR-20 reliability of .818 for the achievement test. Students in the experimental and control groups were not involved in developing the achievement test and encountered the questions for the first time in the pre-test phase.
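To make the reported statistics concrete, the sketch below computes KR-20 and corrected item-total correlations on a simulated binary response matrix; the data-generating model and array names are illustrative assumptions, not the study's pilot data.

```python
# Item analysis sketch on a hypothetical 0/1 response matrix of shape
# (students, items). Responses are simulated with a simple one-parameter
# (Rasch-style) model so that items correlate, as real test data would.
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, 93)        # 93 pilot students
difficulty = rng.normal(0, 1, 40)     # 40 items
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((93, 40)) < prob).astype(int)

def kr20(x: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a 0/1 response matrix."""
    k = x.shape[1]
    p = x.mean(axis=0)                       # proportion correct per item
    q = 1 - p
    total_var = x.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

def item_total_correlations(x: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item vs. the sum of the rest
    (one common variant; the paper does not specify which was used)."""
    totals = x.sum(axis=1)
    return np.array([
        np.corrcoef(x[:, j], totals - x[:, j])[0, 1] for j in range(x.shape[1])
    ])

r_it = item_total_correlations(responses)
print(f"KR-20 = {kr20(responses):.3f}")
print(f"item-total r range: {r_it.min():.2f} to {r_it.max():.2f}")
# Items with r_it >= .30 are conventionally retained as well-discriminating.
```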
In the second phase of the assessment, the Case-Based Critical Thinking test was administered to measure metacognitive skills such as critical thinking, problem solving, and adaptation to new and uncertain situations. In the post-test phase, the analyses produced by all groups in response to a new case were examined. This test, comprising short-answer questions, was designed to evaluate how students applied what they had learned to previously unencountered situations and was developed in collaboration with the same faculty experts who contributed to the first assessment tool. Short-answer questions were chosen because they measure students' metacognitive skills: they reveal the ability to link theoretical knowledge with practical scenarios and to develop independent solutions, allowing a more accurate assessment of deep understanding and application skills. To evaluate the reliability and validity of the questions, a pilot test was administered to a small group of 13 students with characteristics similar to the study groups. Based on the results, the clarity and measurement effectiveness of the questions were reviewed, and their discriminative power was determined through analysis of student responses. At this stage, the questions were scored objectively by an expert group, and the resulting scores were used to assess the overall reliability of the test and its suitability as a measurement tool. Each question was structured to have one correct answer related to the final decision on the case.
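The paper does not name the statistic used to check scoring consistency across the expert group; the sketch below assumes, purely for illustration, that two experts independently scored each short answer 0/1, and reports raw agreement plus Cohen's kappa on made-up ratings.

```python
# Hypothetical scoring-consistency check for the short-answer test:
# raw agreement and Cohen's kappa between two assumed independent raters.
import numpy as np

rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1])  # made-up scores
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1])  # made-up scores

agreement = (rater_a == rater_b).mean()

# Cohen's kappa: observed agreement corrected for chance agreement.
p_both_yes = rater_a.mean() * rater_b.mean()
p_both_no = (1 - rater_a.mean()) * (1 - rater_b.mean())
p_chance = p_both_yes + p_both_no
kappa = (agreement - p_chance) / (1 - p_chance)
print(f"raw agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```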
Case development and integration into the process
In this study, cases were meticulously prepared with a focus on developing skills related to information security and ethics in the medical field, taking into account the intended learning outcomes. Every step of the process was designed to reflect real-world scenarios that students might encounter, maximizing the applicability of theoretical knowledge to practice. Initially, core topics appropriate to the course's learning objectives and the curriculum's demands were selected; these topics were chosen to encompass the range of situations students might face in medical informatics, information security, and ethics. The literature on the chosen topics was then reviewed thoroughly, and real-world cases were investigated. At this stage, the expert faculty members who had contributed to the assessment tools were consulted; this group assessed the cases' educational value, realism, and alignment with the learning objectives, guiding their preparation.

Based on the group's recommendations, detailed case scenarios were developed for each topic. The scenarios were designed to test students' critical thinking, problem-solving, and decision-making abilities and to reflect realistic situations. The prepared cases were reviewed again by the expert group for content accuracy and educational value, and the necessary revisions were made. A pilot test was then conducted with a small group of students to evaluate the clarity of the cases, student interaction, and educational effectiveness; student feedback informed the final adjustments. The revised cases were presented as part of the course, and students' responses were evaluated against predetermined criteria to analyze the cases' educational effectiveness and impact on learning outcomes. When necessary, the cases were further improved for future implementations. Throughout this process, the insights of the expert faculty members enabled a multidisciplinary approach to developing cases that met academic standards.
Implementation and evaluation of learning methods
The course instructor administered an achievement test during the first week of the course, focused on the concepts covered in the curriculum, to determine the students' prior knowledge levels. The data from this pre-test were used as a basis for forming the study groups. No interventions were made in the control group during the process, and they continued with the existing educational programs. In the experimental groups where CBL (EG1) and CBL supported by CMs (EG2) were implemented, comprehensive information about the process and its evaluation was provided in the second week of the course (see Figure 1).
Figure 1. Comparative Flowchart of Educational Interventions for Experimental and Control Groups
In the third week of the course, the information sessions continued; in addition to the training given to EG1, the EG2 group received comprehensive training on concept mapping techniques. In this session, the relationship between cases and CMs was discussed, and strategies for creating CMs (manual or digital) were demonstrated with examples (for details, see Appendix A1). At the end of this informational process, the first case, to be evaluated in the fourth week of the course, was distributed with instructions (for an example, see Appendix A2). Each student was expected to submit an analysis of this first case, provided in print, before the next class session (in print or digital format). Unlike EG1, students in EG2 were asked to apply the concept mapping strategy in their case analyses, based on the information acquired during the course, and to submit the CMs they created along with their analyses (for an example, see Appendix A3). During class, the students' prepared analyses were discussed and in-class analyses were conducted; EG1 focused on the cases, while EG2 also worked through the concept-mapping analyses. At the end of each class, the case for the next session was distributed in print, and students were asked to submit their analyses before the next class. This sequence produced one case analysis per week for a total of five weeks. With pre-tests and post-tests conducted eight weeks apart, the achievement test containing the pre-test questions was administered to all groups in the week after the case analyses were completed. The Case-Based Critical Thinking test developed for the second evaluation phase was then administered to all groups, completing the study.
Data collection
In the study, a two-stage evaluation was conducted to observe changes in students' understanding of information security and computer ethics. In the first stage, an achievement test focusing on the concepts covered in the course was administered to all study groups at the same time, at both the beginning and the end of the educational processes. In the second stage, the Case-Based Critical Thinking test, which measures metacognitive skills, was administered at the end of the educational processes, completing data collection. The achievement tests were administered to all study groups in a controlled classroom environment, in accordance with ethical standards and after informed consent was obtained. Standardized procedures ensured equal conditions for every student.
Data analysis
At the beginning and end of the study, the multiple-choice achievement test was scored by awarding 1 point for each correct answer and 0 points for incorrect or unanswered questions. Because the groups were homogeneous, the assumption of normal distribution was met, and the three groups had similar levels of prior knowledge, parametric tests were used for the data analysis. To test the effectiveness of the learning environments, a 3×2 split-plot design was used, and a two-way repeated-measures ANOVA was performed for this research question. Similarly, the Case-Based Critical Thinking test administered in the second evaluation phase awarded 1 point for each correct answer and 0 points for incorrect or unanswered questions; these data were analyzed with a one-way ANOVA to compare the effectiveness of the three learning environments. Differences between group mean scores were interpreted at the .05 significance level, and post-hoc tests were used to locate the source of significant differences where necessary. In addition, Cohen's d was computed to determine the magnitude of the learning environments' effect on student achievement. By Cohen's (1988) benchmarks, effect sizes are typically classified as small (d = 0.2), medium (d = 0.5), or large (d = 0.8). These evaluations help establish the practical significance of the findings and quantify the impact of the learning environments on student achievement.
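As an illustration of this pipeline, the sketch below runs a mixed (3 between × 2 within) ANOVA, a one-way ANOVA for the second phase, Tukey post-hoc comparisons, and a pooled-SD Cohen's d on hypothetical data; the library choices (pingouin, SciPy) are assumptions, since the paper does not name its software, and none of the study's actual data are reproduced.

```python
# Minimal sketch of the analysis pipeline on hypothetical data: a mixed
# 3 (group, between) x 2 (time, within) ANOVA, a one-way ANOVA for the
# second evaluation phase, Tukey HSD post-hoc tests, and Cohen's d.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(1)
group_labels = ["CG"] * 27 + ["EG1"] * 26 + ["EG2"] * 26
pre = rng.normal(50, 10, 79)        # hypothetical pre-test scores
post = pre + rng.normal(12, 8, 79)  # hypothetical post-test scores

long = pd.DataFrame({
    "id": np.tile(np.arange(79), 2),
    "group": group_labels * 2,
    "time": ["pre"] * 79 + ["post"] * 79,
    "score": np.concatenate([pre, post]),
})

# 3x2 split-plot design: two-way repeated-measures (mixed) ANOVA
print(pg.mixed_anova(data=long, dv="score", within="time",
                     subject="id", between="group"))

# Second phase: one-way ANOVA on hypothetical CBCTT scores
cbctt = pd.DataFrame({
    "group": group_labels,
    "score": np.concatenate([rng.normal(20, 5, 27),
                             rng.normal(24, 5, 26),
                             rng.normal(27, 5, 26)]),
})
f, p = stats.f_oneway(*[g["score"] for _, g in cbctt.groupby("group")])
print(f"One-way ANOVA: F={f:.2f}, p={p:.3f}")

# Post-hoc Tukey HSD to locate the source of any significant difference
print(pg.pairwise_tukey(data=cbctt, dv="score", between="group"))

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation (Cohen, 1988)."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

eg2 = cbctt.loc[cbctt.group == "EG2", "score"]
cg = cbctt.loc[cbctt.group == "CG", "score"]
print(f"d(EG2 vs CG) = {cohens_d(eg2, cg):.2f}")  # 0.2 small, 0.5 medium, 0.8 large
```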