This is a pragmatically conducted, exploratory, mixed methods case study guided by the Framework for Integrated Methodologies (FraIM) (Plowright, 2011). Both quantitative and qualitative data were collected simultaneously. Analysis focused first on the quantitative data, followed by a search for confirming and disconfirming evidence within the qualitative data. Warrantability is a proxy for validity within FraIM (Plowright, 2011); it requires sufficient reporting transparency to allow readers to judge the extent to which claims are warranted within the reported contextual constraints. In addition to attempting to provide this transparency, the second facilitator served as a critical reader and confirmed that the descriptions, findings, and conclusions aligned with her experiences in the intervention.
Intervention
The year-long intervention was spread over 30 sessions, comprising 140 hours of contact time. All the sessions were conducted in a computer laboratory on a university campus, with each learner working at an internet-connected computer. The first six months of the intervention were devoted to topic origination, involved 59 hours of contact time over 12 sessions, and were conducted by a single facilitator, the researcher. The term topic origination is taken to include explaining the main terms and concepts of the chosen topic, identifying independent and dependent variables, formulating a focus question, identifying the variables to be controlled, the treatments to be set up, and the equipment needed, and describing the data collection process.
During these topic origination sessions, the learners worked through the first nine steps (Introduction, Topic, Literature, Referencing, Background knowledge and brainstorming, Evaluation, Variables and focus questions, Fair testing, Method and hypothesis) within the online programme Your Science Fair Investigation, available at https://www.learnscience.co.za/challenge-page/your-first-science-fair-investigation. This programme first exposes learners to previous learners’ project ideas through photographs, written stories, and videos, directs learners to explore the Science Buddies website, and requires them to list and brainstorm their interests. After this, they are taught how to access, summarise and reference literature, and they practise these skills. They are guided to identify variables and their indicators, pose focus questions and hypotheses, and design fair tests, and are introduced to quantitative and qualitative measurements, instruments, and their units.
The intervention also included a structured, intelligent Excel sheet called the Investigation guide that the learners incrementally completed, guided by the face-to-face teaching sessions, the online programme, and individual discussions with the facilitator. The learners were also occasionally required to explain their evolving ideas to one another in small groups and were frequently encouraged to help one another informally. Each learner’s Investigation guide, hosted on OneDrive, was always accessible to both the learner and facilitator. The facilitator could also view the learners’ responses within the online programme through its back end. Throughout this period, the facilitator accessed each learner’s work and provided individualised written feedback via email between successive sessions. Few of the learners had even basic computer literacy skills at the start of the intervention, so a session early in the intervention period was dedicated to setting up email accounts and teaching the learners computer usage basics. Even after this, considerable time was spent in each session, particularly at the start of the intervention, helping the learners develop computer literacy skills.
Each session began with an approximately hour-long direct instruction period in which the facilitator taught a face-to-face version of the corresponding part of the online programme. This was followed by a period of individual work lasting between two and six hours. The length of each session depended on transport arrangements: no transport budget could be obtained, necessitating dependence on rather erratic help from various sources. During the individual work period, the learners were given a list of tasks to perform. These always included engaging with the written feedback from the facilitator, completing the relevant activities in the online programme and the relevant sections of the intelligent Investigation guide, and asking for help from the facilitator as required. After helping those who asked, the facilitator individually checked on as many other learners as time allowed.
Much of what was done in these first six months of the intervention, particularly during the first two hours of each session, was injection pedagogy (Aylward & Cronjé, 2022), since the aim was to teach the learners how to conduct an experimental investigation through direct face-to-face instruction, supplemented by activities in the online programme. It soon became clear that the learners’ knowledge of science, understanding of how to conduct an experimental investigation validly, and reading skills were limited, as the literature establishes even for higher-achieving learners from low-quintile South African schools (Stott & Beelders, 2019; Stott & Duvenhage, 2023). Therefore, beginning the intervention with a primarily teacher-centred injection pedagogy was appropriate (Aylward & Cronjé, 2022). The intelligent electronic Investigation guide was a scaffolding tool to support the learners in applying what they learned through injection to their project, moving them along the vertical, constructivist axis in Fig. 1.
Figure 2 represents the number of learners over the duration of the intervention, according to the origin of their project topic. This was informed by records, per learner, of attendance, facilitator guidance, and topic for each of the 30 sessions across the intervention, coupled with field notes written shortly after each session. As indicated on the left-hand side of this diagram, 24 of the 32 learners decided, early in the intervention, either on a topic they got from the internet (9) or on their own idea (15). For six of the learners who formulated their own ideas, the facilitator substantially modified the topic a few months into the process, to the extent that it could better be described as coming from an adult after the learner had displayed some competence. It should be noted that the facilitator continually gave all the learners individualised feedback and that all topics evolved.
Insert Fig. 2 here
As indicated in Fig. 2, an additional facilitator joined the intervention at the end of the first six months and provided a topic to the nine learners who still needed to formulate a viable topic. The 23 learners who completed their projects fell into the topic-origin categories: internet (5), self (7), adult after the learner had displayed competence, referred to hereafter as adult-after-competence (4), and adult (7). The facilitators each had over ten years of experience teaching the sciences at the high school level. During that time, they mentored learners to produce high-quality projects, some of which won medals at regional, national, and international science fairs.
These groups are not experimentally comparable since they differ in many ways, invalidating any positivist comparison. However, when interpreted within the constraints of how these groups arose, as explained below and elsewhere in this article, the data collected about members of each group are considered valuable for understanding the advisability of each way of arriving at a research topic for a science fair project.
The facilitators sufficiently modified some learners' topics for them to be classified in the adult-after-competence group. Reasons for this include the learners’ receptivity to the facilitator's input, their self-direction and introversion, the scope their self-originated topic offered for improvement, and local conditions. Four examples illustrate how these factors played out. Learner A initially showed an interest in the heat conductivity of various materials. The facilitator suggested asking elders in his community about indigenous forms of thermal insulation, such as local materials used in insulating hot pot mats. Over the next six months, he was absent for some sessions, failed to submit work for checking, rarely asked for help, and responded only superficially to e-mail and verbal prompts from the facilitator. At that point, all learners without viable topics were given three options: to propose one by the following week, to receive a new topic from the new facilitator, or to drop out of the programme. Learner A chose the first option, proposing to test the time it took water to boil in each of two pots of different materials. He therefore fell into the self-originated topic group. Learners B and C both serve as examples of the adult-after-competence group. Learner B completed an internet-inspired project within the programme's first six months. A few weeks after the second facilitator arrived, one of the learners to whom she had given an idea dropped out of the programme. Learner B took on that idea under the second facilitator's guidance and submitted that project, rather than his first one, at the science fair. Early in the intervention, Learner C showed interest in soil types.
Guided by the individual written feedback that the facilitator gave to all the learners, her topic submissions evolved over the first few months: from testing how well plants grow in various types of soil, to measuring soil pH and soil clay content, to the influence of soil clay content on water expansion. Under the facilitator's general guidance, Learner C produced a complete plan for investigating the last of these, thereby displaying competence in investigation design. At this point, the facilitator consulted an expert in soil mechanics, who provided Learner C with a modified topic: the rate of water expansion of montmorillonite relative to that of a clay known to cause structural damage, and the testing of a simple device to determine water expansion rate. This expert provided the necessary equipment and specialist guidance from that point onward. Learner D, an introverted and self-directed learner, chose a viable project from the internet early in the intervention and rarely asked for help. Since she was observed to work diligently and competently, the facilitators tended to overlook her as they focused on learners who more clearly appeared to need their help. The facilitator therefore did not suggest a change to Learner D’s internet-generated topic despite her display of competence.
Data collection
To answer the first research question, regarding the help the learners received, the 23 learners who completed the intervention answered a questionnaire comprising 28 closed-response items and one open-response item at the intervention’s conclusion. The purpose of this research question was to evaluate the validity of the assumption that the learners received negligible help in deriving their topic beyond what was provided in the intervention and, therefore, known to the researcher.
To answer the second research question, regarding the learners’ perceptions of learning value and the extent to which their BPNs were satisfied, questionnaire data were collected at the end of the intervention from the 23 learners who completed it. This research question aimed to determine whether the first criterion of advisability listed in the Introduction was met for each group. The questionnaire data were of two main types. For both types, the Likert scale items had five response options with the descriptors: 1: strongly disagree, 2: disagree, 3: neutral, 4: agree, 5: strongly agree. The first type comprised 58 Likert items and four open items regarding the learners’ perception of the learning they underwent, the enjoyment they experienced, and the value they attached to various components of the intervention. The second was the basic psychological needs satisfaction index questionnaire (BPNS) obtained from Van der Kaap-Deeder et al. (2020) and modified slightly to refer to the intervention. This consisted of 28 Likert items measuring perceptions of competence (6 items), relatedness to peers (8), relatedness to adults (5), and autonomy (9).
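The BPNS data described above are naturally aggregated per subscale. The article does not specify the aggregation procedure, but a common approach is to average each learner's responses within each subscale. The sketch below assumes that approach and an item ordering grouped by subscale; the function and variable names are hypothetical.

```python
# Subscale sizes as described in the text: 6 + 8 + 5 + 9 = 28 items.
SUBSCALES = {
    "competence": 6,
    "relatedness_peers": 8,
    "relatedness_adults": 5,
    "autonomy": 9,
}

def bpns_subscale_means(responses):
    """Return the mean Likert score (1-5) per BPNS subscale.

    responses: a list of 28 scores, assumed (for illustration) to be
    ordered by subscale as in SUBSCALES.
    """
    assert len(responses) == 28, "BPNS has 28 items"
    assert all(1 <= r <= 5 for r in responses), "5-point Likert scores"
    means, start = {}, 0
    for name, n_items in SUBSCALES.items():
        means[name] = sum(responses[start:start + n_items]) / n_items
        start += n_items
    return means
```

For example, a learner answering "agree" (4) to every autonomy item and "neutral" (3) to everything else would receive an autonomy mean of 4.0 and 3.0 on the other three subscales.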
An index of project quality was derived per learner to answer the third research question, regarding the quality of the learners’ projects under the various ways of arriving at a project topic. To enhance validity, this index was derived from two equally weighted sources, as indicated in Table 1: (a) the quantified outcome of the EYS competition; and (b) a facilitator quality rating derived from seven 5-point Likert-scale questions, answered at the end of the intervention by the facilitator responsible for that learner. The answer to the third research question is foundational to answering the fourth research question.
Table 1
Derivation of the project quality rating (/12)
| Information source | Item | Score |
|---|---|---|
| Outcomes of the EYS competition | Medal at the regional competition | Bronze: 1; Silver: 2; Gold: 3 |
| | Shortlisting for the national competition | 1 additional point |
| | Special prize at the regional or national competition | 1 additional point for each prize |
| | *Subtotal* | 6 points maximum |
| Facilitator rating | Topic quality | 5-point Likert scale for each of the seven items, giving a maximum of 35 points |
| | Method rigour, depth, quality | |
| | Quality of data representation | |
| | Quality of data analysis, limitations, significance and conclusions | |
| | Quality of report | |
| | Quality of poster | |
| | Quality of verbal discussion | |
| | *Subtotal* | Scaled to 6 points maximum |
| TOTAL | | 12 points maximum |
Insert Table 1 here
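The arithmetic behind Table 1 can be sketched as follows. The competition component is capped at 6 points, and the seven facilitator Likert scores (maximum 35 raw points) are scaled to a maximum of 6, giving a total out of 12. The function and parameter names are hypothetical, and linear scaling of the facilitator subtotal is an assumption, since the article states only that it is "scaled to 6 points maximum".

```python
MEDAL_POINTS = {"none": 0, "bronze": 1, "silver": 2, "gold": 3}

def project_quality_rating(medal, shortlisted, special_prizes, facilitator_likert):
    """Illustrative computation of the /12 project quality rating in Table 1.

    medal: "none", "bronze", "silver" or "gold" at the regional competition
    shortlisted: True if shortlisted for the national competition
    special_prizes: number of special prizes at regional or national level
    facilitator_likert: seven 5-point Likert scores from the facilitator
    """
    # Competition component: medal + 1 per additional item, capped at 6.
    competition = min(MEDAL_POINTS[medal] + int(shortlisted) + special_prizes, 6)
    # Facilitator component: raw score out of 35, scaled linearly to 6
    # (the linear scaling is an assumption for illustration).
    assert len(facilitator_likert) == 7
    assert all(1 <= score <= 5 for score in facilitator_likert)
    facilitator = sum(facilitator_likert) / 35 * 6
    return competition + facilitator  # out of 12
```

Under these assumptions, a gold medallist who was shortlisted, won two special prizes, and received straight 5s from the facilitator would score the maximum of 12.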
The fourth research question relates output quality to input cost for each topic-origin group. The quality index derived for the third research question was used as a proxy for output quality. To obtain an index representing input cost, at the end of the intervention each facilitator answered 5-point Likert questions, for each learner they had facilitated, regarding the amount of help they had provided throughout the intervention on each of six aspects of the project: choosing a topic, planning the method, collecting the data, representing and analysing the data, writing the report and poster, and editing the report and poster. This research question aimed to determine whether the projects in the various topic-source groups met the second criterion for advisability listed in the Introduction and to differentiate further those that met both advisability criteria based on their quality.