The three major findings of this study of nursing students in a pharmacology course are that, for passing students, (i) marks are higher for ongoing assessment than for examinations and (ii) there are only very weak to moderate relationships between the marks obtained in examinations and in ongoing assessment, and, for completing students, (iii) increasing the marks allocated to examinations decreased the number of students who passed the course, whereas decreasing them increased the number passing.
Marks are higher for ongoing assessment than examinations
This is the first study to show that marks for ongoing assessment are higher than those for examinations for nursing students in a pharmacology course. Similar findings have been reported for bioscience courses undertaken by nursing students [22] and for science students [14], and they confirm previous findings of higher marks for ongoing assessment at the program level [9-12].
There are several possible reasons for this disparity between marks in examinations and ongoing assessment. The most obvious is that examination results represent the work of the individual student, whereas ongoing assessment marks may represent that of individuals or groups of students. In the present study, the tutorial mark of 20% is partly a group mark: it is composed of 10% for unsupervised preparation/homework, which can be individual or group work, and 10% for participation, which is a group mark. This makes it possible for the performance of weak students, and their marks in tutorials, to be artificially enhanced by better students in the group. The assignment component of the ongoing assessment (20%) should represent work undertaken by the individual student, but as it was unsupervised, there was nothing to prevent students from colluding. One way to overcome this would be to remove group work from courses. However, group work is a very important skill for nursing students. Thus, we need either to overcome the ongoing problem of assessing individuals within group work [23,24] or to use an alternative approach to ensure that students do not pass courses on the basis of work done by others in ongoing assessment.
For group assignments, self- and peer-rating has been used to account for varying contributions by students in the humanities [25] and in postgraduate nursing/midwifery studies [26]. However, this method is not usually applied to weekly tutorials, including those for nursing students. When it was applied to problem-based learning tutorials for medical students, self-ratings did not correlate, and peer-ratings only weakly correlated, with tutor ratings of the students [27]. Thus, this method has not been shown to give a reliable measure of a student's achievement in weekly tutorials. Furthermore, it would be very time consuming and expensive to undertake such assessment for weekly one-hour tutorials in a large cohort. For instance, the pharmacology tutorials for nursing students in the present study ran weekly over 13 weeks, in groups of 25, for cohorts of 250 or 350 students. Thus, self- and peer-ratings of tutorials are not routinely undertaken for large groups on a regular basis.
In the pharmacology course, 55% of the 60% of marks allocated to examinations were in the form of MCQs. When MCQs are used, the fairest option is to focus on the number of questions attempted and to penalize wrong answers, as blind guessing will then, on average, not help the student [28]. Many universities, including the one at which this study was undertaken, do not deduct marks for incorrectly answered MCQs, and this inflates the MCQ marks [28]. In the pharmacology course studied, this could have inflated the marks for MCQs by ~20% and the overall mark in the examination by 11% of the 60% of marks. Thus, students who fail the examination in pharmacology by achieving less than 30% of the 60% of marks available are clearly demonstrating a poor knowledge of pharmacology, especially as some of their marks may be due to blind guessing.
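The arithmetic behind the ~20% and 11% figures above can be sketched as follows. This is an illustrative calculation only: the five-option MCQ format is an assumption made here to yield the ~20% expected guessing score; the 55% and 60% weightings are those described for the course.

```python
# Hypothetical illustration of how blind guessing can inflate MCQ marks
# when wrong answers are not penalized. The five-option format is an
# assumption; the weightings follow the course structure described above.

N_OPTIONS = 5                       # assumed number of options per MCQ
P_GUESS = 1 / N_OPTIONS             # expected score from blind guessing = 0.20

MCQ_SHARE_OF_EXAM = 0.55            # 55% of examination marks are MCQs
EXAM_SHARE_OF_COURSE = 0.60         # examinations carry 60% of course marks

# Expected inflation of the examination mark from guessing alone:
exam_inflation = P_GUESS * MCQ_SHARE_OF_EXAM            # 0.11, i.e. 11%

# Expressed in course percentage points (out of the 60 allocated to exams):
course_inflation = exam_inflation * EXAM_SHARE_OF_COURSE * 100

print(f"Guessing inflates the examination mark by {exam_inflation:.0%}")
print(f"i.e. about {course_inflation:.1f} course percentage points")
```

In other words, a student who answered every MCQ at random would, on average, already hold around 11% of the examination marks before demonstrating any pharmacology knowledge.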
Performance in ongoing assessment is a very weak to moderate predictor of performance in exams
In this study, we showed that for nursing students in pharmacology, marks in a written assignment were very weak to moderate predictors of performance in examinations. A previous study showed a weak correlation (like this study, using Pearson’s coefficient) between marks in a research project and the final examination in a pharmacy course [13]. It would be of interest to know whether this finding relating to assignments/projects applies to students in other disciplines.
In addition, the present study showed that marks in tutorials, which included a homework component, are not good predictors of academic performance in examinations. This is the first time that this has been shown for nursing students or in a pharmacology course. However, this finding is not consistent for all disciplines, as marked tutorials have been shown to improve marks for courses in calculus, macroeconomics [16], finance [17], and law [20].
Altering the marks allocated to examinations changed the number of students who failed or passed
Increasing the marks allocated to examinations increased the number of students who failed the course and decreased the number who passed. In the present study, with 60% of marks allocated to examinations and 40% to ongoing assessment, the number of students who failed the pharmacology course was low (5-8%). With this low failure rate, there was little scope to increase the passing rate by changing the allocation of marks, and our modelling confirmed this: the passing rate could be increased by only 2-6 percentage points by allocating more marks to ongoing assessment. With the existing allocation, the passing rate was high (92-95%), and this occurred despite 20-26% of students failing the examination component of the course.
The major finding of the modelling part of our study was to show that increasing the marks allocated to examinations would have decreased the number of students who passed the course in pharmacology, with 19-25% failing overall if all the marks had been allocated to the examination. In Australia, the allocation of marks to examinations in pharmacology or pharmacology-related courses in nursing programs is variable (85%, University of Adelaide; 70%, University of Queensland; 50%, Edith Cowan University and RMIT University; 40%, University of Tasmania [2-6]). Thus, if the usual pattern of higher marks in ongoing assessment than in examinations occurs in these courses, then for the same marks in ongoing assessment and examinations, a smaller percentage of students would have been successful at Adelaide, where examination marks predominate, than at Tasmania, where marks for ongoing assessment predominate.
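The kind of reallocation modelling described above can be sketched in a few lines. The cohort below is entirely hypothetical (the study's actual data are not reproduced here); it simply mirrors the reported pattern of ongoing-assessment marks exceeding examination marks, and assumes a 50% overall pass mark.

```python
# Sketch of mark-reallocation modelling: each (exam, ongoing) pair is a
# student's two marks out of 100, and the overall mark is their weighted
# average. The cohort data are hypothetical, chosen so that ongoing
# marks exceed exam marks, as reported in the study.

def pass_rate(students, exam_weight, pass_mark=50.0):
    """Percentage of students whose weighted overall mark reaches pass_mark."""
    ongoing_weight = 1.0 - exam_weight
    passed = sum(
        1 for exam, ongoing in students
        if exam * exam_weight + ongoing * ongoing_weight >= pass_mark
    )
    return 100.0 * passed / len(students)

# Hypothetical cohort of six students: (examination mark, ongoing mark).
cohort = [(30, 70), (45, 80), (55, 60), (65, 75), (25, 55), (70, 85)]

for w in (0.0, 0.4, 0.6, 1.0):   # weight given to the examination
    print(f"exam weight {w:.0%}: pass rate {pass_rate(cohort, w):.0f}%")
```

With marks of this shape, shifting weight toward the examination monotonically lowers the pass rate, which is the qualitative behaviour the modelling reports.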
Although our modelling was done for a pharmacology course, the findings will apply to any course in which students have weaker outcomes in examinations than in ongoing assessment, which is common [10-13]. As, to our knowledge, there are no previous studies of either the relationship between marks in examinations and ongoing assessment within an individual course, or of modelling the effect of changing the allocation of marks, for nursing or other students, these are novel findings.
Implications of these results
As marks are higher for ongoing assessment than examinations, the concern is that nursing students who pass the ongoing assessment by obtaining 50% of the allocated marks, but fail the examinations, may not have assimilated the knowledge in pharmacology, or other courses, necessary to continue their program of study. Thus, the disparity between marks in examinations and ongoing assessment needs to be considered, and methods introduced to overcome it.
These findings have implications for those countries (Australia, the UK, the Republic of Ireland, and New Zealand) where performance in undergraduate ongoing assessment is partly used to determine whether nursing students/graduates go on to clinical practice. In Australia, assessment for nursing students is commonly a mixture of ongoing assessment and examinations combined into a Grade Point Average (GPA), and for many nursing courses/programs, most marks come from ongoing assessment. Thus, the nursing program at the university where the present study was undertaken comprises 23 compulsory courses and one elective. Seven of the courses are off-campus practicums, marked as satisfactory or not satisfactory. Of the remaining 16 compulsory courses, 8 have no examinations, and overall 78% of marks are allocated to ongoing assessment and only 22% to examinations. It seems likely that some of the students who failed the examination components at our Australian university but passed the program overall would have failed the NCLEX-RN examination in the US system and not have been registered. Further consideration needs to be given to whether students in Australia who do not undertake, or who fail, examinations are fit to practice.
One possible practical solution to this dilemma, of whether students who pass ongoing assessment but fail the examination should be allowed to pass courses and progress in their studies, would be to make it compulsory for students to pass the examination component of the course. In addition, studies need to be undertaken that consider the relationship between success in undergraduate courses and clinical practice. Another practical solution is to adopt the system used in the USA, where, after completion of an undergraduate course in nursing, success in a national examination, the NCLEX-RN, is a requirement for clinical practice.
Limitations
The major limitation of this study is that it concerns a single course in pharmacology, and some of the findings may not apply to other courses undertaken by nursing or non-nursing students. However, we have previously shown a similar reliance on marks in ongoing assessment for the overall success of nursing students in a bioscience course [22]. The findings of the present study may also apply to any course in which students obtain significantly lower marks in examinations than in ongoing assessment. However, for many courses, we do not know whether marks are lower for examinations than for ongoing assessment for nursing or non-nursing students. Thus, similar analyses need to be undertaken of other courses to determine whether these findings are specific to science courses for nursing students or also apply to other courses for nursing and non-nursing students.