Rubric development
This study involved adapting an online rubric, designed by the University of Wisconsin, which covered students’ participatory experiences of, and processes used during, teamwork, as well as the associated learning activities such as “conduct research” and “solve problems”. A group of academic staff with different areas of expertise, but with comparable experience in delivering PBL, developed a draft rubric from this source (see Appendix A). The adaptation involved modifying the rubric in light of technical and pedagogical aspects of its design and implementation specific to our objectives. For example, we changed the scoring system to suit our project, and we refocused the skills being assessed to include professionalism, work ethic, and communication skills. The draft rubric contained a performance category for teamwork, four scoring or achievement levels (exemplary, proficient, marginal and unacceptable) and their corresponding criteria or descriptors.
Data collection and instrument
The study was conducted in the School of Engineering, RMIT University, Melbourne, Australia, from July to December 2018. The participants, including both students and academic staff, were invited to participate in a survey using an anonymous written questionnaire. The questionnaire consisted of open-ended questions specifically designed to gather feedback aligned with the research questions. The following are examples of survey questions relevant to the findings of this paper:
- What is your overall opinion of the example rubric?
- How did you find the definitions of what key performance looks like at each level, particularly for excellent performance? What are the qualities that distinguish an excellent response from other levels?
- How useful do you think the rubrics are in providing meaningful and timely feedback, particularly in terms of identifying areas needing work and strengths for improving performance?
- How could the above rubrics be made better in terms of enhancing their role as a feedback tool?
Participants
The student participants were drawn from the first-, second- and third-year Biomedical Engineering design courses/subjects (i.e., three separate courses/subjects) and a first-year course in Computer-Aided Design in Mechanical, Automotive and Advanced Manufacturing and Mechatronics Engineering. These courses were specifically chosen because they place significant emphasis on teamwork in assessments, with at least 25% of the assessment related to teamwork. In these courses, students are asked to form groups (three to five members per group) to carry out design projects (see Table 1 for the proportion of assessment assigned to teamwork in each course).
Table 1. Design courses in the study, expected enrolment, and proportion of grade allocated to teamwork assessment
| Course/Subject title | Program(s)/Degree(s) | Expected enrolment | % of group assessment |
|---|---|---|---|
| Introduction to Biomedical Engineering and Design | Biomedical Engineering | 35 | 60 |
| Biomedical Computer Aided Design | Biomedical Engineering | 30 | 25 |
| Biomedical Engineering Design and Practice 2 | Biomedical Engineering | 27 | 70 |
| Computer Aided Design | Engineering Degrees (Mechanical, Automotive, Advanced Manufacturing and Mechatronics) | 150 | 25 |
The student surveys were administered by a research officer, who was not teaching the courses, at the end of a two-hour class in Week 5 of a twelve-week teaching period. The survey took five to ten minutes to complete so as not to disrupt the classes. If the students decided to take part, they were asked to anonymously complete a checklist of basic demographic information and provide responses to a set of open-ended questions relating to the rubric, which was provided with the survey. A total of 152 students (62.8% of those enrolled across the four courses in Table 1) participated in the survey (119 males and 33 females), with an average age of 19.9 ± 3.3 years (range 17-41 years); their responses formed the basis of the main analysis.
All staff members in the School of Engineering, particularly those involved in teaching the relevant courses, were invited to participate in the survey. To conduct the staff survey, our research team made initial contact with them either face-to-face or by email. We provided them with a briefing on the study and sent each interested staff member a copy of an information sheet and consent form. Staff members who chose to participate were then asked to complete a questionnaire and return it to the research team. A total of 12 staff participated in the survey: 11 were permanent staff and one held a casual position. On average, they had nine years of experience in teaching and coordinating courses at the tertiary level.
Ethics
Prior to conducting the survey, approval was obtained from the RMIT Human Research Ethics Committee (SEHAPP 47-18). Each participant was provided with the standard Participant Information Sheet and Consent Form, and written consent was obtained via a signed consent form. This ensured that ethical guidelines were followed and confirmed that participants were well informed about the study and voluntarily agreed to take part. We also emphasised to the students that their participation was voluntary and that choosing to withdraw from the study would not affect their assessment grades or progress. As a small incentive, they were offered the opportunity to enter a draw for an iPad or book and movie vouchers.
Data coding and analysis
The project assistant entered all data for both the student and staff surveys into Excel spreadsheets. Survey data were then analysed using the NVivo 12 software package. Based on the approach used by Saldana [46], the transcriptions were analysed in an iterative process using the following steps. First, we read through all the participants’ responses to familiarise ourselves with the data. Then, we conducted a preliminary analysis during which codes were generated [47]. Our research team reread the codes to identify emergent themes related to the codes. We continuously reviewed the major themes to determine whether they reflected the codes accurately. After that, we analysed the themes to identify the key findings.
Findings
For the initial reading of the data, simple coding was used. The participants’ overall perceptions of the rubric were coded as a simple positive/negative response. How the participants found the definitions of the different achievement levels was coded in terms of whether they found the definitions useful and whether the levels reflected their own understanding of what, for example, an “excellent” achievement was. The use of the rubric to provide feedback was coded in terms of whether the participants found the rubric useful or not. A deeper coding was then performed to provide a more detailed analysis. Through this analysis, 55.7% of students and 58% of staff identified a number of aspects of the rubric that could be improved. These are presented below.
Key elements of teamwork rubrics for enhancing learning and performance
Both staff and students commented that the clarity and structure of the rubric could be improved through clear guidelines, clear performance categories, objective criteria, a well-designed scoring system, and robustness. These perceptions are presented below.
Clear guidelines
Some students (10%) and academic staff (17%) indicated that the rubric needed clearer instructions and more specific measures. For example, some students stated:
There must be an appendix or detailed note to explain each and every part of the rubric more clearly. [Surveyed student #35]
Easy to understand what I am marked on, could use some more elaboration for a few of them. [Surveyed student #57]
Pretty useful but is also a bit vague, maybe a teacher should go through the rubric to be more specific about the expectations required to get the highest marks. [Surveyed student #122]
Staff members commented that the rubric lacked objective measures, which could lead students to perceive it as too abstract and subjective:
Overall, it is good and detailed. Some of the items can be more specific and measurable, e.g. 'sometimes', 'occasionally', can be quantified in some time frame. My experience with students is that they could like to see few examples under each category or else, they feel a bit disconnect thinking it is too abstract or subjective. [Academic #9]
Our participants, both staff and students, highlighted the importance of providing clearer definitions for the items being measured. This called for more detailed and explicit guidance within the rubric, with suggestions to include appendices or notes that further explain the criteria. Feedback indicated that certain aspects of the rubric were perceived as too abstract or vague, leading to requests for more objective, quantifiable measures and less ambiguous language. Both students and staff expressed the need for concrete examples under each criterion to better understand and meet the expectations set in the rubric.
Clear performance categories
Some students’ responses (10%) revealed that the rubric needed to be improved in terms of its performance categories. For example, some students stated:
Combine professionalism into ethic and group [Surveyed student #100]
All of them are good but professionalism would be part of other categories like work ethic and communication. [Surveyed student #144]
In a similar vein, 25% of the staff suggested that some of the performance categories in the rubric needed to be condensed to avoid repetition; for example, one staff member stated:
The group/teamwork section should go under ethics instead as it seems to always have a positive attitude/impression. [Academic #10]
In addition, another staff member suggested that, to ensure the rubric is an effective tool for cultivating the targeted professional competencies such as work ethic, communication and problem solving, a few examples should be provided for each category as guides:
Look good. My experience with students is that they would like to see few examples under each category or else, they feel a bit disconnect thinking it is too abstract or subjective. [Academic #7]
Two out of the twelve academic staff (16.7%) emphasised that the quality of what the student groups produce should also form an assessment category in the rubric:
Perhaps contain a section assessing the quality of the material they produce. I have found that most group hostility surrounds individuals within a group having different opinions on quality. Inevitably, a few members end up doing more work than the others to lift the final product to a quality they deem acceptable. [Academic #2]
Analysis of these comments revealed several key areas for improvement. Both students and some staff members suggested that certain categories in the rubric, such as 'professionalism', 'ethics' and 'group/teamwork', overlap and could be combined. Staff members also recommended including criteria for assessing the quality of group outputs and the dynamics within the team. These comments are of interest as they relate to the potential interrelation between team functioning and the quality of the final product.
Objective criteria
Some students (32%) indicated that the rubric served its purpose well. They noted that it guided them on how team members should work together and behave professionally. They also thought that the scoring/achievement scale was useful in helping them to achieve the desired grade. However, some students (15.6%) found that the descriptors needed further clarification:
Sometimes key performance can be hard to define thus unclear. Good descriptors for basics but less definitions for high marks. [Surveyed student #22]
For the general criteria, it is okay, however in certain areas, its specific like individual work, it needs more clarity [Surveyed student #50]
Key performance includes staying on task, respectful etc. Excellent performance includes hard working, actively working with team members. [Surveyed student #101]
Some staff members (25%) thought the rubric provided a clear outline on how students were to be assessed and what performance was expected of them:
Seems ok to me, particularly given there is some range on the number awarded to each. The advice I was given was to ensure the descriptions contained adverbs and adjectives which were demonstratable. This eliminates room for debate from students over their marks. I think it looks good for student assessment. [Academic #2]
The feedback indicated the importance of developing a rubric with clear, specific and demonstrable criteria that can objectively assess both general competencies and specific skills or tasks. Students expressed a need for more detailed guidance on specific aspects of performance, particularly those that distinguish average from excellent work. Academic staff emphasised the importance of using clear, demonstrable adjectives and adverbs in the rubric. This approach was considered key to minimising subjectivity and debates over grading, highlighting a preference for tangible and observable criteria in assessments.
Scoring system
Two staff members (17%) and 18 students (12%) thought that the scoring ranges for the quality levels of each assessment criterion should be more even. In addition, some students suggested that criteria with higher weight or value should have more detailed descriptions:
Some ranges in my opinion are too big (24-30) to fit one description. Make the grade out of 10 or increase amounts of intervals. [Surveyed student #39]
Each category should be given a separate score, the teamwork section had 4 sections but only one score, individual scores would help. [Surveyed student #23]
Perhaps keep it concise and reduce the weighting on some areas to keep it equal. [Surveyed student #49]
This indicates, again, that more detailed descriptions of each achievement level were required. Similarly, staff members thought that the rubric should provide a clearer outline of all levels of performance. As one staff member commented:
It does not actually tell the student (group) why a particular grade was awarded and the difference between say a 6 to 15 for group/teamwork. A smaller range associated with particular behaviours. e.g. where there is a range of say 6-15 for a particular subset within one aspect. [Academic #3]
Another staff member stated:
I think that the ‘exemplary’ category should highlight qualities of someone who has gone above and beyond. Otherwise, many people will end up in this bracket just by doing the work and complying in all these areas (even if they haven't gone above and beyond). This is the issue I currently have with my rubrics and end up with lots of HDs just because they comply, e.g. in ‘communication’ category, could add something like 'helps others who struggle communicating and problem solving, actively seeks and suggests solutions that show critical thinking and innovation. [Academic #8]
In addition to wanting better descriptors for each achievement level, students (16%) and staff (25%) also recommended that the descriptors be as simple and specific as possible. They found the many layers of descriptors within an assessment task confusing, which made it difficult to determine the achievement level of a single group:
It is a good step but the line between each section is vague and can be interpreted differently. [Surveyed student #99]
I don't think there should be divisions within proficiency as this makes it harder to score. [Academic #5]
In terms of the scoring system in the rubric, several suggestions were drawn from the feedback:
- Scoring accuracy: narrowing the scoring ranges to allow for greater precision in assessment, providing separate scores for individual criteria to offer clearer feedback on specific areas of performance, and ensuring balanced weighting across categories to maintain fairness in the evaluation.
- Clarity and distinctiveness: defining exemplary performance as going above and beyond basic compliance.
- Simplifying scoring levels: reducing the number of divisions within each proficiency level to avoid complicating the scoring process.
Robustness
Two staff members (17%) indicated that, to avoid students arguing over their marks, the rubric required a more precise summative rating scale:
Seems okay to me, particularly given there is some range on the number awarded to each. The advice I was given was to ensure the descriptions contained adverbs and adjectives which were demonstratable. This eliminates room for debate from students over their marks. [Academic #2]
It does not actually tell the student ([or] group) why a particular grade was awarded and the difference between say a 6 and 15 for group/team work. [Academic #3]
The insights from academic staff directly involved in the rubric’s application underline how essential it is to develop a robust assessment tool. Their feedback highlighted the need for precise and observable descriptors that eliminate ambiguity. They also recommended providing clear and explicit explanations for each performance level, thereby offering students and groups a clear understanding of expectations and justifications for their assessments.
Key features of rubrics in providing feedback to enhance student performance
Just over half of the respondents (52% of students and 50% of staff) indicated that the rubric could be a source and mechanism for providing feedback to improve student performance. However, some of the staff and students suggested additional features to complement the rubric. These could be grouped into: incorporating self- and peer-assessment; descriptions that guide subsequent improvement; spaces for specific written comments; and timely marking and feedback. These findings are presented below.
Incorporate self- and peer-assessment
A number of students (17%) and staff (25%) suggested including self- and peer-assessments in the rubric to allow students to assess their own contributions as well as those of their peers.
One student also suggested allowing peers to provide each other with feedback to ensure all team members were assessed clearly and fairly, and that each student received a fair mark based on the whole team’s achievement:
Giving students opportunity to give feedback about other students but making sure that everyone gets a fair mark. [Surveyed student #89]
One academic, who had prior experience with peer assessment, noted the importance of peer-assessment in encouraging students to participate in group tasks and take the initiative to help others.
These insights revealed key features of rubrics related to peer assessment; they should enable students to provide feedback about their peers, ensure fairness and equity in marking, and encourage active participation and mutual support.
Need for descriptions that guide subsequent improvement
Twenty-five students (16%) believed that the rubric was not sufficiently explanatory in helping them identify their areas of strength and weakness, and that it had limited use in guiding them to improve their learning. For example, some students commented:
Not very good. They are good for marking and identifying areas of strength or weakness; However, they give very vague and generic feedback. [Surveyed student #30]
Pretty useful but it also a bit vague, maybe a teacher should go through the rubric to be more specific about the expectations required to get the highest marks. [Surveyed student #104]
If the feedback points our something that need to be improved, I will be able to see the problem right away and to do better next time. [Surveyed student #66]
Students identified several key features of rubrics crucial for feedback that enhances their performance. They are looking for rubrics with detailed and specific descriptors that clearly outline expectations for each level of achievement, particularly for higher performance brackets. They also appreciate feedback that helps them identify and subsequently address areas for improvement.
Provide specific written comments
Twenty students (13%) and two staff members (17%) indicated that supplementing the rubric with written feedback could have a positive impact on students’ learning. As one student stated:
Put a footnote which specifies what needs to be improved on. [Surveyed student #21]
Another added:
The rubric is too broad and may require additional comments to articulate on feedback. [Surveyed student #135]
One staff member also argued that the rubric needed written comments to address the grading for the overall assessment of the tasks related to teamwork and PBL:
I think it needs to be supplemented by overall feedback section that provides specific example on the overall assessment of the tasks. [Academic #7]
To effectively support student performance improvement, rubrics should provide clear, detailed and actionable feedback that goes beyond numerical or categorical grades. It is crucial to articulate specific areas for improvement, provide contextual examples and offer holistic comments. These features ensure that rubrics serve not only as tools for assessment but also as valuable resources for learning and development.
Marking time and feedback
An important finding was the suggestion from some students (5%) that, after assignments have been marked, the feedback and grade should be returned to them promptly; they noted that this would help their learning by enabling them to reflect on their weaknesses and strengths while the whole event was still fresh in their minds. One student commented:
They would be useful if they were sent out a couple days after submitting work while the stuff you did is still fresh in your head, else you forget everything. [Surveyed student #99]
Similarly, academic staff (17%) acknowledged that the marked assignment could provide meaningful feedback on areas needing more work and areas already at a high standard, but noted that the timing was important:
…but I guess this depends on when they [the students] receive the outcomes. [Academic #3]
In summary, these insights highlighted that timing is a critical aspect of an effective rubric-based feedback system. Timely feedback, especially when coordinated with the release of grades, can greatly enhance the learning experience. Providing feedback while students’ memories of their work are still fresh allows them to reflect on their performance and draw meaningful connections with their work.