The aim of this paper was to report the design elements of the Rubric Interprofessional Identity Development (RIPID) and the perspectives of students who were involved in the co-construction process. This process led to a user-friendly and learner-centered instrument, supplemented with a manual comprising background information, criteria with definitions and illustrative quotes, exemplars, scoring instructions and a scoring form (Virk et al., 2020). Moreover, the tool builds upon the conceptual framework of G. Cantaert et al. (2022) by defining six attributes of IPI in observable terms based on the authentic experiences of interviewed medical students, as shown in Fig. 2 (Smeets et al., 2022). The gears depict the interrelatedness of the criteria, while the arrow emphasizes the calibration process through which learners construct an interprofessional orientation as part of their learning trajectory (G. Cantaert et al., 2022; Pieschl, 2009). Note that this figure shows the criteria at the individual level, meaning that the two criteria pertaining to context dependency and the team mental model are not visualized.
By involving and collaborating with students, we were also able to determine the perceived level of complexity of using the RIPID, the ideal timing of introduction, applicability in assessing different types of work, suitability for self-assessment, combination with other tools and use for program evaluation. Accordingly, this discussion is dedicated to interpreting these perceptions in relation to the broader literature. First, the appropriateness of the RIPID appears to largely depend on the stage of learning and the learning context. Sufficient training and education on the purpose of using the RIPID may first be needed before we can expect learners to become judges of their own learning (Morton et al., 2021; Chan and Ho, 2019). Learners who are not yet familiar with a formative and gradeless approach to assessment, for instance, may be less receptive to using the rubric (Nkhoma et al., 2020; McMorran et al., 2017). Moreover, learners should develop reflective skills and feedback literacy to be able to evaluate and adjust their learning performance accordingly (Nkhoma et al., 2020; Nieminen and Carless, 2022; Cheng and Chan, 2019). Therefore, we recommend a gradual implementation based on learners’ developing metacognitive abilities by first introducing the theoretical concepts of the RIPID in the early undergraduate years (Nkhoma et al., 2020; Chan and Ho, 2019). Progressively, learners can get to grips with several of the criteria and quality levels through structured reflections and assessment of their own work and that of their peers (P. Kilgour et al., 2020; Matshedisho, 2020; Chan and Ho, 2019).
Although the RIPID has not yet been tested for peer assessment, this approach is known to promote critical thinking, as it allows learners to experience the role of assessor by reviewing and giving feedback on others’ work while also gaining a better understanding of how to improve their own (Double et al., 2020; Pereira et al., 2016b; Erguvan and Aksu Dünya, 2021). This way, the RIPID can be used to support the scoring of peers with an accuracy corresponding to that of faculty (P. Kilgour et al., 2020; Double et al., 2020). Furthermore, collaborative learning may be promoted in which learners with different perspectives reflect on their experiences while giving and receiving feedback (Nkhoma et al., 2020; Gasaymeh, 2011). Collaborative learning is intrinsic to CLEs and is especially relevant when assessing the criteria of team mental model and commitment as a way to promote group reflection on the perceived team dynamics and sense of belonging (Gasaymeh, 2011; Andermo et al., 2022; Bhattacharya et al., 2021). These reflections can enhance team performance when individuals generate a sense of collective awareness and efficacy, which opens up an interesting avenue for further research on interprofessional team dynamics (London et al., 2022; Hayward et al., 2014).
The majority of participants agree that through practice, learners may come to appreciate the value of the RIPID and use it to self-regulate and, over time, self-direct their learning on the basis of authentic experiences, making it an ideal tool for work-integrated learning (WIL) and continuing professional development (CPD) (Villarroel et al., 2018; Brookhart, 2018; Cheng and Chan, 2019). In these contexts, collaboration and professionalism are more realistic and meaningful to assess in relation to a learner’s developing professional identity (Villarroel et al., 2018; Virk et al., 2020). Hence, learners should be motivated to become increasingly involved and self-reliant in their progression from novices to full members of a community of practice (CoP) (Virk et al., 2020; Nkhoma et al., 2020; Cruess et al., 2018). According to Lave and Wenger’s (1991) situated learning theory, a CoP refers to a social network such as a profession that is characterized by a mutual engagement of members with a joint enterprise and a shared repertoire. Becoming a member entails a socialization process influenced by the community’s history, culture and practices, during which individuals gradually develop their skills, knowledge and professional identity (Cruess et al., 2018). Role models such as mentors and supervisors play a central role in this process by transferring knowledge and scaffolding reflection for learners to become full members of a CoP (Kassab et al., 2020; Cruess et al., 2018). In this case, the RIPID can engage learners in monitoring and self-assessing their journey in becoming skilled professionals (Cruess et al., 2018; Ajjawi et al., 2020; Scholtz, 2020). Eventually, these professionals can become self-directed learners who are capable of independently defining and pursuing their learning needs (Cosnefroy and Carré, 2014).
Developing a tool suitable for authentic assessment comes with certain challenges that our participants believe the RIPID could tackle (Ajjawi et al., 2020). First, the RIPID accounts for the fact that learners often rotate through diverse CoPs, which helps determine context dependency and the opportunities for IPL (Scholtz, 2020; Cruess et al., 2018; Panadero and Jonsson, 2020). Second, students believe that some of their clinical supervisors may prefer the traditional method of assessment because of personal motives or time constraints and presume that they do not always have an interprofessional orientation (Scholtz, 2020; Ajjawi et al., 2020). Indeed, while there is a pressing need for continuous authentic assessment during WIL and the opportunities this provides for IPL, concrete educational and research initiatives remain somewhat scarce (Chong et al., 2020; Walland and Shaw, 2022; Brandt, 2018). Furthermore, transitioning professionals are at risk of unconsciously adopting a profession-centric mindset due to the ingrained tendency among CoPs to reproduce practice (Liljedahl et al., 2022; Cruess et al., 2018). This tendency explains the persistence of hierarchical inequalities and power differentials that reinforce competitive behaviors between professions and has been identified as a major challenge for IPE (Liljedahl et al., 2022; Pecukonis, 2014). As a result, learners should develop an awareness that “some experiences are mis-educative”, as argued by Dewey (1986) in his experiential learning theory (p. 25). Similarly, Kolb (2005, p. 205) notes that learners will regularly encounter situations that do not “promote growth-producing experiences” and as such are not meaningful. Based on the works of Erikson (1968), this means that learners will inevitably have to deal with inner conflicts when negotiating their identity due to the discrepancy between the formal and informal curriculum (Lawrence et al., 2018).
Hence, reflection is crucial for learners to make sense of conflicting experiences and to understand how their cognitions and emotions impact their behavior (Lim et al., 2023; Cruess et al., 2018). In this sense, the RIPID can be considered a boundary object, as it attempts to integrate what is learned in formal education and what occurs in the workplace, thereby fostering the development of an IPI across a professional continuum (Ajjawi et al., 2020; Scholtz, 2020).
The third challenge relates to the need for a method that helps learners navigate their identity development, which would be possible with a personalized electronic portfolio (Janssens et al., 2022; Cruess et al., 2018). The uptake of e-portfolios by different programs in higher education has grown considerably in the past few years, as they have proven to be a suitable method for acquiring a holistic judgment of learners’ progression in difficult-to-measure domains that are essential for employment, such as collaboration, reflection and lifelong learning (Walland and Shaw, 2022; Tur et al., 2019). E-portfolios represent a digital repository that allows learners to create and purposefully select a heterogeneous collection of learning artifacts over time to showcase their efforts and achievements (Scholtz, 2020; Walland and Shaw, 2022). These artifacts, which come in different media formats such as feedback forms, blogs, assessment reports, video recordings, presentations and structured reflections, provide evidence of learners’ performance (Peeters and Sexton, 2020; Jönsson et al., 2021; Heeneman et al., 2021). The true value of portfolios lies in their formative use, when students self-regulate their learning by selecting which artifacts to include and by reflecting on their selection. This way, learners are motivated to recognize their strengths and weaknesses and to identify the steps needed to improve their learning (Baas et al., 2020; Sanchez et al., 2020). Reflections can also be revisited, which helps learners understand how their experiences across different contexts have contributed to their professional development and encourages them to reevaluate the assumptions and beliefs that underlay their past behavior (Kassab et al., 2020; Peeters and Sexton, 2020; Walland and Shaw, 2022; Sanchez et al., 2020).
Ideally, a portfolio is introduced at the start of undergraduate education, as this would provide learners with sufficient time to collect evidence of their achievements with which they can demonstrate their preparedness for practice in anticipation of their graduation (Janssens et al., 2022; Sanchez et al., 2020). Based on this portfolio, learners’ developing belief systems, past behaviors, learning strategies, changing goals and reflective capacities can be captured (Janssens et al., 2022; Hong et al., 2021; Walland and Shaw, 2022). Hence, assessment should focus on the portfolio as a process and not solely as a product of learning (Walland and Shaw, 2022). Practically, the RIPID can act as a guideline for setting personal objectives, collecting artifacts, monitoring performance, structuring reflections and planning future learning efforts (Walland and Shaw, 2022; Tur et al., 2019). This way, learners should be able to create and relate each artifact to one or more of the rubric’s criteria and accordingly reflect on the learning progress toward their intended objectives (Scholtz, 2020). Throughout this process, learners should be offered sufficient opportunities to compose their portfolio with artifacts evidencing increasingly complex performances (Baas et al., 2020; Scholtz, 2020). In addition, an e-portfolio may prove to be an ideal medium for collaborative and peer learning through which learners can be provided with rubric-referenced feedback that supports them in making connections between their learning artifacts (Panadero et al., 2019; Walland and Shaw, 2022; Tur et al., 2019). Eventually, learners may become motivated to use the e-portfolio to engage in lifelong learning and in further developing their expertise and collaborative capabilities as well as strengthening their IPI (Panadero and Jonsson, 2020; Tur et al., 2019; Baas et al., 2020; Kassab et al., 2020). 
Nonetheless, future research is needed to explore the effectiveness of such an e-portfolio as well as the requirements in terms of the content, structure and training, as this appears to be a road less traveled (Janssens et al., 2022; Van Ostaeyen et al., 2022; Walland and Shaw, 2022).
Importantly, as advised by one student, implementation of the RIPID should not lead to extra workload and stress, as this may negatively affect students’ wellbeing. Rightfully so, considering that poor student wellbeing has gained considerable attention due to the high prevalence of depression and burnout, which is detrimental to both students’ development and the quality of patient care (Dyrbye et al., 2020; Dyrbye et al., 2019). In particular, grading schemes based on numerical scores appear to be a significant predictor of burnout because students experience unhealthy competition and constant pressure to obtain excellent grades (Joseph et al., 2019; Dyrbye et al., 2019; McMorran and Ragupathi, 2020). Relatedly, students may develop ‘reflection fatigue’ when they are expected to reflect over and over on repetitive themes, which is often not reciprocated by feedback (Trumbo, 2017). As a result, students learn how to game the system by writing superficial yet plausible texts with little reflective effort, freeing up time to focus on their grades (Campbell et al., 2020; Trumbo, 2017). In this sense, grades act as extrinsic motivators and stumbling blocks that lead to unnecessary stress in mastering complex competencies related to collaboration and critical thinking (McMorran and Ragupathi, 2020). In contrast, a gradeless approach such as that proposed with the RIPID may enable a shift toward self-regulated learning by fostering intrinsic motivation and the building of collaborative relationships, thereby enhancing students’ wellbeing (McMorran et al., 2017; McMorran and Ragupathi, 2020; Andermo et al., 2022). Extending the former, it is the quality, not the quantity, of reflective assignments that matters: learners should become engaged in reflection and feel safe in articulating their emotions, inner conflicts and moral dilemmas (Trumbo, 2017; Campbell et al., 2020).
Correspondingly, assessment events should be sufficiently spread across semesters to account for students’ needs and workload (Preston et al., 2020). At each of these events, faculty as well as mentors and peers can support learners by coaching and providing feedback on their thoughts and feelings in a one-on-one or group setting (Trumbo, 2017). This notwithstanding, reflection should not be limited to fragmented instances but needs to proceed across the learning continuum, which reemphasizes the usefulness of a portfolio through which learners’ progression as well as their wellbeing can be unobtrusively monitored (Preston et al., 2020; Dyrbye et al., 2019).
Finally, we gained preliminary insights into the use of the RIPID to evaluate the assessment practices of IPL within a curriculum. Program evaluation can inform faculty about how to design CLEs and how to improve the constructive alignment of their course within the interprofessional curriculum (Gasaymeh, 2011; Panadero and Jonsson, 2020). This way, multiple data points over prolonged periods of time can be aggregated to obtain a comprehensive overview of students’ learning trajectory, which can be used to provide learners with meaningful feedback (Heeneman et al., 2021). Evidently, assessment practices should comprise more than the use of rubrics or questionnaires to account for the different attributes of learning, the different characteristics of assessors, the limitations of individual instruments and the different contexts (Smeets et al., 2022; Krkovic et al., 2018; Heeneman et al., 2021). This approach corresponds to a model of programmatic assessment, an instructional design approach for constructivist learning aimed at embedding assessment within the curriculum as a driver for learning behavior (Heeneman et al., 2021; Preston et al., 2020). Accordingly, a combination of tools providing both quantitative and qualitative reference information can give students material to collect in their portfolio and support their self-reflection (Reinders et al., 2020; Nishizuka, 2022; Pereira et al., 2016a). This combination can include observation instruments, for instance, the one we compared with the RIPID, as well as other rubrics such as the ICAR (Hayward et al., 2014; Curran et al., 2011), Objective Structured Clinical Examinations (OSCE) (Hayes et al., 2018) or self-report questionnaires such as the Extended Professional Identity Scale by Reinders et al. (2020) and the Interprofessional Socialization and Valuing Scale (King et al., 2010; Holden et al., 2015).
Evidently, implementing the RIPID as the backbone of an e-portfolio and as part of a programmatic approach to assessment requires a cultural shift within the institution, achieved through negotiation with course lecturers of different disciplines and through the organization of training, for which further research is needed (McMorran et al., 2017; Andermo et al., 2022; Morton et al., 2021; Nishizuka, 2022; Nkhoma et al., 2020; Gasaymeh, 2011).
Limitations
Despite our comprehensive approach in developing the RIPID, several limitations should be noted. First, the validation process largely focused on undergraduate medical education, thereby necessitating further validation across different educational programs and levels of education. In a practical sense, this would mean that instructions, quotes and exemplars should be adapted to make them equally relevant and authentic for learners from different programs (Curran et al., 2011). These programs should also include nonhealth professionals, such as engineers and legal advisors, who are trained for a job in the healthcare system and whose expertise is equally essential to include in the collaborative process (Frenk et al., 2022). Accordingly, we propose extending our co-construction process to include students, faculty, practitioners and experts to account for their professional culture and language as well as to explore the optimal conditions for implementation and to identify potential barriers (Curran et al., 2011; Morton et al., 2021; Erguvan and Aksu Dünya, 2021).
Second, while we determined the interrater reliability during the first phase, this was not repeated after the second phase. Accordingly, future studies need to examine the interrater as well as intrarater reliability of each criterion on a larger scale while taking into account the learning context and individuals’ characteristics such as education, gender and prior experiences (Cockett and Jackson, 2018; Hayward et al., 2014; Shabani and Panahi, 2020). In addition, the psychometric structure and dimensionality of the RIPID should be determined by calculating the internal consistency and conducting exploratory and confirmatory factor analyses (Flora, 2020; Rogers et al., 2019). Furthermore, information regarding the RIPID’s concurrent and discriminant validity could be obtained by comparing scores with other related measures and across different characteristics of learners (Shabani and Panahi, 2020; Cook et al., 2015). Although our data collection would have allowed the RIPID to be used in combination with the REFLECT rubric, we chose not to do so, as the medical students had not used the RIPID. Sharing the RIPID beforehand is recommended not only for its educational value but also because this would reduce the risk of unstructured and poor reflections that may complicate data analysis (Rogers et al., 2019; Shabani and Panahi, 2020). Nonetheless, more robust statistical procedures should be considered, such as the Many Facet Rasch Model (MFRM), to investigate potential facets impacting students’ scores and identify sources of error, such as those related to raters’ scoring behaviors (Erguvan and Aksu Dünya, 2021; Rogers et al., 2019; Myers et al., 2020).
Third, we have not yet tested the combined use of the RIPID with different types of exemplars. Although current research praises the benefits this combination may bring, there may be a risk of inhibiting divergent thinking when learners overanalyze exemplars instead of reflecting on their own experiences (Panadero et al., 2022; To et al., 2022). Furthermore, the rubric has three quality levels in contrast to the recommended four to six levels, even though this is not exceptional for multidimensional constructs (Pancorbo et al., 2021; Nkhoma et al., 2020; Jönsson and Panadero, 2017). Both aspects warrant more research but also emphasize the importance of adequately training learners in using rubrics and exemplars. Last, our pragmatist approach and use of a theoretical framework inevitably lead to a predisposed view on what is important within the assessment of IPI. For this reason, we consistently attempted to be reflexive and transparent throughout the study process and chose to involve stakeholders within this co-construction process to include their needs and perspectives. Furthermore, we invite other researchers to further add to our understanding and to empirically test this assessment tool (Nieminen et al., 2023). In addition, as with all other assessment tools, the RIPID will presumably not always lead to completely valid inferences on learning. For this reason, we reiterate the importance of employing a programmatic assessment approach for IPL (Virk et al., 2020).