We conducted a quantitative retrospective cohort evaluation of SOP between 2014 and 2017 using registration and participant evaluation data. Ethical approval for this study was granted by the University of Toronto Research Ethics Board.
Intervention: Safer Opioid Prescribing Development and Description
Starting in 2012, we developed SOP using Kern’s model for curriculum development (30) with two particular adaptations for CHPE. First, for needs assessment and program planning, we used the PRECEDE-PROCEED model (31, 32), a comprehensive framework for program design, implementation and evaluation that is commonly used in the fields of public health and health promotion. In using this model, we formalized our conception of education as a health policy intervention as has been done elsewhere (33). Using PRECEDE-PROCEED allowed us to: a) contextualize CHPE within the specific circumstances of the Canadian contemporary opioid epidemic and the range of other policy options for addressing it; b) involve the target audience for the intervention in program planning; and, c) conceptualize and categorize specific implementation and effectiveness outcomes during the initial design stages (Table 2; see also Appendix 2 for Program Logic Model).
Table 2: Mapping Needs to Program Development
| Identified need | How this was addressed in program development |
| --- | --- |
| Prescribed opioids were identified as an important contributor to opioid-related harms, and family physicians were identified as the most common prescribers of opioids (34). | The scientific planning committee included family physicians from a diversity of backgrounds (primary care, chronic pain care, addictions medicine, anesthesia, pharmacology and inner-city medicine). |
| The opioid epidemic was growing in scale and was linked to the practices of the majority of family physicians (35). | The program targeted family physicians, though it was designed to also be relevant to specialist prescribers and to other professionals involved in opioid prescribing (e.g. pharmacists). Nurse practitioners were not identified as a primary target at the time of development because they were not eligible to prescribe opioids in our jurisdiction until early 2017. |
| There was an inequitable distribution of harms, with greater rates of opioid overdoses and deaths in rural and remote communities, places where there might be less access to practice supports and high-quality CHPE programs (36). | The program was to be delivered virtually and in the evenings, outside of typical practice times, to increase accessibility for rural and remote health professionals. |
| Chronic pain was a major learning priority for family physicians (37, 38), and there were important knowledge gaps with respect to opioid prescribing (39). | SOP content focused on opioid prescribing but was contextualized within models of the management of chronic pain as a complex medical condition. |
| There was a persistent influence of the pharmaceutical industry on prescribing practices and thus growing skepticism of opioid educational programs because of possible pharmaceutical industry involvement (19). | Faculty for the program during the study period of interest had no history of involvement with opioid or other pharmaceutical manufacturers. The program received no funding from industry for either development or delivery; it was funded entirely by participant registration fees to ensure sustainability. Fees for physician participants were C$450 for the webinars and C$650 for the workshops; a reduced rate for non-physician and resident participants was C$150 for the webinars and C$200 for the workshops. |
| Existing CHPE programs in the field tended to be based on expert opinion rather than the best available evidence, for example, from systematic reviews or clinical practice guidelines. | Foundational documents included a national clinical practice guideline (40) and a tool developed to support the implementation of the guideline (41). |
| The provincial medical regulator had an active and substantial influence on opioid prescribing behaviour, which in some cases could be an even stronger driver of prescribing behaviour than certain kinds of educational interventions (42). | Some participants were required or advised to attend by their medical regulator following the identification of possible inappropriate controlled substance prescribing; however, program administration and faculty were blinded to participants' regulatory status. |
Our second adaptation to CHPE was to use multiple systematic reviews of continuing medical education (CME) effectiveness to identify and incorporate best practices in education for achieving practice change and improvement in patient outcomes (43-46), including for internet-based CHPE (47). SOP incorporated multiple interventions (13 distinct interventions), was of substantial duration (3-4 months), used a blended-learning approach, was interactive, and identified links between clinical practice and serious health outcomes (48). The program was split into two components: a series of three synchronous evening webinars followed by a one-day in-person workshop, creating a flipped classroom (see Appendix 3: Safer Opioid Prescribing Program Components and Description). Besides addressing accessibility, the virtual format also aimed to make the program scalable to reach a large number of participants simultaneously. The webinars were made synchronous and interactive to help create a virtual community of learning (49), which we hypothesized would help normalize a challenging area of practice and also drive higher levels of completion, a known challenge for online learning programs (50, 51).
The first of the three webinars focused on the multimodal management of chronic pain; the second on the details of opioid prescribing (e.g. patient risk assessment, medication selection, initiation and titration); and the third on situations in which prescribing can be more challenging (e.g. with the elderly, in pregnancy, with people living with opioid use disorder). The workshop addressed challenging cases and communication issues, focusing on skills and competencies particularly suited to a live workshop as compared to a synchronous webinar. Webinar participation was a prerequisite for workshop participation. Each webinar and the workshop had specific pre-work and post-work to prime learning and to facilitate integration into practice, respectively. The program was accredited for a total of 27 credits of learning: 9 credits for the webinars and an additional 18 credits for the workshop.
Study population
The study population included all SOP participants from January 1, 2014 through June 14, 2017. This study period was chosen because the program had fully launched in its current form by January 2014, and the content of the program was substantially redeveloped after June 2017 following the release of new Canadian guidelines for opioid prescribing (52). We included all participants in this study period, regardless of profession, specialty, location, completion status, or any identified medical regulator involvement with respect to controlled substance prescribing. We excluded medical residents and trainees, participants with substantial missing participation data, and participants who attended only the workshop but not the webinars.
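The inclusion and exclusion criteria above can be sketched as a simple cohort filter. This is a minimal illustration in Python (the study used SAS for data manipulation), and the DataFrame and its column names are hypothetical, not the actual registration schema:

```python
import pandas as pd

# Hypothetical registration extract; column names are illustrative only.
participants = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "first_webinar_date": pd.to_datetime(
        ["2014-03-01", "2013-11-15", "2016-05-20", "2015-02-10", None]),
    "is_trainee": [False, False, True, False, False],
    "attended_any_webinar": [True, True, True, False, True],
})

# Study window: fully launched by January 2014; content substantially
# redeveloped after June 14, 2017.
start, end = pd.Timestamp("2014-01-01"), pd.Timestamp("2017-06-14")

cohort = participants[
    participants["first_webinar_date"].between(start, end)  # within study period
    & ~participants["is_trainee"]                           # exclude residents/trainees
    & participants["attended_any_webinar"]                  # exclude workshop-only participants
]
```

Rows with a missing first webinar date fall out of the cohort automatically, since date comparisons against a missing value evaluate to false.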
Data sources
Participation data were collected from the registration system of Continuing Professional Development at the University of Toronto, which administers SOP. Registration data included dates of participation, practice location, profession and specialty. For Ontario physicians, these registration data were linked with gender, graduating medical school and dates of Ontario licensure from the public register of the College of Physicians and Surgeons of Ontario (CPSO). We obtained prior authorization from the CPSO to access these data. The Ontario Medical Association's (OMA) Rurality Index of Ontario (RIO) was used to determine a rurality score based on practice postal code. This Index has been used in other program evaluations as a measure of rurality for Ontario physicians (53). For data pertaining to the rurality of the Ontario family physician population (as a comparator to our participant group), we combined the OMA-generated RIO score at the Census Sub-Division level (54) with the Ontario Physician Human Resources Data Center's list of Physician Counts by Census Sub-Division for 2017 (55). For satisfaction data, we used anonymous program evaluations collected immediately post program.
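The comparator linkage described above amounts to a join of two public data sets on Census Sub-Division. The sketch below illustrates the idea in Python; the subdivision names, RIO scores, physician counts, and the RIO cut-off of 40 are all invented for demonstration and do not reflect the actual OMA or OPHRDC files:

```python
import pandas as pd

# Hypothetical RIO scores by Census Sub-Division (higher = more rural).
rio = pd.DataFrame({
    "census_subdivision": ["Toronto", "Kenora", "Ottawa"],
    "rio_score": [0, 79, 5],
})

# Hypothetical family physician counts by Census Sub-Division.
counts = pd.DataFrame({
    "census_subdivision": ["Toronto", "Kenora", "Ottawa"],
    "family_physicians": [3200, 25, 1100],
})

# Join on Census Sub-Division to attach a rurality score to each count,
# then compute the share of family physicians in rural areas
# (illustrative cut-off: RIO >= 40).
merged = rio.merge(counts, on="census_subdivision")
rural_share = (
    merged.loc[merged["rio_score"] >= 40, "family_physicians"].sum()
    / merged["family_physicians"].sum()
)
```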
Outcome measures
We collected both participation and satisfaction data to assess for the implementation measures of reach, dose, fidelity and participant responsiveness (56). We defined program reach as the total number of participants in any webinar. We further characterized this outcome with information about their profession, specialty and province of practice. Specifically, for Ontario physician participants, we collected data about gender, graduating medical school (international versus domestic), number of years of practice since Ontario licensure to first participation in SOP, medical specialty and rurality. This aligns with Durlak and DuPre's (56) definition of reach as "the percentage of the eligible population who took part in the intervention, and their characteristics" (p. 329). Given the known influence of medical regulation on opioid prescribing (42), we also recorded the status of the Ontario physician participants with respect to controlled substance prescribing and the provincial medical regulatory college (CPSO). We defined those who had a public record of controlled substance prescribing restrictions or a public record of a regulatory hearing regarding their controlled substance prescribing as having medical regulatory involvement.
Hewing closely to the definition of dose as "how much of the original program has been delivered" (56, p. 329), we measured program dose using attendance information for each of the webinars and the workshop. Attendance at each of the webinars and the workshop was coded as a binary (attended / did not attend) outcome.
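Coding dose this way reduces to summing binary attendance flags per participant. A minimal sketch, with made-up attendance data rather than the study's records:

```python
import pandas as pd

# Hypothetical per-participant attendance flags (1 = attended, 0 = did not).
attendance = pd.DataFrame({
    "id": [1, 2, 3],
    "webinar_1": [1, 1, 0],
    "webinar_2": [1, 0, 1],
    "webinar_3": [1, 1, 0],
    "workshop":  [1, 0, 0],
})

components = ["webinar_1", "webinar_2", "webinar_3", "workshop"]

# Dose = how much of the original program was delivered, here the count of
# components attended out of four.
attendance["dose"] = attendance[components].sum(axis=1)
```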
Since reducing bias, and specifically bias relating to opioid manufacturer involvement, was a key element of the intended program design, we interpreted ratings of program bias as a measure of program fidelity, namely "the extent to which the innovation corresponds to the originally intended program" (56, p. 329). Likewise, the program was intentionally designed to directly address identified clinical needs of practicing physicians, which centered on appropriate chronic pain management alongside opioid prescribing. Thus, we interpreted participant-provided relevance-to-practice ratings as a measure of program fidelity. This was measured using a 5-point Likert scale where 1 is 'not at all relevant' and 5 is 'very relevant' for the webinars, and a 7-point Likert scale where 1 is 'not at all relevant' and 7 is 'very relevant' for the workshops.
Finally, we used anonymized, participant-provided ratings of adequacy of active learning time as a measure of participant responsiveness, which can be defined as "the degree to which the program stimulates the interest or holds the attention of participants" (56, p. 329). This was measured using a 5-point Likert scale where 1 is 'none' and 5 is 'just enough' for the webinars, and a 7-point Likert scale where 1 is 'none' and 7 is 'just enough' for the workshops.
These last three measures for fidelity and participant responsiveness were collected anonymously from participants post-intervention and so could not be linked to participant demographics.
Data analysis
We described the sample using descriptive statistics: mean, median, standard deviation, minimum, and maximum for continuous measures, and frequency and percentage for categorical measures. We assessed associations between categorical variables using chi-squared and Fisher's exact tests, and associations between binary and continuous measures using two-sample t-tests. We used logistic regression to assess the association between participant factors (gender, years in practice, webinar completion, regulatory college status, setting, and international medical graduate (IMG) versus Canadian medical graduate (CMG) status) and the likelihood of workshop participation, and the Hosmer–Lemeshow test to assess the goodness of fit of the logistic regression model. We used analysis of covariance to assess variability in adequacy of active learning time and clinical relevance across different group sizes and program types (webinar or workshop). All tests were two-sided, and p < 0.05 was considered statistically significant. We used the statistical software SAS 9.4 for data manipulation and statistical analysis.
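The bivariate tests described above can be illustrated in Python with SciPy (the study itself used SAS 9.4). The contingency table and the simulated years-in-practice values below are invented for demonstration, not study data:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: workshop participation (rows) by gender (columns).
table = np.array([[40, 35],
                  [60, 65]])

# Chi-squared test of association between two categorical variables.
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test, preferred when expected cell counts are small.
odds_ratio, p_fisher = stats.fisher_exact(table)

# Two-sample t-test comparing a continuous measure (e.g. years in practice)
# between workshop completers and non-completers (simulated values).
rng = np.random.default_rng(0)
completers = rng.normal(15, 5, 50)
non_completers = rng.normal(18, 5, 50)
t_stat, p_t = stats.ttest_ind(completers, non_completers)
```

With a two-sided threshold of p < 0.05, each p-value is compared directly against 0.05 to declare statistical significance.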