The systematic review will be conducted in accordance with the Joanna Briggs Institute (JBI) methodology for Mixed Methods Systematic Reviews (MMSR) (35) and the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) (36) [see Additional file 1]. To find relevant research evidence, the population, phenomenon of interest, and context (PIC) format has been applied to formulate the review questions, devise a search strategy, and guide the study selection criteria. The project will use an iterative process undertaken primarily by AW, with input from research staff and the larger research team according to their areas of expertise. A written record will be organized according to the Matrix Method (37).
Search Strategy
The proposed search will be conducted in accordance with the Peer Review of Electronic Search Strategies (PRESS) guideline for systematic reviews (29) to achieve a balance of recall and precision. With academic librarian support (DD), the search strategy, including all identified keywords, will be adapted for each database. An initial limited search of two databases (MEDLINE and CINAHL) was undertaken to identify articles on the topic. Because controlled vocabulary is unique to each database, keywords (rather than subject headings) were identified as the most reliable approach for suitable recall. We determined that keyword searching, with the generation of all synonyms, plurals, and alternate spellings (e.g., centred and centered) for the phenomenon of interest (i.e., PROM, PREM, and implementation) [see Additional file 2], produced a high yield. Titles, abstracts, and keywords of relevant articles were used to help identify synonyms for each keyword. Although it is common to include a third keyword representing the population or context elements of the PIC question, we found that doing so unduly restricted the records retrieved. We therefore decided to exclude a third concept from the search and instead apply those elements as part of our selection criteria. Findings from this preliminary search informed the full search strategy for the project.
The evidence to answer our questions will be retrieved by searching the published literature from January 2009 to December 2019 using eight databases with the EBSCO platform, covering nursing, allied health, health sciences, psychology, physical therapy, occupational health, nutrition, kinesiology, and evidence-based reviews (i.e., MEDLINE, CINAHL, PsycINFO, Web of Science, Embase, SPORTDiscus, Evidence-Based Medicine Reviews, and ProQuest Dissertations and Theses). The reason for this date range was two-fold: first, to keep the yield of records manageable, and second, to synthesize evidence reflecting the current state of the field. Limiters will include: (a) scholarly/peer-reviewed citations, (b) English language, and (c) the date range. The two keyword concepts will be combined with appropriate truncation (e.g., asterisks) to capture multiple variants of the same root or stem. In some circumstances, the Boolean “NEAR” or proximity operators will be used to link terms that may not be adjacent (e.g., barrier* n4 implement* and facilitat* n4 implement*) (30). All identified search terms will be linked using Boolean operators: the “OR” operator will join the terms within each concept as a union, expanding and broadening the search, and the concept searches will then be combined with “AND” to narrow the results (30). Search histories [see example in Additional file 3] from all databases will be retained.
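To illustrate how the truncated keywords, proximity operators, and Boolean logic described above come together in a single query, the following Python sketch assembles a search string in the style described. The keyword lists are abbreviated, hypothetical placeholders; the full set of synonyms, plurals, and alternate spellings appears in Additional file 2, and the string is adapted to each database's syntax.

```python
# Minimal sketch of assembling the Boolean search string.
# The term lists are abbreviated placeholders; the full keyword set is in
# Additional file 2, and the string is adapted to each database's syntax.

# Concept 1: patient-centred measures (PROMs and PREMs)
measure_terms = [
    '"patient reported outcome*"',
    '"patient reported experience*"',
    "PROM*",
    "PREM*",
]

# Concept 2: implementation, including proximity-linked variants
implementation_terms = [
    "implement*",
    "barrier* n4 implement*",
    "facilitat* n4 implement*",
]


def or_block(terms):
    """Join one concept's terms with OR and wrap them in parentheses."""
    return "(" + " OR ".join(terms) + ")"


# The two concept blocks are combined with AND to narrow the search.
search_string = or_block(measure_terms) + " AND " + or_block(implementation_terms)
print(search_string)
```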
The database search will be supplemented with other search strategies to ensure comprehensiveness and to counterbalance the limits of keyword searching (30). These include footnote chasing (i.e., scanning the references of “keeper” articles), author searching for those who publish most frequently in the field, and backward/forward citation searching of related systematic reviews and other seminal articles (e.g., large studies with numerous publications or those frequently cited) (30). The ProQuest database will be used to search for eligible dissertations and theses. Upon completion of the database search, we will search the unpublished literature to lessen publication bias and to retrieve difficult-to-find literature and information about implementation projects. To limit bias, we will use two approaches. First, a judicious examination of the grey literature (e.g., research reports, practice guidelines, and user guides) will be conducted using the Google© search engine in 2020, from a university internet protocol address, using the most cited search terms for the PIC concepts identified during our database searches. Second, we will access websites of credible organizations, agencies, and associations that may produce and publish knowledge translation documents supporting PCM implementation (30); these websites will be identified by seeking the opinions of experts in the field. A PRISMA diagram (28) will be created to document the search and study selection process and show how the included studies were determined.
Types of Evidence
The MMSR methodology combines diverse types of evidence to create breadth and depth of understanding of the review questions and to inform practice and policy. Including all available evidence, regardless of type, allows the degree of agreement or discrepancy between sources to be examined and the findings to be validated or triangulated. Various aspects of a phenomenon of interest can be examined, and the available data can contextualize the findings (35). Furthermore, given that implementation at the point-of-care draws on a variety of knowledge to inform practice, diverse evidence types will be sought. This review will consider peer-reviewed literature: quantitative, qualitative, and mixed methods studies, in addition to reviews (i.e., systematic and literature), organizational implementation projects (e.g., quality improvement, knowledge translation projects, implementation of PROMs, program evaluation, or pilot/feasibility projects), and expert opinion (e.g., an individual, group, or learned body drawing on practical experience and understanding of the field). We will include not only evidence on the effectiveness of strategies for implementing PCM (‘knowing what’ evidence), but also evidence related to subjective experiences, attitudes, behaviours, and/or the accepted discourse at the time of practice (‘knowing how’ evidence) (34). Opinion-based evidence will be included only when derived from experts in the field through some form of consensus-building process (e.g., conference, think tank, special interest group, panel, or current discourse) (34). The inclusion of unpublished grey literature is a distinctive feature of a systematic review of this nature.
Selection Criteria
The next step in finding relevant evidence is to define the selection criteria. To be included in the review, the literature needs to meet the following eligibility criteria (see Table 3): (a) HCPs in a clinical setting; (b) information pertaining to PROM or PREM implementation (e.g., HCP experiences, strategies for integrating them into practice, influential factors, or attitudes towards use); (c) data at the individual level; and (d) any study design. Articles will be excluded if they (see Table 3): (a) focus exclusively on decision-makers (e.g., managers) or patients; (b) pertain to when and why PROMs and PREMs are used or to the impact of their use; (c) concern instrument development, testing, and selection; or (d) concern implementation of aggregated data. Furthermore, to determine inclusion of studies in the review, we will apply the criteria in a specific order (Gough): after each citation is confirmed as written in English and within the date limit, we will verify it meets the study design criterion, then the phenomenon of interest criterion, and finally the population and context criteria (see the sketch following Table 3).
Table 3
Selection Criteria

Population (P)
Inclusion:
• Healthcare providers
Exclusion:
• Decision-makers exclusively
• Patients exclusively

Phenomena of interest (I)
Inclusion: Studies about PREMs or PROMs and
• experiences of applying or implementing them
• methods or strategies for integrating and interpreting them (e.g., processes, logistics, tools, or workflow)
• factors (barriers and facilitators) influencing implementation
• views or attitudes towards their use
Exclusion: Studies about PREMs or PROMs and
• impact or effectiveness
• mechanisms by which they work (e.g., patient-provider communication)
• ways they are used (e.g., screening, assessment, improving communication)
• measurement development, testing, and selection
• suitability for specific patient populations
• a focus solely on patient-centred care

Context (C)
Inclusion: Studies concerning data at the individual (micro) level with patients, such as:
• routine clinical care
• point-of-care
• everyday clinical practice
• directly informing patient care or care planning
• clinical decision-making
• real-world application
Exclusion: Studies concerning aggregated data for purposes such as:
• performance indicators or accreditation
• value-based medicine
• quality improvement or quality control
• resource allocation, service provision, and economic evaluation
• clinical registries
• reimbursement and payer issues
• benchmarking
• drug development

Study Design
Inclusion: Published scholarly work including research, pilot or feasibility projects, evidence-based implementation/quality improvement, systematic reviews, literature reviews, and expert opinion
Exclusion: Published literature such as editorials, opinion or position papers, commentaries, study protocols, conference proceedings or abstracts, and theory; insufficient information reported on study design
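To make this ordered application of the criteria concrete, the sketch below screens a single citation against the Table 3 criteria in the stated sequence. The record fields and yes/no flags are hypothetical simplifications; in the review itself these judgements are made by the reviewers.

```python
# Minimal sketch of the ordered application of the selection criteria.
# The record fields and boolean flags are hypothetical simplifications:
# in the actual review these judgements are made by human reviewers.

def screen(record: dict) -> tuple:
    """Apply the selection criteria in the order described in the protocol."""
    # 1. Language and date limits
    if record["language"] != "English" or not (2009 <= record["year"] <= 2019):
        return False, "outside language/date limits"
    # 2. Study design criterion (Table 3, Study Design row)
    if not record["eligible_design"]:
        return False, "ineligible study design"
    # 3. Phenomenon of interest: PROM/PREM implementation
    if not record["about_prom_prem_implementation"]:
        return False, "not about PROM/PREM implementation"
    # 4. Population and context: HCPs, individual-level data in clinical care
    if not (record["hcp_population"] and record["individual_level_data"]):
        return False, "population/context criteria not met"
    return True, "include"


# Hypothetical example record
example = {
    "language": "English",
    "year": 2015,
    "eligible_design": True,
    "about_prom_prem_implementation": True,
    "hcp_population": True,
    "individual_level_data": True,
}
print(screen(example))  # (True, 'include')
```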
Study Selection Procedure
Following the completion of these searches, all identified citations will be loaded into EndNote X9© (Version 9.3.3) (38) and duplicates removed. AW will provide a thorough orientation for those involved in the selection process to ensure rigor. Given the large number of records anticipated to be retrieved (greater than 20,000), the titles and abstracts of the first 100 records will be screened by two independent reviewers against the inclusion criteria and classified as relevant, not relevant, or possibly relevant. Following that, AD will screen all remaining records to determine relevancy, and AW will rescreen those identified as potentially relevant to confirm eligibility. To check that the selection criteria have been applied consistently, AW will run keyword searches (e.g., outcome measure, patient outcome, patient-reported) on the titles of records deemed irrelevant in the project EndNote© library and reapply the selection criteria to any matches. Finally, all relevant studies will be retrieved in full text, and their citation details will be reviewed independently by AW and AD against the selection criteria to confirm inclusion. Reasons for excluding full-text studies will be recorded. Any disagreements that arise between the reviewers at any stage of the study selection process will be resolved through discussion (33, 37).
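Duplicate removal will be handled within EndNote. Purely as an illustration of the underlying logic, the sketch below flags records that share a DOI or a normalized title; the citations shown are invented examples.

```python
import re

# Illustration only: duplicate removal is performed in EndNote X9. This
# sketch shows one simple way duplicates can be flagged, by matching on DOI
# or on a normalized (lower-cased, punctuation-stripped) title.

def normalize_title(title):
    """Lower-case a title and strip punctuation and surrounding whitespace."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def deduplicate(records):
    """Keep the first occurrence of each DOI or normalized title."""
    seen, unique = set(), []
    for rec in records:
        keys = {k for k in (rec.get("doi"), normalize_title(rec.get("title", ""))) if k}
        if keys & seen:
            continue  # an identifier was already seen: treat as a duplicate
        seen |= keys
        unique.append(rec)
    return unique


citations = [
    {"doi": "10.1000/example", "title": "Implementing PROMs in practice"},
    {"doi": "10.1000/example", "title": "Implementing PROMs in practice."},
    {"doi": None, "title": "Implementing PROMs in Practice"},
]
print(len(deduplicate(citations)))  # 1: the second and third records are duplicates
```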
Assessment of Methodological Quality
Critical appraisal of the included studies will determine the level of evidence and methodological quality as the basis for our confidence to act on the recommendations from the synthesis. Two independent reviewers, blinded to each other’s assessments, will retrieve all included citations, and any applicable supplementary files, in full text for assessment. AW will provide a thorough orientation to the process and the appraisal checklists for the primary reviewers (AD, DG, SH, FH, LE, SL, LM, and two undergraduate research assistants) to ensure rigor. Authors will be contacted to request missing or additional data for clarification, where required. Any disagreements that arise between reviewers will be resolved through consensus discussion among select team members or by a blinded third reviewer.
We will use the following standardized JBI critical appraisal instruments to assess quality (see Table 4): systematic review, qualitative, cross-sectional, prevalence, case report, and text and opinion (27, 39). To evaluate organizational implementation projects, three questions from the JBI case report checklist (40) were combined with questions from the Johns Hopkins organizational experience checklist for non-research evidence (41) and questions for quality improvement interventions (42, 43). Similarly, the JBI checklists for analytical cross-sectional (40) and prevalence survey (44, 45) studies were modified to include four additional questions about the research questions, research methods, ethical approval, and justified conclusions (46–48). JBI checklists do not exist for mixed methods studies or literature reviews, so AW conducted an extensive review of the literature to locate other standardized tools with high reliability and validity. Based on a parsimonious set of core criteria, the mixed methods checklist focuses on both the effective integration of the quantitative and qualitative components of a study and the provision of a rationale for using a mixed methods design (49–51). The checklist used for literature reviews will be based on the Johns Hopkins form for non-research evidence (41).
Table 4
Summary of the Critical Appraisal Checklists by Research Design and Level of Evidence

Type of Evidence | Critical Appraisal | Level of Evidence
Systematic review | JBI Systematic Review Appraisal Tool (52) | 1
Qualitative | JBI Qualitative Appraisal Tool (53) | 3
Analytical cross-sectional | JBI Analytical Cross-Sectional Appraisal Tool with others (above) | 3
Survey | JBI Prevalence Appraisal Tool in combination with others (above) | 3
Mixed methods | Mixed Methods Appraisal Tool (51) | 3
Organizational implementation | JBI Case Report Appraisal Tool with others (above) | 4
Expert opinion | JBI Text and Opinion Appraisal Tool (39) | 5
Literature review | JH Non-research Evidence (Literature Review) Appraisal Tool (41) | 5
All checklists contain a series of criteria (8 to 15 questions) scored as “met,” “not met,” or “unclear” and, in some instances, “not applicable.” Following critical appraisal, each study will be given a percentage score; higher scores indicate that a greater proportion of the quality criteria were met. The research team decided not to set a quality threshold for excluding evidence. Rather, once the data are synthesized, we will determine our confidence to act based on the quality and level of the evidence. A modified version of the JBI levels of evidence for meaningfulness (54, 55) will be used, as it best aligns with our review questions and the nature of the evidence. The five levels are:
1. Quantitative or mixed-methods systematic review;
2. Qualitative or mixed-methods synthesis and single experimental-based quantitative study;
3. Single qualitative and descriptive or observational quantitative study;
4. Systematic review of expert opinion and single organizational implementation project (e.g., evidence-based practice, quality improvement, and knowledge translation); and
5. Expert opinion and literature review.
Table 4 cross-references the types of evidence with the levels of evidence. Each citation will be assigned a level during the extraction process, which will subsequently be used during data synthesis. Given that the evidence in this review is exploratory, descriptive, and interpretive in nature, the JBI Grades of Recommendation will be used as the criteria to define the overall strength of each recommendation (i.e., strong or weak) (55, 56).
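As a worked illustration of the scoring and level assignment described above, the sketch below computes a percentage quality score from a set of checklist ratings and looks up the level of evidence from Table 4. Excluding “not applicable” items from the denominator is an assumed convention for this example rather than a rule fixed by the protocol, and the ratings shown are invented.

```python
# Worked illustration of the quality scoring and level-of-evidence assignment.
# Excluding "not applicable" items from the denominator is an assumed
# convention for this sketch, not one fixed by the protocol.

LEVEL_OF_EVIDENCE = {  # from Table 4
    "systematic review": 1,
    "qualitative": 3,
    "analytical cross-sectional": 3,
    "survey": 3,
    "mixed methods": 3,
    "organizational implementation": 4,
    "expert opinion": 5,
    "literature review": 5,
}


def quality_score(ratings):
    """Percentage of applicable checklist criteria rated as met."""
    applicable = [r for r in ratings if r != "not applicable"]
    met = sum(r == "met" for r in applicable)
    return round(100 * met / len(applicable), 1)


# Hypothetical appraisal of a ten-item checklist for a qualitative study
ratings = ["met"] * 7 + ["not met", "unclear", "not applicable"]
print(quality_score(ratings), LEVEL_OF_EVIDENCE["qualitative"])  # 77.8 3
```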
Data Extraction
The data extraction step provides the means by which the most pertinent information about the topic (i.e., study characteristics and findings) can be culled from the primary studies and summarized. All source documents will be loaded into the data management software NVivo™ (Version 12.6) (57). Using this software, a Review Matrix will be generated to maximize efficiency and create “order out of chaos” (37, p. 150). Column topics for the matrix will be defined according to the purpose of the proposed systematic review to capture pertinent bibliographic information, methodological characteristics, and content-specific characteristics (e.g., implementation theory) of each included citation (see Table 5) (37). Column topics with a discrete response option (e.g., methodology) will be extracted using the NVivo™ file classification function, while the NVivo™ codes function will be used to capture column topics with more than one possible response (e.g., studies conducted in multiple settings or involving multiple HCPs). These data will provide the contextual and methodological detail needed to support the data synthesis results (37). Select team members will assemble the extracted data from all included articles together with relevant accompanying illustrations (e.g., participant quotes or statistical test values). Notes on the definitions of column topics and response options, as well as on the overall extraction process, will be kept to ensure consistency among extractors.
Table 5
Bibliographic Information and Study Attributes to be Abstracted

Bibliographic Information:
• Authors
• Year of publication
• Article title
• Keywords
• Digital object identifiers

Study Attributes:
• Country/ies of study
• Methodology
• Research design
• Implementation theory
• Context or setting
• Practice area
• Sample population/profession of healthcare providers
• Sample size
• Sampling method
• Level of evidence
• PROM and PREM instruments used
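As an illustration of how the Table 5 column topics might be structured for one citation, the sketch below represents a single Review Matrix row as a typed record. The field names mirror Table 5, but the example values are invented; in the review itself the matrix is generated within NVivo™ rather than in code.

```python
from dataclasses import dataclass

# Hypothetical representation of one Review Matrix row, with column topics
# drawn from Table 5. The example values are invented for illustration; in
# the review the matrix is generated in NVivo.

@dataclass
class ReviewMatrixRow:
    # Bibliographic information
    authors: list
    year: int
    title: str
    keywords: list
    doi: str
    # Study attributes
    country: str
    methodology: str        # discrete response: NVivo file classification
    research_design: str
    implementation_theory: str
    setting: list           # may hold multiple responses: NVivo codes function
    practice_area: str
    hcp_professions: list
    sample_size: int
    sampling_method: str
    level_of_evidence: int
    instruments: list


row = ReviewMatrixRow(
    authors=["Author A", "Author B"], year=2016, title="Hypothetical PROM study",
    keywords=["PROM", "implementation"], doi="10.1000/hypothetical",
    country="Canada", methodology="qualitative", research_design="descriptive",
    implementation_theory="CFIR", setting=["outpatient clinic", "inpatient unit"],
    practice_area="rehabilitation", hcp_professions=["physiotherapist"],
    sample_size=24, sampling_method="purposive", level_of_evidence=3,
    instruments=["hypothetical PROM"],
)
print(row.methodology, row.setting)
```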
The next step will be the extraction of the pertinent study findings, specifically from the results and discussion sections of each citation. Using NVivo™ (57), the process of synthesis begins as the study findings are extracted into specific codes. All study findings from the included citations will be coded for analysis as textual descriptions. Qualitative data will comprise themes or subthemes with corresponding illustrations (e.g., quotations, tables, and figures). Quantitative data (e.g., descriptive or inferential statistics) will be converted into “qualitized” data; that is, all quantitative data will be transformed into textual descriptions or narrative interpretations in a way that answers the review questions. Where necessary, corresponding statistical test results can be captured as part of the coding process. As per the narrative synthesis approach (58), code names will be based on a theoretical framework. In our study, we will use the Consolidated Framework for Implementation Research (CFIR) (59). The CFIR is an evidence-based framework used to assess multiple contexts and identify factors that might influence the process and effectiveness of implementing a specific intervention, which, in our review, is PCM. Its five major domains are intervention characteristics (8 items), inner setting (5 items), outer setting (4 items), characteristics of individuals involved (5 items), and implementation process (4 items) (59). A second framework will be used to code the identified implementation processes or actions that support a practice change: the validated Expert Recommendations for Implementing Change (ERIC), a compilation of 73 discrete strategies in nine clusters (60, 61). Codes not represented in either framework will be created by AW, as needed, to answer the review questions. In this manner, extraction and initial synthesis occur simultaneously. To reduce coding error during data extraction, we will develop a coding protocol, provide coder training, leverage the substantive expertise among team members, and use the NVivo™ coding comparison feature to improve reliability (30). In summary, the overall process of transforming and coding these data will facilitate each element of the narrative synthesis, integrating the existing evidence to answer the review questions (35).
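To show what coding “qualitized” findings against these frameworks might look like, the sketch below assigns a few invented findings to CFIR domains and, where relevant, an ERIC strategy, then tallies the codes by domain. The findings, the chosen strategy labels, and the tallying step are illustrative assumptions; in the review this coding is performed in NVivo™ by the research team.

```python
from collections import Counter

# Illustrative sketch of coding textual ("qualitized") findings against the
# five CFIR domains and, where applicable, an ERIC strategy. The findings and
# code assignments below are invented; in the review this coding is done in
# NVivo by the research team.

CFIR_DOMAINS = [
    "intervention characteristics",
    "inner setting",
    "outer setting",
    "characteristics of individuals",
    "implementation process",
]

coded_findings = [
    # (qualitized finding, CFIR domain, ERIC strategy or None)
    ("Clinicians reported the PROM was too long for routine visits",
     "intervention characteristics", None),
    ("Lack of protected time was the most cited barrier (62% of respondents)",
     "inner setting", None),
    ("Local champions supported uptake on the unit",
     "implementation process", "identify and prepare champions"),
    ("Staff confidence in interpreting scores grew after training",
     "characteristics of individuals", "conduct ongoing training"),
]

# Tally how often each CFIR domain appears across the coded findings
domain_counts = Counter(domain for _, domain, _ in coded_findings)
for domain in CFIR_DOMAINS:
    print(f"{domain}: {domain_counts.get(domain, 0)}")
```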
Data Synthesis
The synthesis will follow a convergent integrated approach as per the JBI methodology for MMSR: data from all types of evidence will be simultaneously extracted and synthesized into meaningful codes. This integrated approach means that the transformed “qualitized” data will be combined to identify patterns across all the studies and to explore relationships in the data between and within studies (35). The integration of these data will be guided by a narrative synthesis approach (58), which is well suited to MMSR that use diverse types of evidence and have sample heterogeneity (35). Moreover, this approach allows theoretical frameworks to shape the analysis; in our case, two implementation science frameworks will allow us to focus broadly on the implementation process as well as on effective strategies to implement and sustain changes in HCPs’ behaviour (58). Popay et al. (58) identify four iterative elements of a narrative synthesis:
- Element 1: The role of theory in evidence synthesis (p. 12). To contribute to knowledge translation theory on how PCM implementation works, why, and for whom, we will use the CFIR (59) and the ERIC strategies (60, 61), which are based on theories of change. With the use of NVivo™ for extraction, the process of synthesis begins as theory contributes to the interpretation of study findings and helps determine how widely applicable the findings may be. In this way, theory building and theory testing are incorporated as a key aspect of the proposed systematic review (58).
- Element 2: Developing a preliminary synthesis (p. 13). A preliminary synthesis is conducted to understand the identified codes and summarize the results of the included studies. This will be achieved by defining patterns of findings simultaneously across all the studies. An initial description of the findings will evolve, based on similarity in meaning, to produce an integrated synthesis (58). Using NVivo™ to identify codes within each citation will subsequently lead to overarching categories and themes.
- Element 3: Exploring relationships in the data between and within studies (p. 14). The purpose of the third element is to identify reasons that might explain any differences in findings regarding the successful implementation of PCMs. The patterns identified in the pooled data will be further analysed to identify the factors, study characteristics, and contexts that explain differences. Comparing and contrasting relationships across studies is important at this stage of the synthesis as a means of exploring the influence of heterogeneity (58). The NVivo™ relationship and query features will aid this exploration of associations.
- Element 4: Assessing the robustness of the synthesis (p. 15). This element integrates the quality assessments to determine the strength of the evidence and to support the trustworthiness of the synthesis products (e.g., answers to the review questions and recommendations). Using NVivo™, the included studies will be assigned both a level of evidence and a quality score that will be cross-linked to the products of the synthesis. From this, a final determination can be made of the strength of the evidence supporting the conclusions drawn from the synthesis process (58).
Thus, the results of this narrative synthesis will provide a critical analysis to determine effective methods for PCM implementation by HCPs in everyday practice.