While the Jadad paper provides an algorithm intended to identify and address discordance between SRs in an overview, the manuscript gives limited guidance on how to apply and operationalise the algorithm. The absence of detailed guidance leaves room for subjective (mis)interpretation and, ultimately, confusion when it comes time to use the algorithm. To address this, we engaged in an iterative, step-by-step process of interpreting and implementing the algorithm. This process involved virtual meetings in which consensus was sought on decision rules for each step of the algorithm to ensure consistency in both interpretation and application. Feedback was solicited and the decision rules were adjusted accordingly until consensus was achieved. The tool underwent pilot testing, as described in Section 3.6, during which feedback was further solicited and adjustments were made.
Two researchers will independently assess each set of SRs in each included overview using the Jadad algorithm, starting at Step C (Fig. 1). Information and data from the overview will be used, and when data are not reported, we will consult the full text of the included SRs. The Jadad decision tree assesses and compares sources of inconsistency between SRs with meta-analyses, including differences in clinical questions, inclusion and exclusion criteria, extracted data, methodological quality assessments, methods of combining data, and statistical analysis methods.
Step A is to examine the multiple reviews matching the overview’s question using a PICO framework. If the research questions are not identical, then Step B indicates choosing the review closest to the decision makers’ research question, and no further assessment is necessary. If multiple reviews are found with the same PICO, then the assessment proceeds to Step C. As we are using overviews examining discordance as our sample, we will start at Step C of the Jadad decision tree.
Legend: Step A is to examine the multiple reviews matching the overview’s question using a PICO framework. If the research questions are not identical, then Step B indicates choosing the review closest to the decision makers’ research question, and no further assessment is necessary. If multiple reviews are found with the same PICO, then the assessment proceeds to Step C.
Here we detail our interpretation of the Jadad algorithm for each step in assessing discordance in a group of SRs with similar PICO elements. If an overview or an included review does not report a method, we will record it as “not reported,” and the review will not be chosen at that step.
Steps D and G follow from Step C. Steps E, F, H, and I are completed depending on the decisions made at Steps D and G, respectively.
“Meeting” a step means that a review met the criteria in the sub-step or step that is highest in the hierarchy. For example, a review that meets the E3 criteria fulfils criteria A and B, which is the highest position in our hierarchy.
We will determine whether the RCTs are similar across reviews either by finding this information in the overview or by extracting all RCTs from the included reviews into an Excel matrix that lists the reviews as columns and the trials as rows. The RCTs will be mapped to the reviews in order of publication date (earliest trials at the top). Using this matrix, we will determine whether the reviews include the same or different trials.
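As an illustration only (not part of the protocol), the trial-by-review matrix and the same-trials check could be reproduced programmatically; the review names, trial identifiers, and data in this Python sketch are hypothetical.

# Illustrative Python sketch: trial-by-review matrix and same-trials check.
# Trials extracted from each included review (hypothetical data).
trials_by_review = {
    "Review A (2015)": {"NCT001", "NCT002", "NCT003"},
    "Review B (2017)": {"NCT001", "NCT002", "NCT003"},
    "Review C (2019)": {"NCT001", "NCT004"},
}

# All trials across reviews; sorting stands in for ordering by publication date.
all_trials = sorted(set().union(*trials_by_review.values()))

# Print the matrix: reviews as columns, trials as rows.
print("\t".join(["Trial"] + list(trials_by_review)))
for trial in all_trials:
    row = [trial] + ["X" if trial in rcts else "" for rcts in trials_by_review.values()]
    print("\t".join(row))

# Do all reviews include the same set of trials?
same_trials = len({frozenset(rcts) for rcts in trials_by_review.values()}) == 1
print("Same trials across reviews:", same_trials)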
3.7.4 Step E – Assess and compare data extraction, clinical heterogeneity, and data synthesis
If the reviews have the same risk of bias/quality rating, then the next step is Step E: to assess and compare data extraction, clinical heterogeneity, and data synthesis across the reviews.
Step E1 - Assess and compare the data extraction methods across reviews
For this step, Jadad states, “If reviews differ [in outcomes reported], the decision-maker should identify the review that takes into account the outcome measures most relevant to the problem that he or she is solving.” We interpret this step as selecting the review that (A) matches the overview’s primary outcome.
Jadad then writes that reviews that conduct independent extraction by two reviewers are of the highest quality. We therefore decided that reviews that (B) used an independent data extraction process with two review authors should be chosen. If a ROBIS assessment is done, the latter point will be mapped to ROBIS item 3.1, “Were efforts made to minimise error in data collection?”
Decision rules:
#1. Reviews that meet criteria A and B are highest in our hierarchy
#2. Reviews that meet criteria A are second highest in our hierarchy
#3. Reviews that meet criteria B are third highest in our hierarchy
Step E2 – Assess and compare clinical heterogeneity of the included RCTs across reviews
Clinical heterogeneity is assessed at the review level by examining the research question pertaining to the primary outcome and the PICO elements of the eligibility criteria of each included RCT to see whether they are sufficiently similar. If the PICO elements across RCTs are similar, then clinical heterogeneity is minimal, and reviews can proceed with pooling study results in a meta-analysis. If a ROBIS assessment is done, this question is mapped to ROBIS item 4.3, “Was the synthesis appropriate given the nature and similarity in the research questions, study designs, and outcomes across included studies?”
If a review states that (A) it assessed clinical (e.g., PICO) heterogeneity across RCTs (in the methods or results sections), then it will be the review chosen at this step. Example of a review reporting a clinical heterogeneity assessment: “If we found 3 or more systematic reviews with similar study populations, treatment interventions, and outcome assessments, we conducted quantitative analyses (Gaynes 2014)”. If the authors reported and described clinical heterogeneity in the manuscript, then criterion (B) applies: reviews whose authors judged clinical heterogeneity to be minimal or low, with a rationale, will be chosen at this step.
Decision rules:
#1. Reviews that meet criteria A and B are highest in the hierarchy
#2. Reviews that meet criteria A are second highest in our hierarchy
Step E3 – Assess and compare data analysis methods across reviews
Jadad et al. are purposefully vague when describing how to judge whether a meta-analysis was appropriately conducted. For this step, we interpret it as reviews having reported: (A) an appropriate weighted technique to combine study results (i.e. a fixed-effect or random-effects model), and (B) an investigation of statistical heterogeneity (i.e. reporting I², tau², or chi²) (Fig. 2).
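For orientation only (these are standard meta-analysis definitions added here for illustration, not taken from the Jadad algorithm), the inverse-variance pooled estimate and heterogeneity statistics referred to in criteria A and B are:

\[ \hat{\theta}_{FE} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \quad w_i = \frac{1}{v_i}; \qquad Q = \sum_i w_i \left(\hat{\theta}_i - \hat{\theta}_{FE}\right)^2; \qquad I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\% \]

where \(\hat{\theta}_i\) and \(v_i\) are the effect estimate and variance from trial \(i\), and \(k\) is the number of trials. A random-effects model additionally estimates the between-trial variance \(\tau^2\) (e.g. by the DerSimonian-Laird method) and uses weights \(w_i^* = 1/(v_i + \hat{\tau}^2)\); the chi² test for heterogeneity compares \(Q\) against a chi-squared distribution on \(k - 1\) degrees of freedom.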
Decision rules (depending on whether heterogeneity is present or absent in the meta-analysis):
#1. Reviews that meet criteria A and B are highest in our hierarchy
#2. Reviews that meet criteria A only are second highest in our hierarchy
#3. Reviews that meet criteria B only are third highest in our hierarchy (this decision can be ignored if heterogeneity is not observed)
Decision rules for Step E:
#1. Reviews that meet Step E1, E2, and E3 are highest in our hierarchy
#2. Reviews that meet Step E1 and E2 are second highest in our hierarchy
#3. Reviews that meet Step E1 are third highest in our hierarchy
#4. Reviews that meet Step E2 and E3 are fourth highest in our hierarchy
#5. Reviews that meet Step E2 are fifth highest in our hierarchy
#6. Reviews that meet Step E3 are sixth highest in our hierarchy
Note
Reporting only Steps E1, E2 or E3 is not considered a systematic approach to evidence synthesis.
3.7.5 Step F - Select the review with the lowest risk of bias, or the highest quality
From the risk of bias/quality assessment conducted at Step D, we will choose the review with the lowest risk of bias judgement or the highest quality assessment rating. ROBIS contains a final phase in which reviewers are asked to summarise the concerns identified in each domain and describe whether the conclusions were supported by the evidence. Based on these final judgements, each review receives an overall rating of high, low, or unclear risk of bias. For our Jadad assessment, we will use a binary rating of either high or low risk of bias; any review assessed as at ‘unclear’ risk of bias will be deemed high risk. When using the risk of bias/quality assessments of the reviews reported in the included overviews, we will use the overview authors’ ratings. If uncertainty exists, we will re-assess the included reviews using ROBIS.
3.7.6 Step G - Do the reviews have the same eligibility criteria?
If the reviews do not include the same trials, then decision-makers are directed to Step G: assess whether the reviews have the same eligibility criteria (Fig. 3). The overview may report the PICO eligibility criteria in its methods section or in a characteristics-of-included-reviews table, from which they can be extracted and assessed. If this is not the case, the PICO eligibility criteria will be extracted from the included reviews by two authors independently and then compared to resolve any discrepancies.
3.7.7 Step H - Assess and compare the search strategies and the application of the eligibility criteria across reviews
If the reviews contain the same eligibility criteria, then Step H is to assess and compare the search strategies and the application of the eligibility criteria across reviews (Fig. 4).
Step H1 - Assess and compare the search strategies across reviews
In this step, Jadad et al.’s recommendations are vague, although they refer to comprehensive search strategies as being less prone to bias. We interpret this step as the authors explicitly describing their search strategy such that it can be replicated. To meet this interpretation, our criteria are that reviews: (A) searched two or more databases; (B) searched the grey literature; and (C) included the full search strategy (which may be attached as an appendix or included in the manuscript).
Decision rules:
#1. Reviews that meet criteria A, B and C are highest in our hierarchy
#2. Reviews that meet criteria A and B are second highest in our hierarchy
#3. Reviews that meet criteria A and C are third highest in our hierarchy
#4. Reviews that meet criteria B and C are fourth highest in our hierarchy (unlikely scenario)
#5. Reviews that meet criteria A only are fifth highest in our hierarchy
SCENARIOS for Step H1
• 3 reviews are identified for our Jadad assessment
Criteria to choose a systematic review at Step H1: (A) two or more databases searched; (B) grey literature searched; (C) full search strategy in an appendix
Scenario 1
Review 1: A and B, but not C (decision rule #2)
Review 2: A and B but not C (decision rule #2)
Review 3: A and C, but not B (decision rule #3)
Conclusion: No review meets ALL of our criteria; which do we choose? Based on our decision rules, we choose BOTH Review 1 and 2
Scenario 2
Review 1: A, but neither B nor C (decision rule #5)
Review 2: A and B, but not C (decision rule #2)
Review 3: Neither A, B, nor C (does not report the search methods)
Conclusion: No review meets ALL of our criteria; which do we choose? Based on our decision rules, we choose Review 2
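A minimal sketch, added for illustration only (not part of the protocol), of one way the Step H1 decision rules and the tie handling shown in the scenarios above could be encoded in Python; the review names and criteria flags are hypothetical.

# Rank of each criteria combination in the Step H1 hierarchy (1 = highest).
H1_HIERARCHY = {
    frozenset("ABC"): 1,  # rule #1: A, B and C
    frozenset("AB"): 2,   # rule #2: A and B
    frozenset("AC"): 3,   # rule #3: A and C
    frozenset("BC"): 4,   # rule #4: B and C (unlikely scenario)
    frozenset("A"): 5,    # rule #5: A only
}

def h1_rank(criteria_met):
    """Return the hierarchy rank for a review, or None if no rule applies."""
    return H1_HIERARCHY.get(frozenset(criteria_met))

# Scenario 1 from the text: Reviews 1 and 2 meet A and B; Review 3 meets A and C.
reviews = {"Review 1": "AB", "Review 2": "AB", "Review 3": "AC"}
ranks = {name: h1_rank(c) for name, c in reviews.items() if h1_rank(c) is not None}
best = min(ranks.values())
print([name for name, r in ranks.items() if r == best])  # ['Review 1', 'Review 2']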
Step H2 - Assess and compare the application of the eligibility criteria across reviews
In this sub-step, Jadad indicates that we should choose the review with the most explicit and reproducible inclusion criteria, which is ambiguous. Jadad states, “Reviews with the same selection criteria may include different trials because of differences in the application of the criteria, which are due to random or systematic error. Decision-makers should regard as more rigorous those reviews with explicit, reproducible inclusion criteria. Such criteria are likely to reduce bias in the selection of studies” [16]. It was unclear to us whether this meant clearly reproducible PICO eligibility criteria (which would repeat Step G), whether the eligibility criteria were applied consistently by the reviews (i.e. comparing the eligibility criteria against the PICO of the included RCTs to confirm that they indeed met the criteria), or whether it meant (A) independent screening of titles, abstracts, and full texts against the eligibility criteria by two reviewers. We selected the latter interpretation when choosing from the included reviews in an overview.
Decision rule:
#1. Reviews that meet criteria A are highest in our hierarchy
Decision rules for Step H:
#1. Reviews that meet Step H1 and H2 are highest in our hierarchy
#2. Reviews that meet Step H1 are second highest in our hierarchy
#3. Reviews that meet Step H2 are third highest in our hierarchy
3.7.8 Step I – Assess and compare the publication status, quality, language restrictions of the included RCTs, and analysis of data on individual patients
If the reviews do not have the same eligibility criteria, then the next step, Step I, is to assess and compare the publication status, quality, and language restrictions of the included RCTs, and the analysis of data on individual patients, across the reviews (Fig. 5). This step maps to ROBIS item 1.5, namely, “Were any restrictions in eligibility criteria based on sources of information appropriate (e.g. publication status or format, language, availability of data)?” [32].
Step I1 – Assess and compare the publication status of the included RCTs across reviews
In the absence of clear guidance, we interpret this step as “choose the review that searches for and includes both published and unpublished data (grey literature).” Published studies are defined as any study or data published in a peer-reviewed medical journal. Unpublished data are defined as information that is difficult to locate and is obtained from non-peer-reviewed sources such as websites (e.g. the World Health Organisation website, CADTH), clinical trial registries (e.g. clinicaltrials.gov), thesis and dissertation databases, and other registries of unpublished data (e.g. LILACS). Our interpretation is that reviews chosen at this step search for: (A) studies published in peer-reviewed medical journals, and (B) reports/documents/content not published in medical journals.
Decision rules:
#1. Reviews that meet criteria A and B are highest in our hierarchy
#2. Reviews that meet criteria A are second highest in our hierarchy
#3. Reviews that meet criteria B are third highest in our hierarchy
Note
Reporting only A or B is not considered a systematic search.
Step I2 – Assess and compare the methods used to assess the quality of the included RCTs across reviews
In this step, the Jadad paper recommends assessing the appropriateness of the methods used to assess the quality of the included RCTs across reviews. This item maps to ROBIS item 3.4, “Was the risk of bias/quality of RCTs formally assessed using appropriate criteria?” We interpret this item as whether the review authors used the Cochrane risk of bias tool (version 1 or 2). We consider all other RCT quality assessment tools inappropriate because they are out of date or omit important biases (e.g. the Agency for Healthcare Research and Quality (AHRQ) 2012 tool [33] omits allocation concealment). However, the Cochrane risk of bias tool was only published in October 2008. Therefore, we applied a decision rule: for reviews dated 2012 (giving one year for awareness of the tool to reach researchers) and later, the Cochrane risk of bias tool is considered the gold standard. For reviews dated 2009 or earlier, we considered the Jadad scale [34] and the Schulz approach [35] to be the most commonly used instruments between 1995 and 2011. Other tools will be considered on a case-by-case basis.
As a decision hierarchy, to meet the minimum criteria for this step, a review will have (A) assessed the risk of bias of the RCTs using any tool or approach, and (B) used the Cochrane risk of bias tool version 1 or 2 (if dated 2009 or later). If several included reviews meet these two criteria, the review that (C) integrates the risk of bias assessments into the results or discussion section (i.e. discusses the risk of bias in terms of high and low risk of bias studies, or reports a subgroup or sensitivity analysis) will be chosen.
Decision rules:
#1. Reviews that meet criteria A, B and C are highest in our hierarchy
#2. Reviews that meet criteria B and C are second highest in our hierarchy
#3. Reviews that meet criteria A and B are third highest in our hierarchy
#4. Reviews that meet criteria A and C are fourth highest in our hierarchy (unlikely scenario)
#5. Reviews that meet criteria A only are fifth highest in our hierarchy
SCENARIOS for Step I2
• 3 reviews are identified for our Jadad assessment
Scenario 1
Review 1: A and B but not C (decision rule #3)
Review 2: A and B but not C (decision rule #3)
Review 3: A and C, but not B (decision rule #4)
Conclusion: No review meets ALL of our criteria; which do we choose? Based on our decision rules, we choose BOTH Review 1 and 2
Scenario 2
Review 1: A, but neither B nor C (decision rule #5)
Review 2: A and B, but not C (decision rule #3)
Review 3: Neither A, B, nor C (does not report the quality assessment methods)
Conclusion: No review meets ALL of our criteria; which do we choose? Based on our decision rules, we choose Review 2
Step I3 - Assess and compare any language restrictions across reviews
In this step, Jadad indicates that reviews with (A) no language restrictions in their eligibility criteria should be prioritised and chosen over those that include only English-language RCTs. This step maps to ROBIS item 1.5, namely, “Were any restrictions in eligibility criteria based on sources of information appropriate (e.g. publication status or format, language, availability of data)?”
Decision rule:
#1. Reviews that meet criteria A are highest in our hierarchy
Step I4 – Choose the analysis of data on individual patients
If (A) an individual patient data meta-analysis was identified in the overview, Jadad et al. recommend this review be chosen over reviews with pairwise meta-analysis.
Decision rule:
#1. Reviews that meet criteria A are highest in our hierarchy
Decision rules for Step I:
#1. If there is an individual patient data (IPD) meta-analysis (Step I4), then this review is the highest in our hierarchy
#2. Reviews that meet Step I1, I2, and I3 are second highest in our hierarchy
#3. Reviews that meet Step I1 and I2 are third highest in our hierarchy
#4. Reviews that meet Step I2 and I3 are fourth highest in our hierarchy
#5. Reviews that meet Step I1 and I3 are fifth highest in our hierarchy
#6. Reviews that meet Step I1 are sixth highest in our hierarchy
#7. Reviews that meet Step I2 are seventh highest in our hierarchy
#8. Reviews that meet Step I3 are eighth highest in our hierarchy
Note
Reporting only Steps I1, I2 or I3 is not considered a systematic approach to evidence synthesis.
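A minimal sketch, again for illustration only (not part of the protocol), of how the composite Step I decision rules above could be encoded, showing how an IPD meta-analysis (Step I4) overrides the ranking of sub-step combinations; the review names and flags are hypothetical.

# Rank of each sub-step combination in the Step I hierarchy (lower = higher priority).
I_HIERARCHY = {
    frozenset({"I1", "I2", "I3"}): 2,
    frozenset({"I1", "I2"}): 3,
    frozenset({"I2", "I3"}): 4,
    frozenset({"I1", "I3"}): 5,
    frozenset({"I1"}): 6,
    frozenset({"I2"}): 7,
    frozenset({"I3"}): 8,
}

def step_i_rank(has_ipd_ma, substeps_met):
    """Return the Step I hierarchy rank for a review (1 = highest)."""
    if has_ipd_ma:  # rule #1: an IPD meta-analysis trumps all other combinations
        return 1
    return I_HIERARCHY.get(frozenset(substeps_met))

# Hypothetical example: Review 2 reports an IPD meta-analysis, so it is chosen.
reviews = {"Review 1": (False, {"I1", "I2", "I3"}), "Review 2": (True, {"I1"})}
ranks = {name: step_i_rank(*flags) for name, flags in reviews.items()}
print(min(ranks, key=ranks.get))  # Review 2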