The aim of this systematic review was to address the following key questions (KQs):
KQ 1: How effective are different methods in recovering studies falsely excluded during literature screening?
KQ 2a: What are the characteristics of studies that have been falsely excluded during literature screening?
KQ 2b: Can predictors help identify studies that are at a high risk of being falsely excluded?
This systematic review was conducted according to Cochrane methods (11). We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement (12). The PRISMA checklist is provided in Additional file 1. We registered our study protocol, "Falsely excluded studies in the literature screening process - a systematic review," at https://osf.io (https://osf.io/5zdpb/). Because we could not identify studies that formally assessed predictors of the false exclusion of records during literature screening, we amended the protocol on June 11, 2021: we expanded the inclusion criteria to also cover the study characteristics of falsely excluded studies, even when these characteristics were not formally assessed in a predictive model.
Eligibility criteria
The a priori–defined eligibility criteria are listed in Table 2 and described in more detail below.
Table 2: Inclusion and exclusion criteria
KQ 1: Methods to recover falsely excluded studies
Inclusion:
• Reference list checking of included studies
• Similarity searches (i.e., related articles)
• Forward citation tracking of included studies using citation indexes
• Google Scholar search
• Contacting experts/researchers/companies/other stakeholders
KQ 2: Potential predictors of false exclusion
Inclusion:
• Study design
• Main objective
• Sample size
• Country of conduct
• Year of publication
• Structure and content of abstract (e.g., only title available, no abstract, uninformative abstract)
• Language of publication
• Risk of bias
• Database indexing of studies (e.g., PubMed listing)
• Publication type
• Journal in which the study is published
• Impact factor of the journal in which the study is published
Exclusion (KQ 1 and 2): Other methods

Outcomes
Inclusion:
• Proportion of falsely excluded studies that could be recovered (KQ 1)
• Recall, precision, and number needed to read (NNR) of supplementary searches (KQ 1)
• Impact of recovered studies on meta-analysis results and conclusions (KQ 1)
• Falsely excluded studies by characteristics (KQ 2)
• Falsely excluded studies by predictors (KQ 2)
Exclusion: Other outcomes

Study design
Inclusion:
• Systematic reviews (KQ 1, 2)
• Randomized/nonrandomized trials (KQ 1, 2)
• Prospective and retrospective, controlled and uncontrolled observational studies (KQ 1, 2)
Exclusion: Nonempirical publications (e.g., editorials, letters)

Date of search
Inclusion: Published 1999 or later
Exclusion: Published 1998 or earlier

Publication language
Inclusion: No restrictions
Exclusion: No restrictions
Abbreviations: KQ = key question; NNR = number needed to read
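The search-performance outcomes in Table 2 follow standard definitions: recall is the share of all relevant studies a search retrieves, precision is the share of retrieved records that are relevant, and NNR is the reciprocal of precision (i.e., the number of records one must screen to find one relevant study). A minimal sketch of this arithmetic, with a hypothetical helper function and made-up numbers not drawn from this review:

```python
def search_performance(relevant_retrieved: int, total_retrieved: int, total_relevant: int):
    """Compute recall, precision, and number needed to read (NNR) for a
    supplementary search. Hypothetical helper, for illustration only."""
    recall = relevant_retrieved / total_relevant      # share of all relevant studies found
    precision = relevant_retrieved / total_retrieved  # share of retrieved records that are relevant
    nnr = 1 / precision                               # records screened per relevant study found
    return recall, precision, nnr

# Illustrative numbers: a supplementary search retrieves 200 records,
# 8 of which are relevant, out of 10 relevant studies overall.
recall, precision, nnr = search_performance(8, 200, 10)
# recall = 0.8, precision = 0.04, NNR = 25
```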
We searched for studies assessing the use of supplementary search methods (e.g., forward citation tracking, reference list checking, and web searching) to recover studies falsely excluded during literature screening. These supplementary search methods are defined in Table 1.
Additionally, we searched for studies focusing on the predictors and characteristics of falsely excluded studies, such as those based on study design or publication type. For detailed eligibility criteria, see Table 2.
Information sources
An experienced information specialist searched for eligible studies in Medline (Ovid), Science Citation Index Expanded, Social Sciences Citation Index, Current Contents Connect (all via Web of Science), Embase (Elsevier), Epistemonikos.org, and Information Science & Technology Abstracts (Ebsco) from 1999 to June 23, 2020. We first developed a search strategy for Ovid Medline and then adapted it to the other electronic databases. We considered publications in all languages. In accordance with the Peer Review of Electronic Search Strategies (PRESS) statement (12), the Ovid Medline search strategy was peer-reviewed by a second information specialist. See Additional file 2 for the database search strategies.
In addition, we searched for grey literature (i.e., unpublished studies) relevant to this systematic review. Potential sources of grey literature included the Open Science Framework (www.osf.io), websites of known organizations that produce rapid reviews (e.g., Canadian Agency for Drugs and Technologies in Health [CADTH]) based on the CADTH Grey Matters Checklist (13), and dissertation databases (e.g., Digital Access to Research Theses [DART]-Europe). Furthermore, we searched for Cochrane Colloquium abstracts of oral, poster, and workshop presentations and Health Technology Assessment international (HTAi) meeting abstracts.
We manually searched the reference lists of background articles on this topic for any relevant citations that our electronic searches might have missed. Additionally, we hand-searched journals that regularly publish methods studies, such as Systematic Reviews and Research Synthesis Methods. If our search retrieved conference abstracts of studies that might have fulfilled our inclusion criteria, we manually searched for further information about these studies (e.g., publications, entries in trial registries). Additionally, an information specialist conducted similar-articles searches for identified key articles in PubMed and Google Scholar, as well as forward citation tracking using Scopus, up to January 28, 2021. The results of a similar-articles search are ranked by similarity to the key article; the top 20 articles are those the search algorithm categorizes as most similar. We exported the top 20 articles and assessed them against our eligibility criteria. See Additional file 3 for the similar-articles searches and forward citation tracking.
Study records
Data management
Identified citations were stored in an EndNote® X8.2 bibliographic database (Thomson Reuters, New York, NY). All results of the abstract and full-text review, including the exclusion reasons during the full-text review, were recorded in the EndNote database. PDF files of all full-text articles were stored on a server accessible to all members of the review team.
Selection process
Deduplication of the search results was carried out with EndNote® X8.2 (Thomson Reuters, New York, NY). We developed and pilot-tested abstract and full-text review forms that reflected our inclusion and exclusion criteria. Two independent reviewers screened abstracts and full-text articles in Covidence (www.covidence.org) and evaluated their eligibility for inclusion. Discrepancies were resolved through discussion or consultation with a third reviewer. To test the abstract review form and resolve discrepancies, all reviewers piloted it on a set of 50 abstracts; the full-text review was piloted with five full-text articles.
Data collection process
We designed and pilot-tested a structured data abstraction form. The data were extracted by one reviewer and checked for completeness and accuracy by a second investigator. The data extraction process was piloted with five studies.
Data items
For studies that met our inclusion criteria, we extracted the following study characteristics and outcomes:
- Study characteristics: author, year of publication, aims, study design, sample size (e.g., number of studies analyzed), number of reviewers involved
- Characteristics of methods/information sources used to recover falsely excluded studies (for KQ 1)
- Characteristics of falsely excluded studies/publications: study design, content of the abstract, language of publication (for KQ 2)
- Outcomes: proportion of falsely excluded studies/publications that could be recovered, impact of recovered studies on meta-analysis results and/or conclusions, proportion of falsely excluded studies/publications by characteristic or predictor
Risk of bias assessment
For methods studies with a case study design, we adapted the Joanna Briggs Institute Critical Appraisal Checklist for Case Reports; for methods studies with a case series design, we adapted the Joanna Briggs Institute Critical Appraisal Checklist for Case Series (14).
Data synthesis
We summarized the results narratively and grouped them by outcomes of interest. We did not identify enough studies with similar designs to conduct meta-analyses.