Study design
A methodological study was conducted to analyse the rate of inclusion of SRs in six selected electronic resources, and in combinations of two databases with reference checking. We did not write a protocol for this study.
Search methods
A set of Overviews, including Health Technology Assessment (HTA) reports, Cochrane Overviews, and non-Cochrane Overviews, was obtained from a preceding methodological study, as described in detail in Pieper et al. (2014) [20]. Briefly, a search for Overviews was conducted in MEDLINE via PubMed, Embase via Embase.com [21], CINAHL via EBSCOhost [10], PEDro [22], CDSR, DARE, CENTRAL, CMR, HTA, and NHS-EED via the Cochrane Library, and the websites of 127 HTA agencies from inception to May 2012. Search terms were text words related to Overviews and SRs; the full search algorithms can be found in Pieper et al. (2014) [20]. The search was limited to Overviews published in English or German between 2009 and 2011.
Eligibility criteria
Overviews were defined as systematic reviews for which the unit of searching, inclusion, and data analysis is the systematic review rather than the primary study [1]. Thus, we included all Overviews that had searched explicitly and systematically for SRs in at least one electronic database, included at least one SR (Overviews including both SRs and primary studies were eligible if the evidence synthesis relied at least in part on SRs, e.g., by combining primary studies and SRs in the evidence synthesis), and critically appraised all included SRs and additional primary studies. An HTA report was defined as “a method of evidence synthesis that considers evidence regarding clinical effectiveness, safety, cost-effectiveness and, when broadly applied, includes social, ethical, and legal aspects of the use of health technologies” [23].
Inclusion criteria:
- Searched for SRs in at least one electronic database;
- Included at least one SR in their evidence synthesis;
- Critically appraised included SRs and primary studies; and
- Full text publication was available.
Exclusion criteria:
- Overviews with a methodological focus; and
- Published in a language other than English or German.
The set of included Overviews will be henceforth called the “reference set”.
Description of electronic databases selected in this study
Six databases were selected to assess inclusion of systematic reviews as described in the section “data collection”, below, namely MEDLINE, CINAHL, Embase, Epistemonikos, PsycINFO, and TRIP. The key features of these databases are described in Table 1.
Table 1. Description of electronic databases and resources

| Database | Publisher | Access | Type | Coverage |
| --- | --- | --- | --- | --- |
| CINAHL [10] | EBSCO | by subscription | indexed database | nursing, biomedicine, health sciences librarianship, alternative medicine, and allied health topics |
| Embase [8] | Elsevier | by subscription | indexed database | biomedical literature, 1947 to present |
| Epistemonikos [17] | Epistemonikos Foundation (non-profit) | free of charge | citations database; data scraped from other databases and the web | health evidence, nine supported languages [24, 25] |
| MEDLINE [7] | U.S. National Library of Medicine (non-profit) | free of charge | indexed database | biomedicine and health literature, 1966 to present [26] |
| PsycINFO [11] | EBSCO, American Psychological Association | by subscription | indexed database | behavioural science and mental health |
| TRIP [18] | Trip Inc. | free of charge | clinical search engine | health care [27] |

Psyc. topic = mental health- or psychology-related topic.
The sources scraped by Epistemonikos include, or have included, CDSR, PubMed, Embase, CINAHL, PsycINFO, LILACS, DARE, the HTA database, the Campbell Collaboration online library, the JBI Database of Systematic Reviews and Implementation Reports, and the EPPI-Centre Evidence Library. Algorithm updates in February and April 2019 expanded the dataset to more than 1.5 times its previous size. TRIP collects references from a wide range of sources: sources of SRs (including the Cochrane Library and DARE), guidelines, regulatory agencies (FDA, EMA, NICE, IQWIG), HTA databases, NHS EED, literature databases (PubMed), journals, PROSPERO, and clinical trial registries.
Data collection
From the full text of each included Overview, the following data were extracted into MS Excel (2016):
- citation;
- publication title;
- number of databases searched;
- name of each database searched;
- whether social science/economics databases were searched (yes/no; i.e. EconLit, HEED, NHS EED, IBSS, Social Sciences Citation Index, Social SciSearch, the Campbell Collaboration Database, Social Sciences Abstracts, Social Services Abstracts, Applied Social Science Index and Abstracts, Social Service Information Gateway);
- whether additional sources were searched (‘other sources’, yes/no; i.e. reference lists of included studies, queries to experts, Google, Google Scholar, internal departmental files, hand-searching or electronically searching journals, clinical trial or study registries (e.g. clinicaltrials.gov, PROSPERO), publishers’ databases (e.g. Springer, ScienceDirect, Thieme, Wolters Kluwer), and HTA agencies’ websites (e.g. https://www.iqwig.de, https://www.dimdi.de, http://www.msac.gov.au));
- number of SRs included;
- Overview type (Cochrane Overview, HTA report, or non-Cochrane Overview);
- intervention/non-intervention Overview; and
- mental health- or psychology-related topic (yes/no).
For each Overview, the included SRs were extracted and tagged with the Overview from which they originated. Primary studies were not extracted. HTA reports are usually structured into sections on clinical effectiveness, safety, cost-effectiveness, and social, ethical, or legal aspects. For HTA reports, we included only SRs from the clinical effectiveness section.
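The extraction schema described above can be sketched as a simple record. This is an illustrative Python sketch; the field names and the example values are our own, not taken from the original extraction sheet:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OverviewRecord:
    """Illustrative extraction record for one Overview (field names are hypothetical)."""
    citation: str
    publication_title: str
    databases_searched: List[str]   # names of the databases searched
    social_science_db: bool         # searched social science/economics databases (yes/no)
    other_sources: bool             # searched additional sources (yes/no)
    n_included_srs: int             # number of SRs included
    overview_type: str              # "Cochrane Overview", "HTA report", or "non-Cochrane Overview"
    intervention: bool              # intervention vs. non-intervention Overview
    mental_health_topic: bool       # mental health- or psychology-related topic (yes/no)
    included_srs: List[str] = field(default_factory=list)  # SRs tagged with this Overview

# illustrative record; the number of databases searched is derivable from the list
rec = OverviewRecord("Smith 2010", "J Example", ["MEDLINE", "Embase"],
                     False, True, 12, "HTA report", True, False)
print(len(rec.databases_searched))  # 2
```

One design choice worth noting: tagging each included SR with its originating Overview (the `included_srs` list) is what later allows reference checking to be restricted to SRs cited in the same Overview.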
The database searches for SRs were performed in April 2019. A stepwise process was followed to identify whether a SR was included in an electronic database, found by reference checking, or included in a database combination:
A. From the sample of SRs extracted from the Overviews, we determined which of the six databases each SR was included in, namely MEDLINE, CINAHL, Embase, Epistemonikos, PsycINFO, or TRIP.
B. We then determined which database contained the largest overall number of included SRs and designated it the ‘reference database’. This was MEDLINE, which had the highest inclusion rate. The SRs included in MEDLINE are henceforth called the ‘MEDLINE-included SRs’.
C. A list of all SRs not included in MEDLINE was compiled. These SRs are called the ‘MEDLINE-non-included SRs’.
D. For each ‘MEDLINE-non-included SR’ obtained in step C, we manually checked the reference lists of the ‘MEDLINE-included SRs’ cited in the same Overview. The purpose of this step was to determine whether each SR not included in MEDLINE could have been identified by reference checking of SRs identified in MEDLINE on the same topic, rather than by additional database searching. SRs found in these reference lists/bibliographies are henceforth called ‘biblio SRs’.
E. Finally, we constructed five combined sets of SRs by merging the ‘MEDLINE-included SRs’, the ‘biblio SRs’, and the SRs obtained in step A for CINAHL, Embase, Epistemonikos, PsycINFO, and TRIP. For each of these five combined sets, we calculated a combined mean inclusion rate (see ‘Statistical analysis’). This was done to evaluate whether searching more than one database would expand the study pool.
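The stepwise process above amounts to set algebra over database membership. The following Python sketch illustrates the logic; the SR identifiers, database contents, and the biblio-SR result are invented stand-ins for the manual checks described in the text:

```python
# Sketch of the stepwise process using set operations (all inputs are illustrative).
all_srs = {"sr1", "sr2", "sr3", "sr4", "sr5"}      # SRs extracted from the Overviews
in_db = {                                           # step A: database membership per SR
    "MEDLINE":       {"sr1", "sr2", "sr3"},
    "Embase":        {"sr2", "sr4"},
    "Epistemonikos": {"sr4", "sr5"},
}

# Step B: the reference database is the one containing the most SRs.
reference_db = max(in_db, key=lambda db: len(in_db[db]))

# Step C: SRs not included in the reference database.
non_included = all_srs - in_db[reference_db]

# Step D: SRs recovered by checking the reference lists of reference-database SRs
# cited in the same Overview (here a stand-in for the result of that manual check).
biblio_srs = {"sr4"}

# Step E: combined set of reference database + reference checking + one other database.
combined = in_db[reference_db] | biblio_srs | in_db["Epistemonikos"]
inclusion_rate = len(combined) / len(all_srs)
print(reference_db, sorted(combined), inclusion_rate)
# MEDLINE ['sr1', 'sr2', 'sr3', 'sr4', 'sr5'] 1.0
```

In this toy example the combination recovers all five SRs; in the study, the analogous rate was computed per Overview and then aggregated.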
Statistical analysis
For each Overview, we calculated:
A. the mean inclusion rate (% of included SRs) and corresponding 95% confidence interval (95% CI), separately for each database;
B. the mean inclusion rate for the reference database (as defined above) combined with reference checking (as described above); and
C. the mean inclusion rates for combinations of the reference database, reference checking, and each of the other five databases.
The Overview-level inclusion rates obtained in statistical analysis steps A to C were then aggregated for the entire dataset by calculating weighted mean inclusion rates and corresponding 95% confidence intervals (95% CI). Weighting was based on the number of SRs included in each Overview.
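The weighted aggregation can be sketched as follows. The original analysis was run in R; this Python sketch assumes a normal-approximation confidence interval with a Kish effective sample size, details the Methods do not specify, so treat it as one plausible implementation rather than the authors':

```python
import math

def weighted_mean_ci(rates, weights, z=1.96):
    """Weighted mean of Overview-level inclusion rates with an approximate 95% CI.

    weights = number of SRs included in each Overview (per the Methods).
    The CI uses a normal approximation with a weighted variance and a Kish
    effective sample size -- an assumption; the original analysis may differ.
    """
    w_sum = sum(weights)
    mean = sum(r * w for r, w in zip(rates, weights)) / w_sum
    # weighted variance of the Overview-level rates around the weighted mean
    var = sum(w * (r - mean) ** 2 for r, w in zip(rates, weights)) / w_sum
    # effective sample size for unequal weights (Kish approximation)
    n_eff = w_sum ** 2 / sum(w ** 2 for w in weights)
    se = math.sqrt(var / n_eff)
    return mean, (mean - z * se, mean + z * se)

rates = [0.90, 0.75, 1.00]   # inclusion rate per Overview (illustrative)
weights = [10, 4, 6]         # number of included SRs per Overview (illustrative)
mean, (lo, hi) = weighted_mean_ci(rates, weights)
print(round(mean, 3))  # 0.9
```

Weighting by the number of included SRs means that large Overviews dominate the pooled rate, which matches the stated intent of aggregating over the entire dataset.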
Stratification
The goals of stratification were to generate hypotheses on the contexts in which recommendations based on our results would apply, and to identify situations in which retrieval would be inadequate and further searches might be necessary. Inadequate retrieval of SRs was defined as retrieval of less than 95% of included SRs. Thus, we: (1) investigated whether searching large numbers of databases offers added value, (2) gauged the magnitude of effect when using ‘other sources’ (as defined above in the section ‘Data collection’), (3) examined whether different Overview types require searching different electronic resources, (4) explored differences in database inclusion between healthcare interventions and other fields of healthcare research, and (5) evaluated the role of specialist databases when such databases exist in the area of the Overview topic, using PsycINFO as an example.
To answer the above objectives (1) to (5), respectively, exploratory analyses were performed for the following strata: (1) number of databases searched (1-3 / ≥4), (2) other sources searched (yes / no), (3) Overview type (Cochrane Overview, HTA report, or non-Cochrane Overview), (4) intervention/non-intervention Overviews, and (5) mental health- or psychology-related topic (yes/no).
Stratification analysis was performed only for strata containing ≥3 Overviews. For each stratification analysis, the weighted mean inclusion rate with 95% CI was calculated for combinations of the reference database and reference checking with each of the other databases. For analyses with two strata, the weighted difference in means and the corresponding p-value were calculated using a two-sample weighted t-test (Welch) computed in R version 3.5.1 (2018-07-02) using the R package ‘weights’ [28, 29]. The significance level for each individual test, αi, was adjusted for multiple testing using the Bonferroni correction, i.e. αi = αg/n ≈ 0.0017 for a global significance level of αg = 0.05 and n = 30 tests (6 databases × 5 stratified analyses).
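The Bonferroni adjustment above is simple arithmetic: the global significance level is divided by the number of tests, and each stratified comparison is judged against the resulting per-test threshold. A minimal sketch (the p-value is illustrative):

```python
alpha_global = 0.05
n_tests = 30                          # 6 databases x 5 stratified analyses
alpha_i = alpha_global / n_tests      # per-test significance level, ~0.0017

# a stratified comparison counts as significant only below the adjusted threshold
p_value = 0.003                       # illustrative p-value from a weighted Welch t-test
significant = p_value < alpha_i
print(round(alpha_i, 4), significant)  # 0.0017 False
```

Note that a p-value of 0.003, nominally significant at 0.05, fails the adjusted threshold, which is the intended effect of controlling the family-wise error rate across the 30 tests.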
Qualitative analysis of missed SRs
All SRs that were not included in a combination of the reference database, reference checking, and the best additional database were analysed qualitatively. Features investigated were the topics of these SRs and whether they were located on websites, included in any of the other five databases investigated in this study, listed in a publisher’s database (e.g. ScienceDirect, Wiley Online Library, SpringerLink, De Gruyter), or found in Google Scholar.