The Wales COVID-19 Evidence Centre (WCEC)
The WCEC brought together a unique collaboration of established research groups within Wales with expertise in conducting rapid reviews, systematic reviews, health technology assessments, economic evaluations, and the analysis of linked population-level routinely collected data. The WCEC operated through a core management team working closely (using videoconferencing) with the Collaborating Partner research teams (Box 2).
The WCEC undertook evidence reviews to address knowledge gaps and the specific needs of government, healthcare, public health and social care stakeholders in Wales. The evidence produced was designed to be of immediate use to decision-makers and to have a direct impact on decision-making, patient and client care, reducing inequalities and identifying future research needs. The work of WCEC was delivered through four main processes: Question Prioritisation Process, Evidence Review Process, Knowledge Mobilisation Process, and Stakeholder Engagement (including public involvement). This paper focuses on the Evidence Review Process, and the Stakeholder Engagement that supports this. The processes for prioritising and setting research questions, and knowledge mobilisation, are described in more detail elsewhere [8, 9].
BOX 2: Wales COVID-19 Evidence Centre (WCEC) Collaborating Partners
WCEC operated through a core management team working closely with six Collaborating Partners:
• Health Technology Wales (HTW) - http://www.healthtechnology.wales/
• Wales Centre for Evidence-Based Care (WCEBC) - A JBI Centre of Excellence - https://www.cardiff.ac.uk/research/explore/research-units/wales-centre-for-evidence-based-care
• Specialist Unit for Review Evidence (SURE) centre - https://www.cardiff.ac.uk/specialist-unit-for-review-evidence
• Public Health Wales Evidence Service - https://phw.nhs.wales/services-and-teams/observatory/
• Bangor Institute for Health & Medical Research (BIHMR) - Centre for Health Economics and Medicines Evaluation - https://cheme.bangor.ac.uk/research/whess.php.en - in conjunction with Health and Care Economics Cymru (HCEC) - https://healthandcareeconomics.cymru/
• Population Data Science - SAIL Databank - https://saildatabank.com/
The core management team comprised a Director and leads for each of the four processes: prioritisation process, evidence review, knowledge mobilisation and impact, and stakeholder engagement. It worked closely (and remotely) with a Public Partnership Group and members of the Welsh Government’s Technical Advisory Cell and Technical Advisory Group (TAC/TAG – sometimes referred to as “Welsh SAGE”) [10]. There was also a methodology subgroup, with representation from all Collaborating Partner groups, which met online fortnightly to provide methodological support and share good practice. Members of the Public Partnership Group (PPG) provided public involvement in each review and were involved in the knowledge mobilisation process.
Development of the WCEC evidence review process
The WCEC sought to develop an evidence review process that could deliver robust reviews within four to eight weeks, but with the flexibility to provide decision-makers with a credible summary of the available evidence within days or weeks when needed. We considered the range of rapid evidence review products identified by Hartling et al (2015) (Box 1), but we were also mindful to avoid having too many types of outputs, as this could be confusing to stakeholders [11]. We developed a phased reviewing approach [12, 13] that utilised three types of rapid review products: a rapid response product (called a Rapid Evidence Summary), an evidence inventory product (called a Rapid Evidence Map), and a Rapid Review. These are described in more detail in Table 1.
Best Practice Framework
Our overall process and methods development were informed by guidance for conducting and reporting rapid evidence review products [7, 11–18]. The methods selected for our Rapid Reviews were adapted according to the topic area, type of review question, the extent of the evidence base, urgency of the questions, and the needs of the decision-makers. To support the Collaborating Partner review teams, a Best Practice Framework (Table 2) was developed with recommendations from key sources for methodological shortcuts that could be applied at each stage of the rapid review.
Three key guidance documents summarising recommendations for best practice in conducting a rapid review were prioritised for developing the Framework [7, 13, 18]. We also referred to two existing guidance documents, developed and already in use by two Collaborating Partners, for conducting rapid reviews [11] or rapid health technology assessments [19].
The Review Process
The phased review process is outlined in Figure 1 and described in more detail in the next section. Each review was conducted by a dedicated Collaborating Partner review team supported by the core management team. A continuous and close relationship with the decision-maker and relevant stakeholders (including Public Partnership Group representation) was facilitated by three or more online stakeholder meetings.
Question prioritisation process
The review question(s) were submitted by stakeholders (e.g. policymakers/advisors, health and social care leads, public, academic/research groups) and prioritised during a formal consultation process, which is reported in detail elsewhere [9]. Urgent questions could also be submitted directly by policymakers or TAC/TAG members and ‘fast-tracked’ onto the WCEC work programme. Key stakeholders, including those submitting the question and members of the Public Partnership Group (PPG), provided expert (topic and methodological) input throughout the evidence review process.
Review Process Phase I: Rapid Evidence Summary (RES)
In Phase I, the review question was allocated to an appropriate WCEC Collaborating Partner (review) team, and an introductory stakeholder meeting organised. This early phase comprised preliminary work to inform the Phase II Rapid Review. However, it was adaptable to produce a final rapid response product (Table 1) within weeks if no Rapid Review was planned.
Introductory stakeholder meeting
The stakeholder meetings included members of the core management team and WCEC public partners, the review team, and relevant stakeholders. The introductory meeting was used to confirm the decision problem or review question (including key outcomes), clarify how the evidence would be used, and confirm the required timelines. It was also an opportunity for stakeholders to notify the review team of potentially seminal research or useful grey literature sources. Where an ill-defined decision problem or question had been submitted in the Prioritisation Process, this meeting also served to develop a structured review question.
Preliminary search of the literature
The review team then conducted a scoping search and a scan of key COVID-19 resources. This was supported by a tailor-made Resources list, including both COVID-19-specific and generic registries and databases of secondary research (Supplementary information, Additional file 1). This preliminary review of the literature enabled the reviewers to familiarise themselves with the topic area, check that the research question had not already been addressed by other groups or evidence centres, identify the extent and type of available evidence, and inform the methods and design of the Phase II rapid review (and develop the protocol). The searches focused on identifying robust secondary or tertiary research. Primary studies were considered if no relevant reviews were identified.
Output from Phase I
The output from this first phase was presented as an annotated bibliography with key findings, using a template to support the efficient and transparent reporting of what was done and found. When there was a high-priority urgent decision to address, or insufficient evidence for a rapid review, the Rapid Evidence Summary was published as the final output for the stakeholder: for example, our review of ozone machines and other disinfectants in schools (RES_23) [20].
If an up-to-date, robust and directly relevant evidence review or clinical guideline was identified during the preliminary searches, then a critical appraisal and summary of that review was conducted: for example, our review of vaccination in pregnant women (RES_24) [20]. If multiple systematic reviews were identified, then a review of existing reviews was considered for the subsequent Phase II Rapid Review: for example, in our review of innovations to support patients on elective surgical waiting lists (RR_30) [21] and our review of interventions to recruit and retain clinical staff (RR_28) [22].
Intermediate stakeholder meeting
If progressing to a Rapid Review, the findings of the initial phase were presented at a second, intermediate, stakeholder meeting. Collaborative discussions refined the review question, drafted the eligibility criteria, and decided on the overall reviewing approach to be used. Stakeholders identified important contextual issues and known equality or economic impacts for consideration in the proposed review.
Review Process Phase II: Rapid Review
Phase II comprised a Rapid Review (RR) of the evidence, usually completed within one to two months. This could be supplemented or substituted by a Rapid Evidence Map (REM). The Rapid Review delivered a synthesis or meta-synthesis of the evidence, whilst the Rapid Evidence Map provided a description of the available literature (Table 1). Both were based on a comprehensive search strategy and a pre-defined protocol.
Rapid Evidence Map
For broad or complex review questions, a Rapid Evidence Map could be conducted, providing an inventory of the nature, characteristics, and volume of available evidence for the particular policy domain or research question. The Rapid Evidence Map was based on abbreviated systematic mapping [23] or scoping review [24] methodology, depending on the type of review question: for example, our review of recruitment and retention of NHS workers [20]. Stakeholders could also request a Rapid Evidence Map as the intended final rapid product: for example, in our review of inequity experienced by the LGBTQ+ community [20].
Rapid review
Our Rapid Reviews used an adapted systematic review approach, with some review components abbreviated or omitted to generate the evidence to inform stakeholders within a short time frame, whilst maintaining attention to bias. We followed methodological recommendations and minimum standards for conducting rapid reviews [7, 13, 18]. The approach and decisions made on tailoring the rapid reviews were the responsibility of the individual review teams, according to the type of question, research volume and time frame, in discussion with core management team members and expert stakeholders.
Output from phase II
The templates for our final Rapid Review and Rapid Evidence Map reports were based on recommendations for reporting evidence reviews for decision-makers [11, 16]. These incorporate a two-page “top line summary”, with the results and recommendations for practice presented up front and the details of the methods used at the end of the report.
Our review reports were made available via a library on the WCEC website [20]. From May 2022, reports were published on a pre-print server and allocated a DOI. Thus, reports could be identified readily in database searches, and other review teams could identify potential duplicate review questions early on. A short lay summary and links to the pre-print server were included in the WCEC library. The ongoing WCEC work programme, which included questions in progress, scheduled, and completed, was also published on the website.
Knowledge Mobilisation Process - planning pathway to impact
Final stakeholder meeting
A final stakeholder meeting was used to present the findings of the review to the stakeholders, address any queries, identify the policy and practice implications, and support the development of a knowledge mobilisation plan.
Appraisal of the overall review process and rapid review methods
We appraised our overall approach and rapid review methods to reflect on our experience of implementing the WCEC review process and to identify key learning points.
We compared our methods and practice with the recommendations of Garritty et al. [7], Tricco et al. [13], Plüddemann et al. [18], Mann et al. [11], and Health Technology Wales [19], as the principal resources for our own Best Practice Framework (Table 2). We also compared our rapid review methods with the array of methodological shortcuts recommended in published guidance developed or used across rapid review centres and organisations, as reviewed by Speckemeier et al. [25] (Table 3). That scoping review included guidance for any type of rapid evidence product, with completion times ranging from a day to over six months. Its output included a table summarising the range of recommendations, or methodological shortcuts, provided in the guidance, and the frequency with which they were reported. However, the authors did not indicate which recommendations were optimal.
The approach used for appraising our rapid review methods
We assessed whether our reviews, mainly completed within two months, aligned with our Best Practice Framework, and whether methods aligned across our different Collaborating Partner groups. Findings were presented at a methods subgroup meeting and discussed to reflect on what worked well or could be improved (and how).
As part of this appraisal, key data from all Rapid Reviews and Rapid Evidence Maps completed up until March 2023 were extracted. These included data on the search date, overall reviewing approach, limits applied, sources searched, volume of research identified, study selection process, data extraction process, and approach used for quality assessment. An important consideration here is that the approach used depended on the research question being addressed, the volume and type of research available, and the timeframe within which the review was conducted.
Where the methods of individual reviews met or exceeded a recommendation in the Best Practice Framework, the text was highlighted green; where a recommendation was partially or not always met, the text was highlighted orange; and where our methods consistently did not meet the recommendation, the text was highlighted red. We did not seek to identify individual failures or the frequency with which our methods fell short of the recommendations, but rather to reflect on our overall process and methodological approach and identify what changes could be made. The colour-coded Framework table was presented at a methods group meeting, and participants were given a copy of the data extraction table summarising the individual reviews.