Overview
CogTale forms an element in the research roadmap of the Cognitive Interventions Design, Evaluation, and Reporting (CIDER) group, an international team dedicated to advancing the field of COT research in the older adult population (https://cider.med.umich.edu; 11). Key objectives of CIDER are promoting the methodological rigour of interventions and trials, accelerating evidence synthesis, and disseminating reliable and responsible information to the general public, researchers, and clinicians.
The Melbourne eResearch Group (MeG; https://eresearch.unimelb.edu.au) implemented the CogTale platform, which offers a combination of both public and restricted-access services. The restricted-access section is a custom-built web application supporting a comprehensive pipeline from data entry to displayed outputs.
We established CogTale as a three-tier web application with a single-page React-based user interface. The backend is a NodeJS application that provides a representational state transfer (REST)-based application programming interface (API) supporting job execution, messaging, and other business management services. The analysis sub-module, implemented in R, supports statistical analyses and report generation. Record and file data are persisted to a MongoDB database. Together, the three tiers support the identification and queueing of targeted articles, data extraction, data analysis and reporting, quality assurance, and administration of the entire review process. The application is integrated with a WordPress-based public-facing website.
Application Inputs
Eligible studies for inclusion on the CogTale platform are articles reporting controlled trials of cognition-oriented interventions targeting older people on the continuum of cognitive health and impairment, ranging from cognitively unimpaired individuals to people with a formal diagnosis of dementia.
Trained coders extract detailed design and methodological data from each eligible trial, guided by a coder manual, into a data entry form that also includes all relevant means and standard deviations for every measure, condition, and time point in a trial. We organised items on the data extraction form based on the following study aspects:
- Methodological/design data
  - Setting and design of the study
  - Primary and secondary outcomes
  - Interventions: nature and dose
  - Intervention targets and components (experimental and control)
  - Populations and sub-populations included
  - Measures used
  - Statistical analyses performed
- Numerical data/findings
  - Sample size, means, and standard deviations in relation to each measure, condition, and time point reported in the study
We designed the coder dashboard such that half the screen contains the data extraction form, while the other half contains several tabs between which coders can toggle: a portable document format (PDF) viewer of articles related to the trial being coded, the analysis being conducted on the data (see below), a discussion panel, and the coding manual.
For each data extraction item, coders can choose from several response alternatives (termed “vocabularies”), with some options being mutually exclusive and some allowing multiple vocabularies. Coders also have the option of adding an additional response if deemed appropriate. This is automatically added to the list of vocabularies for that item. A sample subset of data extraction items from one section is shown in Figure 1.
Upon completion of the data extraction process, coders are required to write a plain language summary highlighting the main findings, strengths and limitations of the study.
There are five possible study statuses: queued (before coding has commenced), in progress (when the study has been assigned to a coder), requirements met (when all required sections have been completed by the coder), revisions required (when an administrator has reviewed the coding and decided that it is incomplete or needs revision), and verified (when an administrator has approved the coding of the study). This ensures that data extraction undergoes systematic monitoring and evaluation by research personnel with appropriate expertise.
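The five statuses form a simple workflow. A minimal sketch of that workflow as a state machine is shown below; the transition rules are inferred from the description above and are illustrative only (the platform may permit other transitions).

```python
# Illustrative state machine for the coding workflow; transitions are
# inferred from the prose description, not taken from the platform's code.
TRANSITIONS = {
    "queued": {"in progress"},                        # assigned to a coder
    "in progress": {"requirements met"},              # coder completes all sections
    "requirements met": {"revisions required", "verified"},  # administrator review
    "revisions required": {"requirements met"},       # coder revises and resubmits
    "verified": set(),                                # terminal state
}

def advance(status: str, new_status: str) -> str:
    """Move a study to a new status, rejecting invalid transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Cannot move from {status!r} to {new_status!r}")
    return new_status
```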
Application processing/algorithms
Methodological quality indices
The detailed data extracted from each trial are used to calculate three common indices reflecting the methodological quality of the study: the PEDro scale (14), the Jadad rating (15), and the Cochrane Risk of Bias tool (16). The extracted data determine the score of each item included in each of these scales, such as blinding of participants and outcome assessors, the method of randomisation, and retention rates, amongst other factors. Not every item on a calculated methodological quality index corresponds to a single item on the data extraction form; at times a combination of responses is used to determine the score on a particular item. We developed and refined the algorithms encoding these scoring rules iteratively to achieve close agreement with manual scoring of the above indices (available from the authors upon request).
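To illustrate how a combination of extraction responses can map onto a single quality-index item, consider the following Python sketch of a PEDro-style concealed-allocation item. The field names and response values are hypothetical; the platform's actual scoring algorithms are available from the authors.

```python
# Hypothetical sketch only: scoring one PEDro-style item by combining two
# data extraction responses. Field names and vocabularies are invented.
def score_concealed_allocation(extraction: dict) -> int:
    """Return 1 if the trial reports concealed random allocation, else 0."""
    randomised = extraction.get("allocation_method") in {
        "computer-generated sequence", "sealed envelopes", "coin toss",
    }
    concealed = extraction.get("allocation_concealed") == "yes"
    # The item is satisfied only when both conditions hold.
    return 1 if (randomised and concealed) else 0
```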
Effect estimates and confidence intervals
Once the ‘Results’ section of the data extraction form is complete, estimates of treatment effect (Hedges’ g; 17) along with their confidence intervals are automatically calculated for each measure and time point. The treatment effect estimates are based on the standardised mean differences between experimental and control groups. For all outcomes, a positive effect size favours the experimental condition, whereas a negative effect size favours the control condition.
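The calculation follows the standard Hedges' g formulas (pooled standard deviation, small-sample correction, and a normal-approximation confidence interval). A minimal Python sketch of these textbook formulas follows; the platform's analysis module is implemented in R, so this is illustrative only.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g with small-sample correction and a 95% normal CI.

    Group 1 is the experimental group, so a positive g favours the
    experimental condition, matching the sign convention in the text.
    """
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)  # pooled SD
    d = (m1 - m2) / sp                    # Cohen's d
    j = 1 - 3 / (4 * df - 1)              # small-sample correction factor
    g = j * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(j**2 * var_d)
    return g, (g - 1.96 * se, g + 1.96 * se)
```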
Meta-analysis
The treatment effect estimates from multiple studies in the platform can be synthesised through a quantitative meta-analysis carried out according to the user's criteria.
Based on recent recommendations (18), a pooled Hedges' g is calculated for each outcome using the random-effects model with the restricted maximum likelihood (REML) heterogeneity estimator (τ2), in conjunction with the Hartung–Knapp–Sidik–Jonkman (HKSJ; 19, 20) method to calculate the corresponding confidence intervals. The REML estimator outperforms other heterogeneity estimators, and the HKSJ correction is not influenced by the magnitude of τ2 or the choice of its estimator and is insensitive to the number of studies (18).
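Given a τ2 estimate (e.g., from REML), the pooled estimate and its HKSJ interval follow standard formulas: inverse-variance weights of 1/(vi + τ2), a weighted mean, and a t-based interval with a variance rescaled by the weighted residual sum of squares. The Python sketch below is a textbook illustration (the platform itself uses R); the t quantile is supplied by the caller.

```python
import math

def hksj_pooled(effects, variances, tau2, t_crit):
    """Random-effects pooled estimate with an HKSJ confidence interval.

    `tau2` is the between-study heterogeneity, assumed already estimated
    (e.g., by REML); `t_crit` is the two-sided t quantile with k-1 df.
    """
    w = [1 / (v + tau2) for v in variances]          # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    k = len(effects)
    # HKSJ scaling: weighted squared deviations about the pooled mean
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects)) / (k - 1)
    se_hk = math.sqrt(q / sum(w))
    return mu, (mu - t_crit * se_hk, mu + t_crit * se_hk)
```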
Heterogeneity in effect estimates across studies is tested using the Q-statistic (with p < .10 indicating significant heterogeneity), and its magnitude is quantified using the I2 statistic, which describes the proportion of total variation in study effect size estimates that is due to heterogeneity. I2 is independent of the number of studies included in the meta-analysis and of the metric of the effect sizes (21). As the Q-statistic has low power when the number of studies is small (22), 95% prediction intervals are also calculated to quantify the extent of heterogeneity in the distribution of effect sizes (23). The prediction interval estimates the range within which 95% of the true effect sizes are expected to fall. Further information about the meta-analysis is available from https://cogtale.org/cogtale-report-statistical-information.
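These heterogeneity quantities have simple closed forms: Cochran's Q is the weighted sum of squared deviations about the fixed-effect mean, I2 = max(0, (Q − (k−1))/Q) × 100, and the 95% prediction interval is μ ± t(k−2) × sqrt(τ2 + SE2). A small illustrative Python sketch (the platform computes these in R):

```python
import math

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic using fixed-effect weights."""
    w = [1 / v for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return q, i2

def prediction_interval(mu, se, tau2, t_crit):
    """95% prediction interval: mu +/- t_{k-2} * sqrt(tau^2 + se^2).
    `t_crit` is the two-sided t quantile with k-2 degrees of freedom."""
    half = t_crit * math.sqrt(tau2 + se**2)
    return mu - half, mu + half
```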
Grading of the evidence
In relation to each outcome included in the quantitative meta-analysis, the platform automatically calculates a measure of certainty in the evidence, that is, an estimate of how confident we can be in the result.
The certainty of each outcome is rated as 'Low', 'Moderate', or 'High', determined by a unique algorithm based on a combination of several factors: a) the heterogeneity of the effect (e.g., the I2 statistic), b) the methodological quality scores of the synthesised studies (e.g., the PEDro score), and c) the combined sample size. Findings associated with low confidence are likely to change with the addition of more relevant and high-quality studies, whereas findings associated with high confidence are more likely to remain the same if new studies were added to the analysis.
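The grading idea can be illustrated with a toy rule that combines the three stated inputs. This is a hypothetical sketch only: the thresholds and point scheme below are invented for illustration and are not CogTale's actual algorithm.

```python
# Hypothetical illustration only: invented thresholds combining the three
# stated inputs (heterogeneity, quality, combined sample size) into a rating.
def grade_certainty(i2: float, mean_pedro: float, total_n: int) -> str:
    points = 0
    if i2 < 50:          # low heterogeneity across studies
        points += 1
    if mean_pedro >= 6:  # good average methodological quality
        points += 1
    if total_n >= 300:   # reasonably large combined sample
        points += 1
    return {0: "Low", 1: "Low", 2: "Moderate", 3: "High"}[points]
```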
Evidence Summaries
Based on a combination of the effect size, quality rating, and certainty in the accuracy of the finding, a recommendation may also be included in the plain language summary of the meta-analysis result. For example, where the overall quality of the included studies is high, and the certainty in the accuracy of the finding is moderate to high, a recommendation may be made regarding the intervention effect: the treatment may be deemed unlikely to be effective, or the intervention may be recommended with a high degree of confidence.
Where there is low confidence in the accuracy of the estimate, no recommendation is made for or against the intervention. However, a caveat that the advice may change with the accumulation of further high-quality evidence is included.
User interface
The website contains several general features (About, Resources, News, Twitter feed, study and report metrics, etc.). The website landing page is also the main gate to accessing the application.
A ‘Login’ function permits the creation of an account or logging in to an existing one. Once logged in, users can manage information on their profile and access the ‘Explore’ function from the user menu, from where they can browse all studies stored on the database. A section of the landing page is displayed in Figure 2.
To date, we have specified four account types, each with different permissions, and all free of charge.
A ‘General User’ account is a general profile, available for anyone who wishes to bookmark or export studies, or to conduct meta-analysis procedures and receive associated reports.
Users with ‘Coder A’ accounts have permission to extract all relevant trial data into the CogTale platform, but they cannot self-assign studies; an administrator must assign these. Further, questions in the data extraction form that require greater judgement or specialist knowledge may be disabled for this user type, and a more experienced coder (i.e., a Coder B or administrator) must complete them. ‘Coder B’ accounts can add new studies, self-assign studies for coding, and extract all relevant data in the question set. Finally, ‘Administrator’ accounts grant permission to establish coding accounts as well as to perform any of the tasks mentioned thus far. Following a review process, administrators can also change the status of a study to ‘verified’. They are also able to add or change questions and response alternatives in the data extraction form. Thus, the CogTale data extraction function is flexible and can be adapted to changes in research conditions or attributes that may be reported.
Application functions
Below, we summarise the main application-related functions available within the user interface:
Users can browse or perform searches to locate specific studies or groups of studies in the database.
Users may simply browse the catalogue by navigating through the list or by entering the name of the author, the title, or the journal in which the study was published. The available studies may be viewed either as a list or in tabular form, where users can sort them on several aspects, including author name, year of publication, trial design (e.g., randomised controlled trial), population of interest (e.g., dementia), or intervention (e.g., cognitive training). Two sliding bars further allow users to filter studies based on a selected range of publication years and methodological quality scores.
Advanced search options permit users to specify numerous other search properties, broadly corresponding to all main sections in the data extraction form, such as the delivery format or setting of the intervention, the type of control group, or the sample size of a study, amongst other properties.
Users can export the results retrieved through the search by clicking on the ‘Export’ function, where tab-separated values (TSV) format allows transfer of the information from the database to a spreadsheet. Citation information can also be downloaded in research information system (RIS) format for import into reference/citation manager applications, like Zotero, Citavi, Mendeley, and EndNote.
- Single study results page
By clicking on a given study, users can view detailed information about it. The single study results page displays the status of the study in the pipeline, which (as noted above) can range from ‘in progress’ to ‘verified’, depending on its stage in the coding process. For each study in the database, the single study results page displays citation information and the abstract, an expandable table of methodological quality scores (item level and total), effect size tables (for each measure, time point, and population), and a plain language commentary or summary provided by the coder.
Users can select multiple studies and submit them to a meta-analysis by clicking on the ‘Analyse’ button, which will in turn bring up the meta-analysis wizard (MAW). The MAW allows users to define the scope of the meta-analysis by specifying populations (e.g., people with mild cognitive impairment), targets (e.g., study participants, caregivers, or clinicians), and broad as well as specific outcomes of interest (e.g., global cognition, delayed recall). Figure 3 displays an example of the selection of outcomes of interest for the generation of a given meta-analysis.
Only studies that include relevant data (i.e., means and standard deviations for each group) can be pooled together in a meta-analysis. Therefore, if a study is retrieved by the search but does not have the data required to compute effect sizes, CogTale automatically excludes it from the analysis. For any given outcome or population of interest, data can be pooled if there are at least three studies that provide data for that effect estimate. By default, all available outcomes and populations that meet the minimum criteria are selected for meta-analysis. However, users can customise the search to remove any outcome and any population from the analyses if desired.
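The pooling rule described above can be sketched as a simple filter: a study contributes only if it reports the statistics needed for an effect size, and an outcome is pooled only if at least three such studies remain. The field names in this Python sketch are hypothetical, not the platform's actual schema.

```python
# Sketch of the stated eligibility rule; record field names are invented.
REQUIRED = ("mean_exp", "sd_exp", "mean_ctl", "sd_ctl")

def poolable(studies, outcome):
    """Return the studies eligible for pooling on `outcome`,
    or an empty list if fewer than three qualify."""
    eligible = [
        s for s in studies
        if all(k in s["outcomes"].get(outcome, {}) for k in REQUIRED)
    ]
    return eligible if len(eligible) >= 3 else []
```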
Application outputs
- Single study results page
- Meta-analysis report
Once a meta-analysis is specified and submitted, users receive a comprehensive report containing tables, figures, and results summaries as an email attachment. Table 1 summarises the sections included in a report, Figure 5 presents an example of a section from the report, and Figure 6 presents an example of the figure included in a report, showing, for each outcome, the effect estimate and confidence interval together with the certainty of the finding.
The pooled effect size is interpreted in-text using three different methods. Firstly, the measure of non-overlap (U3; 24) gives the percentage of cases in the experimental condition that exceed the mean of the control condition. Secondly, the percentage of overlap between the control and experimental condition distributions is reported (25). Lastly, the probability of superiority (also known as the common language effect size; 26) is reported, which gives the probability that a person picked at random from the experimental condition will have a higher score than a person picked at random from the control condition. For example, an effect size of Hedges’ g = 0.50 would be presented in the report as:
The treatment effect found suggests that 69% of the treatment group will score above the mean of the control group, that the distributions of the two groups will overlap by 80%, and that there is a 64% chance that a person randomly picked from the treatment group will have a higher score on the specific outcome than a person randomly picked from the control group.
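Under the usual assumption of normal distributions with equal variances, all three interpretive statistics follow from the standard normal CDF Φ: U3 = Φ(g), overlap = 2Φ(−|g|/2), and probability of superiority = Φ(g/√2). A minimal Python sketch reproduces the figures in the example above:

```python
from statistics import NormalDist

def interpret_effect(g: float):
    """U3, percent overlap, and probability of superiority for a
    standardised mean difference, assuming equal-variance normals."""
    phi = NormalDist().cdf
    u3 = phi(g)                      # % of treatment group above control mean
    overlap = 2 * phi(-abs(g) / 2)   # overlapping proportion of distributions
    ps = phi(g / 2 ** 0.5)           # P(random treated > random control)
    return round(u3 * 100), round(overlap * 100), round(ps * 100)

interpret_effect(0.50)  # -> (69, 80, 64), matching the example report text
```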