Cognitive processes such as working memory, processing speed, attention, and language functioning all decline during healthy ageing (Reuter-Lorenz et al., 2021; Salthouse, 2010; Segaert et al., 2018). As life expectancy in developed countries continues to increase (Roser et al., 2013), mitigating age-related cognitive decline has become an increasingly popular field of research. In the present study, we examined whether computerised cognitive training improves performance across multiple domains of cognition.
A popular recent method of delivering cognitive training is through commercially available brain training programmes. Applications such as Lumosity (Lumosity, 2023), Peak (Peak, 2023) and BrainHQ (BrainHQ, 2023) are commercially advertised as training programmes that will improve cognitive ability and delay cognitive decline. These applications are easy to use, relatively affordable, adaptive (increasing in difficulty with improved performance, a key requirement for cognitive training programmes to work; Brehmer et al., 2012) and include training games that cover a variety of cognitive processes such as short-term memory, language, attention, and processing speed.
Support for the effectiveness of brain training programmes in healthy older populations is mixed. There is evidence from meta-analytic studies that computerised cognitive training or brain training leads to small but significant improvement in skills such as working memory, processing speed, and visuospatial skills in healthy older adults (Kueider et al., 2012; Lampit et al., 2014). Conversely, a recent meta-analysis has found no convincing improvement after accounting for publication bias (Nguyen et al., 2022). Older adults have higher expectations of brain training compared to younger adults (Rabipour & Davidson, 2015), and they could arguably benefit most from their use, if effective. Whether brain training programmes lead to tangible improvements in cognitive abilities in healthy older adults therefore warrants further investigation.
Some of the inconsistencies found in cognitive training research more broadly can be attributed to methodological differences (Green et al., 2014; Noack et al., 2014; Simons et al., 2016). Sample sizes vary substantially and are often limited; 50% of studies in a 2014 review of transfer effects in cognitive training studies had fewer than 20 participants in each group, and 90% had fewer than 45 in each group (Noack et al., 2014). Training duration is also often limited; 50% of studies reported 8 hours and 20 minutes of training or less, with the majority (90%) reporting less than 20 hours in total (Noack et al., 2014). Another concern is the size and content of the test battery (Green et al., 2014). Many studies, especially early studies when cognitive training was in its infancy, used a small test battery (i.e., one test per cognitive function) to assess cognitive outcome measures. However, to assess valid training benefits, the outcome measures need to be chosen such that they assess changes across the construct rather than the individual tasks. For example, executive function would ideally not be assessed by a single measure: executive function itself comprises smaller subprocesses (inhibition, shifting, and updating; Sandberg et al., 2014), so one outcome measure that focuses on one of those processes is not enough to encompass executive function as a whole. Moreover, if cognitive training includes a specific task that trains, for example, working memory (e.g., an n-back task), then its true benefits can only be assessed through performance on a different task that measures skills within this domain (i.e., a task that also assesses working memory, such as a digit span task) to rule out that improvements are mere practice effects. A final consideration is the choice of control group (Simons et al., 2016).
The gold standard is to use an active control group that mimics the intervention as closely as possible, while leaving out the ‘active ingredient’ of the training. However, the very nature of cognitive training programmes makes this difficult. The type of control group in published studies therefore varies, often including passive control groups, and not always accounting for placebo effects, motivation, or cognitive demands (Simons et al., 2016). Active control groups can be divided further into ‘active-ingredient’ controls and ‘similar-form’ controls (Masurovsky, 2020). ‘Active-ingredient’ control groups are identical in every aspect apart from the ‘active’ ingredient, but these are difficult to implement and in practice are rarely used. ‘Similar-form’ active controls are much more common, mimicking aspects of the training but differing in a few ways. ‘Similar-form’ control groups are still considerably more suitable than passive or no-contact control groups (Masurovsky, 2020).
We note that among the above set of issues, a key concern, but one most often overlooked, is the need to establish evidence of transfer effects (the benefits of the training ‘transferring’ to other, untrained, cognitive tasks), as opposed to practice effects (improvements within the training tasks themselves). Transfer effects can be categorised by how similar they are to the trained cognitive domain (Sala et al., 2019). Near transfer refers to skills generalising to similar domains (e.g., working memory training transferring to related but untrained working memory tasks), while far transfer refers to cognitive domains that are weakly related, or not related at all, to the trained domain (e.g., working memory training transferring to language or executive control benefits; Sala et al., 2019). The more shared features there are between domains, the nearer the transfer effects (Sala et al., 2019). Of course, the ultimate aim of brain training programmes is that training of specific cognitive processes leads to improvements across cognitive domains (Stojanoski et al., 2018). There is some evidence that brain training can lead to transfer effects (McDougall & House, 2012); however, there are also cases where no transfer benefits are found at all (Kable et al., 2017; Stojanoski et al., 2018). Even when papers report significant positive effects of brain training programmes on cognition in healthy older populations, the effects are often driven by improvements on very near transfer tasks (Lee et al., 2020), with little to no evidence of far transfer. Furthermore, a recent meta-analysis of brain training randomised controlled trials with older adults found small but significant transfer to some cognitive domains; however, most effects were no longer significant once publication bias was taken into account (Nguyen et al., 2022). There are also cases where previously reported effects have perhaps been exaggerated.
Brain training research sometimes describes improvements in trained effects (improvement in performance within the programme) and reports these as an improvement in cognitive ability (Bonnechere et al., 2021). Instead, these are in fact just practice effects and do not necessarily entail improvements in cognitive function, since transfer effects (near or far) were not established or were not even assessed. Transfer effects are essential if a training programme is to be effective and wide-reaching, especially in ageing populations, but concrete evidence for them is often lacking.
Due to these inconsistencies and the controversy surrounding brain training programmes and their effects, there is a need for robust and rigorous research to assess their efficacy. An extensive review paper has given recommendations for how research into brain training programmes should be conducted and published (Simons et al., 2016). The researchers recommended a large sample size with random allocation to groups and blinding of conditions if possible. An appropriate active control group should be utilised, meaning a control group that correctly mimics the level of engagement of the intervention, but that theoretically will not result in improved cognitive performance. This allows placebo effects to be controlled for, and any effects to be attributed to the ‘active’ ingredient of the training programme (Simons et al., 2016). Furthermore, interventions need to control for the expectations and motivations of both groups. Finally, the researchers recommend using appropriate outcome measures and a test battery with multiple tasks to measure each construct. Our study has incorporated each of these key recommendations.
To assess the possible cognitive benefits of the training we measured cognition across a wide range of domains. Among various possible cognitive functions of interest, working memory stands out as a commonly reported function. This is not only due to its consistent decline with age (Salthouse, 2010) but also because it serves as a foundation for many other cognitive abilities. Working memory training has shown convincing improvements in memory skills in older adults in recent years (Karbach & Verhaeghen, 2014). Another cognitive skill that exhibits consistent decline with age is processing speed, which has been effectively trained in older adults: the well-known ACTIVE study demonstrated significant and sustained improvements in processing speed over a two-year period (Ball et al., 2002). Although findings on attention skills are not always consistent, attention does undergo changes with age, and deficits in attention can impact daily life (Glisky, 2007), making it a worthwhile line of enquiry. Finally, language problems, specifically word finding difficulties, increase with age (Maylor, 1990; Segaert et al., 2018) and are among the deficits older adults most commonly report noticing.
In sum, cognitive training is an important field of research that needs methodologically sound experiments to assess whether brain training programmes are effective in healthy older adult populations. The aim of the current study was to do just that: to assess the efficacy of a commercially available adaptive brain training programme (Peak) for improving function in a range of cognitive domains, using a randomised controlled study with healthy older adults. We aimed to include a larger sample size than has been used in many previous cognitive training studies (Noack et al., 2014) and an appropriate active control group. We assessed cognitive functions known to decline with healthy ageing and used tasks that are commonly used in ageing research. These included working memory (Forward Digit Span task and visual N-back task), processing speed (Choice Reaction Time task and Letter Comparison task), attention (Attention Network Task) and language functioning (tip-of-the-tongue task). We hypothesised that we would find significant improvements within the training games (practice effects) for our intervention group. Whether we would find transfer effects from the brain training to other cognitive abilities was uncertain, though we anticipated any transfer effects would be to similar cognitive tasks (near transfer) rather than to dissimilar tasks (far transfer).