Aim
The aim of the Grant Program evaluation is to identify interventions that are acceptable, feasible and effective, have a positive impact on VFRs and their families, and provide value for money or return on investment for Movember and the Distinguished Gentleman’s Ride, and to assess the effectiveness of the Grant Program overall.
The research questions are:
1a). Is each Project effective in improving mental health and well-being for participating VFRs and/or VFR families?
1b). How sustainable are the individual Projects that have been shown to be effective, and how suitable are they for spread?
1c). What is the social return on investment (SROI) of the individual Projects?
2a). What is the performance of the individual Projects, and of the Grant Program as a whole, in improving mental health and well-being for participating VFRs and/or VFR families?
2b). What is the return on investment of the Grant Program, considering the mental health outcomes achieved against the investment made by Movember?
Setting
The evaluation involves 15 organisations and their 25 Projects across seven countries (Australia, Canada, Ireland, Germany, New Zealand, United Kingdom, United States), as well as the overall Grant Program.
Evaluation study design
The evaluation will take an ecosystem, real-world approach, which recognises and manages the context and complexities involved. The ecosystem approach establishes the systems and connections to gather, analyse, understand, and synthesise the monitoring and evaluation data to support decision-making (at the Project and Grant Program levels) [14]. Mixed methods and novel approaches will be used to evaluate the Projects and the Grant Program under real-world conditions. Also embedded in the evaluation design are methods to address the Grant Program’s key priorities of knowledge sharing and organisational learning for future grant programs, Projects, and interventions.
A multi-step process will be adopted and will be revised and adapted in collaboration with the Movember Monitoring, Research and Evaluation team. Figure 1 provides an overview of the evaluation project and program design. The evaluation has two key parts. One relates to the primary target audience (VFRs), via the evaluation of the effectiveness of each project; this part will be referred to as the Project evaluation. The other addresses the secondary target audience, the Grant Program and the Projects’ organisations, via the evaluation and comparison of the Projects and their impact, sustainability, and scalability; this part is referred to as the Grant Program evaluation. Whilst each part has its own targeted methodology, data collection and analysis, the two are not discrete, nor undertaken separately; rather, they may co-occur and inform each other throughout the evaluation. Knowledge sharing and organisational learning are continuous throughout the evaluation, with specific and targeted activities at key time points.
The Projects commenced at various times during 2022–2023. The evaluation takes a two-step approach. The first step, which commenced in 2022, involves co-design and consultation processes [15, 16] to prepare for the data collection to be used in the evaluation. The second step, collecting the data and undertaking the evaluation, commences in the second quarter of 2024.
STEP 1 – Co-design and preparation
Co-design improves ideas and knowledge (particularly about the project and participant needs) and embeds real-world information, consensus and knowledge sharing in the evaluation, for decision-makers, funders, organisations and participants [17]. The approach provides: insights relevant to the revision of the problem statement; examination of how the program addresses the aims; amendment or affirmation of the Kurt Lewin Theory of Change [18], including how the Grant Program sits within this theory; and a realist evaluation approach to amend or confirm the aims and the ‘what, when, who and how’ of monitoring and evaluation outcome measures and data collection. This approach will improve evaluation project management through better context-based knowledge and decision-making for more targeted monitoring and evaluation, and will enable continuous improvement [17].
Co-design techniques to be used in this preparation period include workshops for idea generation and consensus building, using nominal group techniques to identify a set of evaluation indicators, project measurement instruments, and protocols and project processes [19–22]. Co-design also includes collaboration and open communication, with two-way feedback in the lead-up to data collection; the use of Expert-based Collaborative Analysis (EbCA) [23], with the interpretation and data analysis phase using Knowledge Discovery from Data (KDD) [24]; digital platforms to visualise information; and final workshops for knowledge sharing and organisational learning [25–27]. When there is agreement, these elements are incorporated into the development of the data collection plan and the monitoring and evaluation plan. To support the quality of our co-creation approach, we use an operationalised version of the Dialogue, Access, Risk/Benefit and Transparency (DART) framework with the project partners [28]. This framework is part of the building blocks for developing the interaction, and its value, with the Projects and stakeholders.
The following activities are performed during this co-design and preparation period:
- Establish a data sharing agreement with each of the 15 organisations and confirm each project has ethical approval and consent forms that include sharing de-identified data with UC.
- Confirm the logic of change (inputs, throughputs, and outputs), target audience/s and sample size for each of the Projects.
- Gather information relevant to the evaluation, e.g. intervention mechanisms, time frames, recruitment period.
- Identify a set of common outcome indicators, and revise and affirm these with an independent international expert panel.
- Establish domains of inputs, throughputs and outputs across all Projects, and the relevant parameters of some domains (e.g. frequency of best-practice range), with a panel of Projects’ representatives and Movember.
- Affirm the time points at which the Projects will be collecting data, and the minimum participant numbers for the various analysis methodologies (e.g. statistical analysis, SROI).
- Collaborate with the Projects to support their data collection, provide support or advice as needed, and develop systems for secure data transfer within the seven country jurisdictions.
STEP 2 – Data collection and analysis methods
Data collection
Each of the organisations will undertake their own data collection for their Projects and administer outcome measures such as questionnaires. De-identified data from each project will be uploaded to a secure IT platform (AARNet FileSender) for the UC evaluation and stored electronically in secure, password-protected files and computers on firewall-protected servers.
Analysis methods
The complexity, dimensionality and non-linearity of mental health interventions, particularly multiple disparate interventions, necessitate the use of multiple analysis techniques to develop real-world decision support systems. The data collection and analysis tools and methods for this evaluation project are described in this section.
1. Global Impact Analytics Framework (GIAF)
The Global Impact Analytics Framework (GIAF) will be used to analyse the processes of impact of the VFR Projects. The GIAF is a novel approach and a toolkit for impact analysis developed by authors (LSC, SL, CW) in partnership with a consortium of international topic experts [29, 30]. The GIAF has previously been used to measure the impact of different Projects implemented in the real world [31–33]. The GIAF uses an ontoterminology approach by including a taxonomy of the process of implementation, an accompanying glossary of terms and a series of checklists to evaluate the different domains of the taxonomy. The methodology utilises qualitative and quantitative analysis. The GIAF toolkit has been used in various single country and international Projects [32–34].
The use of the GIAF allows measurement of the progress of each of the Projects on relevant domains in the different phases of implementation, and the identification of process gaps in implementation. The GIAF evaluation supports organisations, funders and researchers to learn from their experience, make improvements, and determine factors that may influence the sustainability and scaling up of the Projects [34]. The GIAF provides ladders (a type of ordinal scale) for the initiation phase (pre-implementation) in the domains of planning, engagement, co-creation and pre-readiness; for the maturity phase (early implementation) in the domains of readiness, usability, dissemination, adoption and penetration; and for the evolution phase (later implementation) in the domains of maintenance, diffusion, extension, expansion, diversification and exporting. Each GIAF ladder contains a checklist with multiple levels, which is used to document information and then rate the extent to which a project engaged in the activity or strategy. For example, the dissemination ladder has seven levels ranging from 0 = no dissemination to 6 = comprehensive dissemination. The real-world data used to rate the Projects on the relevant ladders are collected via various tools (spreadsheets, surveys, questionnaires, and intervention use) and opportunistic information. A standard vocabulary and a GIAF glossary of 180 terms were developed to describe and define the key concepts of each overarching indicator used in the various ladders. Once data are collected, the Projects will be rated by three independent raters on each of the ladders in the relevant domains.
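As a minimal sketch of this rating step, the following snippet aggregates hypothetical ladder scores from three independent raters into a median consensus and flags disagreement. The ladder names, scores, and aggregation rule here are illustrative assumptions for this protocol's design, not part of the GIAF toolkit itself.

```python
from statistics import median

# Hypothetical ratings from three independent raters for one Project.
# Each ladder is scored on an ordinal scale; following the dissemination
# example in the text, levels run from 0 (none) to 6 (comprehensive).
ratings = {
    "planning":      [4, 5, 4],
    "dissemination": [3, 3, 2],
    "adoption":      [5, 6, 5],
}

def consensus_scores(ratings):
    """Median rating per ladder, plus the rater spread (max - min)
    as a crude flag for ladders needing discussion among the three
    raters before a final rating is agreed."""
    out = {}
    for ladder, scores in ratings.items():
        out[ladder] = {
            "median": median(scores),
            "spread": max(scores) - min(scores),
        }
    return out

print(consensus_scores(ratings))
```

A larger spread (e.g. 3 or more levels on a 0–6 ladder) would prompt the raters to revisit the checklist evidence together before finalising that domain's score.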
Statistical Analysis
Descriptive statistical analyses will be performed to summarise the data and profile all variables. A set of indicators of sustained mental health and wellbeing outcomes used by each Project over the study period will be analysed using repeated measures at three time points (baseline or pre-intervention, post-intervention, and post-post-intervention), assessing any significant trend over time (e.g., linear). The analysis plan includes statistical methods appropriate for multilevel longitudinal data on individual and organisational outcomes, including adjustment for baseline characteristics such as demographics and comorbid conditions, and accounting for plausible pathway mechanisms (mediation and moderation).
The primary statistical methods will be analysis of variance (ANOVA) for cross-sectional data (e.g., baseline data) and mixed-effects linear models for repeated-measures data [35]. In the case of non-normal distributions, generalized linear mixed-effects models will be applied instead. Finally, while adjusting for participants’ characteristics, mediation and/or moderation effects will be specified and tested for significance [36]. Assessing program performance will require aggregated outcomes (at the organisational level) and meta-analysis-type methods, such as a random-effects modelling approach, to handle these comparisons [37]. Statistical tools based on bootstrapping and/or Monte Carlo methods [38–40] will be devised to deal with the inherent uncertainty of the data, small sample sizes (for aggregated outcomes) and other issues (e.g., non-normal outcomes) when deriving confidence intervals for estimated effect sizes. SPSS [41], Stata [42] and Mplus [43] statistical software will be used for the various analyses.
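One of the simplest tools in this family is the percentile bootstrap. The sketch below shows the general idea for a small, hypothetical sample of pre/post change scores; the data, statistic, and resampling settings are illustrative assumptions, not the evaluation's actual specification.

```python
import random

def bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for any statistic.
    Resamples the data with replacement, recomputes the statistic on
    each resample, and reads the CI off the empirical percentiles --
    useful when small samples or non-normal outcomes make analytic
    intervals unreliable."""
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative pre/post change scores for one small Project sample.
changes = [1.2, 0.4, 2.1, -0.3, 1.8, 0.9, 1.5, 0.2]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(changes, mean)
print(f"mean change {mean(changes):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The same machinery applies to any effect-size statistic (e.g. a standardised mean difference), which is what makes it attractive for the aggregated, organisational-level outcomes described above.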
2. Performance evaluation
The evaluation team will use an adapted version of the hybrid decision support system EDeS-MH (Efficient Decision Support-Mental Health) [44] to assess the performance of the 25 Projects and the overall efficiency of the Grant Program. EDeS-MH integrates a Monte Carlo simulation engine, a fuzzy inference engine, and Data Envelopment Analysis (DEA). This combination of methods allows for: consideration of the inherent uncertainty within the ecosystem being studied (Monte Carlo simulation engine); integration of context-based expert knowledge and theoretical models using an artificial intelligence approach to variable interpretation (fuzzy inference engine); and incorporation of variables as inputs (resources) and outputs (outcomes) in operational models to assess Relative Technical Efficiency (RTE), a performance indicator for each project (Data Envelopment Analysis). The indicator is scored from 0 to 1, where higher RTE values indicate greater efficiency of the decision-making units of analysis (the Projects in this study).
To provide a comprehensive assessment of the Projects, various scenarios will be examined, representing meaningful combinations of input and output variables from different perspectives, such as users, providers, the intervention, or an institutional approach, among others. The EDeS-MH tool has previously been used to evaluate the efficiency of mental health systems at managerial (macro and meso) levels in Spain [45, 46], and at the service (micro) level in countries such as England [46] and Finland [47]. Moreover, it has demonstrated effectiveness in evaluating interventions related to mental health system management [44, 48]. It is therefore well-suited to evaluating the Grant Program and the individual Projects: the insights obtained support organisational learning for the Projects and can assist decision-makers in designing or adjusting interventions or future grant programs for greater impact.
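To give a feel for the RTE indicator, the sketch below covers only the degenerate single-input, single-output case of DEA, where each unit's efficiency reduces to its output/input ratio relative to the best performer. The Project names and figures are hypothetical; the actual EDeS-MH models use multiple inputs and outputs (solved by linear programming) together with the Monte Carlo and fuzzy inference engines.

```python
def relative_technical_efficiency(units):
    """Single-input, single-output special case of DEA: each
    decision-making unit's RTE is its output/input ratio divided by
    the best ratio observed. Scores lie in (0, 1]; a score of 1 marks
    the efficiency frontier. Multi-input/multi-output DEA generalises
    this via linear programming."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical Projects: (investment in funding units, outcome score).
projects = {
    "Project A": (100.0, 40.0),
    "Project B": (80.0, 40.0),
    "Project C": (120.0, 30.0),
}
print(relative_technical_efficiency(projects))
# Project B attains the best outcome-per-investment ratio, so it sits
# on the frontier (RTE = 1.0); A and C score below 1.
```

Running the same model under different input/output combinations is what produces the scenario-based assessments described above.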
3. Social Return on Investment (SROI)
Social Return on Investment (SROI) methodology will be applied to measure wider socio-economic outcomes, analysing and combining the views of multiple stakeholders into a single monetary ratio. The aim will be an analysis comparing beneficiaries only against all stakeholders, incorporating objective designs, such as before-and-after designs, to account for outcomes and improve robustness [49, 50]. The analysis in this evaluation will be based on an impact map (or logic model), report negative outcomes, and use objective quantitative modelling for each of the funded Projects, with comparison across the Grant Program overall. The SROI computation and appraisal will be based on the UK Cabinet Office framework [51, 52].
The proposal is to evaluate the impact of each Project on the health-related quality of life (HRQoL) of participants using measures such as the SF-12, PHQ-9 and EQ-5D-5L [53–56]. The evaluation will rely on a comparison group for the intervention Projects and on the collection of evidence on outcomes, which will then be monetised. Those aspects of change that would have happened anyway, or that result from other factors, will be eliminated through the statistical modelling and analysis discussed earlier (adjustment or controlling for other factors). The evaluation team will undertake the SROI analysis on individual Projects where it is plausible. It will take into account implementation project costs (set-up costs, fixed costs of implementation), intervention costs, wider system impacts (secondary impacts, societal consequences), net intervention costs, net implementation costs, health benefits (health outcomes, quality-adjusted life years (QALYs) gained), net implementation health benefits, and implementation cost-effectiveness (return on investment, cost-effectiveness ratio, cost-benefit ratio). In comparing the costs and benefits of the 25 Projects, the evaluation will use standardized outcome measures that permit comparisons across Projects; weight time spent in a given state of health by the quality of life in that state (QALY); measure the impact of premature death and years lived with disability (disability-adjusted life years, DALYs); use set thresholds for decision-making based on society’s willingness to pay for one additional QALY (or one fewer DALY); apply country-specific thresholds; and prioritize interventions that produce the greatest benefit for cost.
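The core SROI arithmetic can be sketched as follows. All figures below are hypothetical, not Movember data, and the function simplifies the full UK Cabinet Office framework, which additionally handles attribution, displacement and discounting of future benefits.

```python
def sroi_ratio(qalys_gained, wtp_per_qaly, other_benefits,
               total_investment, deadweight=0.0):
    """Illustrative SROI computation: monetise QALY gains at a
    willingness-to-pay threshold, add other monetised benefits,
    remove the share of change that would have happened anyway
    (deadweight), and divide the net benefit by the investment."""
    gross_benefit = qalys_gained * wtp_per_qaly + other_benefits
    net_benefit = gross_benefit * (1 - deadweight)
    return net_benefit / total_investment

# Hypothetical Project: 3.5 QALYs gained, valued at a country-specific
# threshold of 50,000 per QALY, plus 40,000 in wider system benefits,
# against an investment of 150,000, assuming 20% deadweight.
ratio = sroi_ratio(3.5, 50_000, 40_000, 150_000, deadweight=0.2)
print(f"SROI = {ratio:.2f} : 1")
```

A ratio above 1 indicates that the monetised social value exceeds the investment; the country-specific willingness-to-pay threshold is the main lever when comparing Projects across the seven jurisdictions.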
4. Data Co-op Platform
The evaluation will use visual tools and processes to analyse datasets, using visual representations of the data to facilitate pattern recognition and to elicit expert tacit knowledge in the EbCA process [57]. These insights support better, data-driven decisions using a mental health ecosystem approach. The Data Co-op platform is a secure digital tool and repository that facilitates information and knowledge sharing [58]. The platform utilises existing public data platforms such as, in an Australian context, the Australian Data Archive (ADA) and the Analysis and Policy Observatory (APO); government data sources such as the Australian Bureau of Statistics (ABS) and the Australian Institute of Health and Welfare (AIHW); and other open data sources. Community and social media data are also utilised. The Data Co-op platform transforms diverse and disconnected open datasets from multiple underlying data sources into “connected and geospatially enabled linked data packages”, using data linking, data aggregation and geospatial mapping [59].
Data from the Projects can then be overlaid on these linked data layers or packages to gain further insights into improving the mental health and wellbeing of project participants. The information is then assembled into interactive platform dashboards. The Data Co-op platform will allow end users (the 15 organisations, Movember and UC researchers) to share their data and capabilities to create collective impact. The end users can operate these dashboards and gain information and insights about their individual Projects in their respective countries, as well as an overview of all 25 Projects. The aim is to organise and present these existing modules as a dynamic (changing as more information and data are added), purpose-built, rapid-response decision-support system for the Projects. The end user is able to ask ‘questions’ or request different scenarios via the platform, and the information is presented using all the linked data packages.
5. Gendered Lens
A key element of the Movember VFR Grant Program is the inclusion of gendered approaches when developing and implementing the Projects. Gendered approaches require the Projects to understand how socially constructed gender roles internalized by men and women can act as barriers to improving and maintaining men’s health and well-being [60, 61]. Gender norms, roles and relationships affect women and men differently and lead to different, often unequal, opportunities between groups of women and men [62]. The assessment of the extent to which the Projects adopt gendered approaches will be performed by two external gendered-lens experts. Each Project will be assessed using qualitative information and categorised into a profile using an adapted version of the World Health Organization Gendered Assessment Tool (WHO GAT) [61]. The Projects will be assessed by the two gendered-lens experts at three time points, to identify any change in each Project’s progress towards, and capacity for, adopting a gendered approach. The change made by the Projects is included in the evaluation of performance.