Stage 1: Content generation and identification of existing outcome measurement instruments
Searches of the published literature identified 14 studies that met the inclusion criteria for the scoping review. The characteristics of the studies, and the decision-related outcome measurement instruments they used, are shown in Table 1 below.
Table 1
Characteristics of studies included in the scoping review and decision-related outcome measurement instruments used

| Lead author name | Publication date | Setting | Type of decision | Self or proxy decision | Outcome domains | Outcome measurement instruments* |
|---|---|---|---|---|---|---|
| Juraskova, I et al [36] | 2015 | Oncology, Australia and New Zealand | Participation in breast cancer trial | Self | Anxiety/depression; attitudes towards participating; decisional conflict; involvement preferences; actual (objective) understanding; perceived (subjective) understanding | Including: adapted information style questionnaire; CPS; DCS; QuIC |
| Politi, MC et al [37] | 2016 | Oncology, USA | Participation in cancer trial (multiple cancers and trials) | Self | Clarity of opinion about participating; decision readiness; decisional conflict; intent to participate; knowledge; self-efficacy | Including: questionnaire of eleven objective knowledge items; low literacy version DCS; decision readiness measured on a single-item 5-point scale |
| Sundaresan, P et al [38] | 2017 | Oncology, Australia and New Zealand | Participation in prostate cancer trial | Self | Anxiety/depression; decisional regret; decisional conflict; knowledge | Including: objective knowledge measured using adapted 11-point and 7-point knowledge scales; DCS; QuIC; MDMIC with additional items related to clinical trials; DRS; SWDS |
| Robertson, EG et al | 2019 | Oncology, Australia | Participation in acute lymphoblastic leukaemia trial (children and young people) | Self | Acceptability of DA; decisional conflict; emotional safety; feasibility; involvement in decision-making; knowledge | Including: preferred and actual decision-making role (purpose designed); DCS for parents (purpose designed for adolescents); FCC-HL-AYA (adapted for parents and adolescents); adapted versions of QuIC |
| Cox, C et al [26] | 2012 | Intensive care, USA | Prolonged mechanical ventilation provision in critical illness | Proxy | Acceptability of DA; conflict with physicians; decisional conflict; feasibility; physician–surrogate discordance; quality of communication; trust in physician; comprehension of relevant information | Including: QOC; DCS |
| Einterz, S et al [27] | 2014 | Nursing homes, USA | Treatment decisions for person with advanced dementia | Proxy | Clinician–surrogate concordance; involvement in decision-making; knowledge; quality of communication; satisfaction with care | Including: QOC; knowledge assessed with 18 true/false items |
| Hanson, L et al [28] | 2011 | Nursing homes, USA | Feeding options in advanced dementia | Proxy | Clinician–surrogate concordance; decisional regret; frequency of communication with health care providers; involvement in decision-making; knowledge | Including: DCS; knowledge assessed using 19 true/false items; SWDS; DRS |
| Snyder, A et al [29] | 2013 | Nursing homes, USA | Feeding options in advanced dementia | Proxy | Decisional conflict; knowledge | Including: knowledge assessed using 19 true/false items; DCS |
| White, D et al [33] | 2012 | Intensive care, USA | Decisions about treatment options in critical illness | Proxy | Acceptability of DA; decisional confidence; feasibility; perceived effectiveness of DA; quality of communication; self-efficacy | Including: QOC; DSES; DCS |
| Cox, C et al [34] | 2019 | Intensive care, USA | Decision about prolonged mechanical ventilation provision in critical illness | Proxy | Anxiety/depression; clinician–surrogate concordance; decisional conflict; perception of care centeredness; quality of communication; comprehension of relevant information | Including: QOC; DCS |
| Hanson, L et al [35] | 2017 | Nursing homes, USA | Treatment decisions for person with advanced dementia | Proxy | Advance Care Planning problem score; satisfaction with decision; decisional conflict; involvement in decision-making; quality of communication; satisfaction with care | Including: QOC |
| Lord, K et al [30] | 2017 | Memory clinics, UK | Dementia family carers deciding about place of care | Proxy | Acceptability of DA; anxiety/depression; decisional conflict | Including: DCS |
| Malloy-Weir, LJ et al [31] | 2017 | Nursing homes, Canada | Initiation of antipsychotic medications for person with dementia | Proxy | Satisfaction with decision; decisional conflict; knowledge | Including: knowledge assessed using an 8–10-item survey based on the Ottawa Knowledge Questionnaire; low-literacy DCS; SWDS |
| Mitchell, SL et al [32] | 2001 | Acute care, Canada | Placement of a percutaneous endoscopic gastrostomy tube for older adult (> 65) with cognitive impairment | Proxy | Acceptability of DA; decisional conflict; knowledge | Including: knowledge assessed in a multiple-choice format; DCS |

* Restricted to outcome measurement instruments for decision-related outcomes only
CPS = Control Preferences Scale; DCS = Decisional Conflict Scale; QuIC = Quality of Informed Consent Scale; MDMIC = Multi-Dimensional Measure of Informed Choice; DRS = Decisional Regret Scale; SWDS = Satisfaction with Decision Scale; FCC-HL-AYA = Functional, Communicative, Critical Health Literacy scale; QOC = Quality of Communication scale; DSES = Decision Self-Efficacy Scale
There was notable heterogeneity in the outcomes and outcome measurement instruments (OMIs) used. All studies used a combination of purpose-designed measures, some of which had been used in previous studies, and established, validated OMIs. Of the validated measures of decision quality, including quality of decision support, the Decisional Conflict Scale (DCS) was the most commonly used (12 studies, 86%), followed by the Quality of Informed Consent Scale (QuIC) [28] and the Satisfaction with Decision Scale (SWDS) [29], each used in 3 studies (21%). DCS use included the traditional version, with 16 statements and 5 response categories [27], as well as the low literacy version, with 10 statements and 3 response categories [42]. The Decision Regret Scale (DRS) [30] was used in 2 studies. There was considerable heterogeneity in the measures used to assess constructs such as objective and subjective knowledge, with most studies using purpose-designed scales containing decision-specific knowledge items.
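As an illustration, use-frequencies like those above (e.g. the DCS appearing in 12 of 14 studies) can be tallied directly from a study-to-instrument mapping. A minimal sketch follows; the mapping below is an abbreviated subset of Table 1 for illustration only, not the full dataset:

```python
from collections import Counter

# Instruments reported per study -- an abbreviated subset of Table 1,
# using the abbreviations defined in its footnote
study_instruments = {
    "Juraskova 2015": ["CPS", "DCS", "QuIC"],
    "Politi 2016": ["DCS"],
    "Sundaresan 2017": ["DCS", "QuIC", "MDMIC", "DRS", "SWDS"],
    "Cox 2012": ["QOC", "DCS"],
    "Snyder 2013": ["DCS"],
}

# Count the number of studies using each instrument (at most one count per study)
counts = Counter(inst for insts in study_instruments.values() for inst in set(insts))
n_studies = len(study_instruments)

# Format as "n (percent of studies)", most common first
usage = {inst: f"{n} ({n / n_studies:.0%})" for inst, n in counts.most_common()}
```

Counting over `set(insts)` rather than the raw lists guards against an instrument being double-counted within a single study.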
Stage 2: Assessment of content coverage by existing outcome measurement instruments
The COS consists of 28 outcome items relating to the process of decision-making, proxies’ experience of decision-making, and factors that influence decision-making, such as understanding [21]. These cover key construct domains such as values clarity (understanding the personal value of options), subjective understanding (feeling informed), objective understanding (being informed), preparedness to make a decision, and regret and satisfaction with the decision, including the outcomes of that decision.
These constructs were tabulated against those included in commonly used validated measures, including the DCS, QuIC, SWDS, and DRS. A focused literature review of outcome measures used to evaluate interventions for improving decision-making about healthcare and informed consent identified additional candidate OMIs: DelibeRATE, which measures deliberation during the informed consent process for clinical trials [43]; the Preparation for Decision Making (PrepDM) scale [44]; and the Decision Self-Efficacy scale (DSE) [45]. As these are validated measures that are widely used in studies evaluating decision aids, and form part of the well-established evaluation toolkit for the Ottawa Decision Support Framework (ODSF, a framework that aims to conceptualise the support needed for making difficult preference-sensitive decisions), the quality of the OMIs was not formally assessed, as is usually recommended when selecting outcome measurement instruments for outcomes included in a COS [25]. Overlapping areas or constructs, and those not captured by these existing measures, were identified.
Across the 28 outcome items, measures were identified for 18 of the outcomes when assessed against the seven existing OMIs. There was sufficient coverage of domains such as self-efficacy, with all three of its items measured by existing scales. However, ten of the COS items, across five domains, were not covered by any of the measures. Unsurprisingly, these included domains specific to proxy decision-making, such as knowledge sufficiency: both knowledge of their role as proxy decision-maker and knowledge of the wishes and preferences of the person they are representing. Another domain not well covered was satisfaction, including whether the proxy felt they had sufficient time to make a decision, which is considered an essential component of informed consent for research [46]. There was considerable heterogeneity in the phrasing of many of the items, primarily reflecting the diverse origins of the scales included in this review.
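The tabulation described above amounts to a set-coverage check: which COS items fall outside the union of what the existing instruments measure. A minimal sketch follows; the item and instrument labels are hypothetical placeholders, not the actual COS wording or the authors' coverage mapping:

```python
# Hypothetical COS outcome items (the real COS has 28; five shown for illustration)
cos_items = {
    "feeling informed",
    "values clarity",
    "self-efficacy",
    "knowledge of proxy role",
    "sufficient time to decide",
}

# Hypothetical mapping of existing instruments to the COS items they cover
instrument_coverage = {
    "DCS": {"feeling informed", "values clarity"},
    "DSE": {"self-efficacy"},
}

# Union of everything any existing instrument measures
covered = set().union(*instrument_coverage.values())

# COS items no existing instrument measures -- the content gaps of Stage 2
gaps = cos_items - covered
```

In the Stage 2 analysis this kind of gap set corresponded to the ten uncovered items, such as proxy-specific knowledge sufficiency, that motivated the new items generated in Stage 3.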
Stage 3: Integration of items into a new measure of proxy consent
Based on the content gaps across existing items, and the lack of specificity and applicability to proxy consent decisions described in Stage 2, new self-report items were generated for ten outcome items across five domains: knowledge, understanding, deliberation, values congruence, and satisfaction. Because the newly developed items needed to be combined into a single scale alongside those adapted from existing OMIs, a degree of harmonisation in the phrasing of individual items was required in order to present a single combined questionnaire and improve comprehension. The phrasing of items covered by existing OMIs was slightly modified from the original COS wording to align the items more closely with one another, and to improve comprehension when used as a self-completed OMI rather than as a list of COS items. A draft version of the CONCORD scale was then developed.
To establish initial face validity prior to larger-scale testing, this draft version of the scale underwent initial testing with a small group of lay advisors (n = 3) who support the larger research programme. This resulted in changes to the layout of the questionnaire to make it easier to navigate, including grouping items into three sections with headings to help orientate respondents towards the stage of the decision-making process being asked about. Three sections were created: preparation for decision-making, decision-making process, and decision outcome. By the end of this stage, the first test version of the CONCORD scale (version 1.0) had been developed.
Stage 4: Cognitive testing of CONCORD scale
Remote cognitive interviews were conducted with eleven family members of people living with dementia, between September and October 2021 (Round 1) and between November and December (Round 2), the latter using the revised version of the scale (version 2.0). The mean duration of interviews was 43 mins (range 29–59 mins). Participant characteristics are presented in Table 2.
Table 2
Cognitive interview participant characteristics

| | Round 1 participants, n = 6 (%) | Round 2 participants, n = 5 (%) | Total participants, n = 11 (%) |
|---|---|---|---|
| Sex* | | | |
| Male | 2 (33%) | 1 (20%) | 3 (27%) |
| Female | 4 (67%) | 4 (80%) | 8 (73%) |
| Age | | | |
| 20–29 | 0 | 1 (20%) | 1 (9%) |
| 30–39 | 2 (33%) | 0 | 2 (18%) |
| 40–49 | 0 | 1 (20%) | 1 (9%) |
| 50–59 | 2 (33%) | 1 (20%) | 3 (27%) |
| 60–69 | 1 (17%) | 2 (40%) | 3 (27%) |
| 70+ | 1 (17%) | 0 | 1 (9%) |
| Country | | | |
| England | 2 (33%) | 5 (100%) | 7 (64%) |
| Wales | 4 (67%) | 0 | 4 (36%) |
| Ethnicity | | | |
| British (White) | 4 (67%) | 5 (100%) | 9 (82%) |
| Not stated | 2 (33%) | 0 | 2 (18%) |

*Sex registered at birth. All participants described their gender as being the same as the sex they were registered at birth
An interim analysis was conducted after Round 1 and the results were discussed among the research team. Where responses indicated uncertainty, or participants had recommended specific wording changes to increase clarity, consensus was reached about which modifications or additions to the content were required. This included changes to the phrasing of some questions to reduce uncertainty about which decision was being referred to, by adding the phrase ‘decision about my relative taking part in the study’, and to ensure greater consistency, rather than using a combination of ‘choice’, ‘option’, and ‘decision’. Changes were also made to the order of questions in the section concerning preparation for decision-making, which some Round 1 participants felt lacked a ‘logical’ order. Revisions were made to the wording and ordering of the CONCORD scale as a result, and the revised version was used in the Round 2 interviews. Participants’ general views about the length and format of the questionnaire are reported in the following text, with corresponding illustrative quotes in a supplementary file (Appendix 1).
Instructions for completion of the CONCORD scale
Participants appeared to understand the brief instructions for completing the scale provided at the beginning of the questionnaire; no participant required further explanation to complete it. However, when specifically asked about the clarity of the instructions, many participants had not read them in detail, appearing to skip over them to get to the questions.
Length and format of the questionnaire
The mean time for completion of the questionnaire was just under 3 ½ mins (range 1 ½ to 5 mins). All participants considered the questionnaire to be of reasonable length. One participant commented that proxy decision-makers are also likely to be (or have been) carers and so will be experienced in completing forms as part of that carer role, including much longer administrative questionnaires. The format was considered to be acceptable, although one participant commented that the font size may be a little too small, for example for people with visual impairments.
Ordering of items
Following the changes made to the order of the items after Round 1, participants in Round 2 considered the order of the three sections, and of the items within each section, to be acceptable and to have a ‘logical order’ that followed their thought processes. Suggestions were made to further revise the section headings to describe the content of each section more clearly, and to consider labelling the sections A–C. However, as with the instructions at the start of the questionnaire, not all participants read the section headings closely or were conscious of having done so.
Views about the contents and acceptability of the questionnaire
Participants understood the purpose of the questionnaire, and all viewed the contents of the questions as acceptable, or even ‘pretty harmless’. While participants generally felt that items were clear and straightforward, their responses in Round 1 indicated that some items included terms requiring further explanation, e.g. what form ‘support’ might take when asked if they had enough support to make a decision.
Other responses indicated that quantifying sufficiency was challenging – particularly against the backdrop of uncertainty that surrounds proxy decision-making. For example, some participants questioned whether they could ever feel confident or satisfied with their decisions. Some participants identified points of redundancy across a small number of the items, for example ‘I feel it is the right decision’ and ‘I feel the decision was a wise one’ where participants felt that ‘right’ and ‘wise’ were similar concepts. However, other participants viewed them as disparate items and considered their responses to whether their decision was ‘right’ or ‘wise’ to be distinct from one another.
Participants often distinguished between questions that related to their own feelings and knowledge, and those that required thinking about the wishes of the person they were representing, which they described as requiring more time and consideration. Some participants described the impact that completing the questionnaire had on their (hypothetical) decision-making and how it prompted them to think about some of the issues. When asked about areas they thought were important but missing from the questionnaire, no participants identified any areas of content inadequacy.
Scoring
All participants attempted to complete the CONCORD scale, although completing it was a counterfactual exercise, so some questions were more difficult for some participants. One participant [ID 06] did not complete a small number of items (n = 3) where the meaning was not clear to them, and another [ID 01] did not complete a larger number of items (n = 14), as they focused on commenting on the phrasing and order of items, which they felt prevented them from scoring those items. Some participants said that they had scored an item as ‘neither agree nor disagree’ where they were not sure of its meaning, although more often they used this response to indicate a neutral view.
Creation of a final version
Following the completion of Round 2, the results were discussed among the research team. Additional minor revisions were made to the wording of a small number of items where participants appeared uncertain about the meaning, or where they suggested it might be unclear to others. The refined version of the CONCORD scale is shown in Table 3; it uses a 5-point response scale ranging from 1 = strongly agree to 5 = strongly disagree.
Table 3
Combined Scale for Proxy Informed Consent Decisions (CONCORD scale)

Part A. Thinking back to when you made the decision, how informed did you feel?

1. I am informed about the purpose of the study, any procedures, risks and benefits
2. I have been informed about my role in making the decision about my relative taking part in the study
3. I am able to represent my relative’s wishes and preferences
4. I am clear about which benefits from taking part (for them or others) would matter most to my relative
5. I am clear about which disadvantages of taking part would matter most to my relative
6. I am clear whether the benefits or disadvantages of taking part would be more important to my relative

Part B. How did you feel during the process of making a decision?

7. I recognise that a decision about my relative taking part in the study needs to be made
8. I understand the information that I need in order to make a decision
9. I understand that the decision about taking part in the study depends on what would matter most to my relative
10. I feel that I am adequately informed about the issues that are important to my relative
11. I feel able to ask the research team any questions I have about the study
12. I feel able to express my opinions about my relative taking part in the study or not
13. I feel as involved as I want to be in the decision
14. I feel that the information about the study prepared me to make a decision
15. I feel confident that I can understand the information well enough to make a decision
16. I have given the decision about whether my relative takes part or not consideration and thought
17. I am clear about what my relative’s wishes and preferences would be about taking part in the study
18. I feel supported to make a decision about the study
19. I am confident that I can make a decision about the study
20. I feel confident that I can delay my decision if I need more time
21. I am ready to make a decision about the study

Part C. How do you feel about the decision you have made?

22. I am satisfied with my decision
23. I am satisfied that my decision would be consistent with my relative’s values
24. I feel it is the right decision
25. I feel the decision was a wise one
26. I feel I had enough time to make a decision
27. I am comfortable with the decision
28. I feel that the decision process was good (regardless of the outcome)
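No summary scoring rule for the scale is reported above; a minimal scoring sketch follows, assuming each section (Part A = items 1–6, Part B = 7–21, Part C = 22–28) is summarised by the mean of its answered items. This per-section mean rule is an illustrative assumption, not the authors' published scoring:

```python
# Section boundaries from Table 3: Part A = items 1-6, Part B = 7-21, Part C = 22-28
SECTIONS = {"A": range(1, 7), "B": range(7, 22), "C": range(22, 29)}

def score_concord(responses):
    """responses maps item number (1-28) to a rating from 1 (strongly agree)
    to 5 (strongly disagree). Unanswered items are skipped, mirroring the
    partial completions seen in cognitive testing; a section with no answered
    items scores None. The mean-per-section rule is illustrative only."""
    scores = {}
    for section, items in SECTIONS.items():
        answered = [responses[i] for i in items if i in responses]
        scores[section] = sum(answered) / len(answered) if answered else None
    return scores

# A respondent who strongly agrees with every item scores 1.0 in each section
example = score_concord({i: 1 for i in range(1, 29)})
```

Skipping missing items rather than imputing them reflects the incomplete responses observed in Stage 4; any real scoring rule would need to specify how missing data are handled.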