A convenience sample of self-reported HIV-negative U.S. military MSM and transgender individuals was recruited between March and April 2020 through a closed Facebook group with a membership of over 7,000 LGBT U.S. military members. The group administrators placed monthly advertisements describing the study on the group’s main forum. Those interested could click a link to an online study disclosure form with a ‘click to consent’ procedure. Participants who opted to receive $5 compensation for questionnaire completion could provide an e-mail address that was not linked to their survey responses. The study was approved by the Yale University Institutional Review Board.
To collect and quantify respondent preference data, an adaptive choice-based conjoint (ACBC) survey instrument was developed from a starting set of PrEP program attributes identified through a review of previous PrEP preference conjoint experiments, which were then refined through in-depth, qualitative interviews with PrEP experts and U.S. military MSM.(2-5, 11, 12, 14, 15, 27-39) With a focus on modifiable PrEP program characteristics, the final survey design comprised five PrEP program delivery attributes of interest: dosing method (daily oral tablet, on-demand tablet regimen [two tablets before sex, one tablet for two days after], rectal douche [before sex], injection [every 2 months], implant [once a year]), provider type (military, civilian), visit location (on-base, off-base, smartphone app), dispensing venue (on-base, off-base, mail delivery), and lab evaluation (on-base, off-base, home-based mail kit). The survey was piloted by the author (JG) with a convenience sample of eleven military MSM members of the targeted social media group for concept testing, and the descriptions and wording of three attribute categories and two attribute level choices were revised for clarity based on feedback. Figure 1 shows a sample item of the conjoint survey, and Table 1 describes the program attributes presented within the survey. Additionally, we collected demographic data including age, race, ethnicity, rank type (officer, enlisted, or warrant officer), military branch, geographic region, PrEP experience (“Have you ever used PrEP [Pre-Exposure Prophylaxis]?”), depressive symptoms with the Patient Health Questionnaire-2 (PHQ2),(40, 41) and the HIV Incidence Risk Index for MSM (HIRI-MSM).(42) Measures exploring satisfaction with one’s current level of HIV protection and discomfort with disclosure during interactions with a primary care provider were also collected.
ANALYSIS
The final survey instrument was loaded into Lighthouse Studio 9, and its experimental design module was used to pre-test the design with 500 simulated respondents for optimal choice task configuration. The final design produced a survey in which each level within an attribute was seen at least three times per respondent, achieving a high degree of precision at the individual level, with a standard error of <0.03 and all design efficiencies reporting at 1.00.(43)
Table 2 displays the CONSORT diagram of respondent enrollment and exclusion. To ensure data integrity and eliminate random or duplicate responders, security features within the Sawtooth software and servers recognize returning study participants through internet browser cookies and IP addresses and prevent repeated or duplicate attempts to retake the survey.(44) Additionally, because extensive pilot testing showed that completion required at least 10 to 15 minutes, responses completed in less than 10 minutes, or in which a respondent selected the same answer for all items, were excluded. Furthermore, the root likelihood (RLH) fit statistic for each respondent was analyzed to evaluate within-respondent choice consistency. RLH, a probability value from 0 to 1.0, was used to discriminate between respondents who answered choice questions consistently and those who answered randomly.(45) The survey design was tested with 1,000 computer-generated mock respondents to determine the 95th percentile of RLH among ‘random responders’ (0.5178). Survey respondents with an RLH below this score were excluded, as the inclusion of ‘random responders’ can distort the calculation of preference scores and participation rates.(45)
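As an illustration of the consistency screen described above, RLH can be computed as the geometric mean of the probabilities that the fitted choice model assigns to each alternative the respondent actually chose. The sketch below uses hypothetical per-task probabilities, not study data; the 0.5178 cutoff is the empirical threshold reported above.

```python
import math

# Hypothetical probabilities that a fitted model assigned to the alternative
# a respondent actually chose in each choice task (illustrative values only).
chosen_probs = [0.62, 0.48, 0.71, 0.55, 0.60]

# RLH is the geometric mean of these probabilities (n-th root of the product),
# computed here in log space for numerical stability.
rlh = math.exp(sum(math.log(p) for p in chosen_probs) / len(chosen_probs))

# A purely random responder facing, say, three options per task would be
# expected to score near 1/3; this study excluded respondents below 0.5178.
random_responder = rlh < 0.5178
```

A respondent whose model fits their choices well has probabilities (and hence an RLH) near 1.0, while random answering drives the geometric mean toward the reciprocal of the number of options per task.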
For conjoint analyses, the Hierarchical Bayes (HB) procedure, chosen for its accuracy and efficiency,(46, 47) was used to estimate part-worth utility scores (PWUS) at the individual level and to analyze the PWUS of the aggregated sample across all 16 attribute levels. The resulting PWUS under each attribute category are zero-centered, meaning that the level scores under each attribute category sum to zero. Scores further from zero indicate a stronger positive or negative preference for that level relative to the other levels under the same attribute.(39, 43, 47) After identifying each attribute level PWUS, attribute relative importance scores (RIS) were calculated to characterize the magnitude of influence that each attribute category has on respondents’ preference decision-making. The RIS for this study was calculated by dividing the range of PWUS for the levels under each attribute by the sum of the ranges across all attributes, then multiplying by 100.(48, 49) Thus, an attribute RIS of 45% means that 45% of an individual’s decision-making about program engagement is influenced by preferences within that attribute category. The PWUS were then used to predict the share of preference (participation interest) among eight hypothetical PrEP program scenarios, which were configured after a variety of currently available or hypothetical PrEP program models.
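The RIS calculation described above can be sketched as follows, using hypothetical zero-centered part-worth utilities for three of the attributes (the level values are invented for illustration and are not study estimates):

```python
# Hypothetical zero-centered part-worth utility scores (PWUS); each
# attribute's level scores sum to zero, as in the HB output described above.
pwus = {
    "dosing_method": [30.0, 10.0, -25.0, 5.0, -20.0],  # five levels
    "provider_type": [8.0, -8.0],                       # two levels
    "visit_location": [12.0, -2.0, -10.0],              # three levels
}

# RIS: range of each attribute's level scores, divided by the sum of
# ranges across all attributes, multiplied by 100.
ranges = {attr: max(u) - min(u) for attr, u in pwus.items()}
total_range = sum(ranges.values())
ris = {attr: 100.0 * r / total_range for attr, r in ranges.items()}
```

By construction the RIS values sum to 100, and the attribute with the widest spread of level utilities (here, dosing method) contributes most to predicted decision-making.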
For this study, participation rates for these PrEP scenarios were generated using the randomized first choice model, in which PWUS are summed across the levels corresponding to each option and then exponentiated and rescaled so they sum to 100.(48, 49) This approach assumes that respondents or consumers will prefer the product with the highest composite utility (or value), adjusting for both attribute and program variability.(48) The randomized first choice model accounts for variation in each participant’s total utility for each option and for error in the point estimates of the utility, and has been shown to have better predictive ability than other share-of-preference models.(49) All data analyses were performed using XLSTAT and Sawtooth Lighthouse Studio 9.0.
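The deterministic core of this calculation (summing, exponentiating, and rescaling) can be sketched as below, with hypothetical total utilities for three scenarios. Note that the full randomized first choice model additionally adds random attribute- and option-level error draws and averages shares over many iterations; that stochastic step is omitted here for brevity.

```python
import math

# Hypothetical total utilities for three PrEP program scenarios (each the
# sum of the PWUS for the levels making up that scenario; invented values).
totals = {"scenario_A": 1.2, "scenario_B": 0.4, "scenario_C": -0.6}

# Exponentiate each total utility, then rescale so the shares sum to 100.
exp_u = {k: math.exp(v) for k, v in totals.items()}
denom = sum(exp_u.values())
shares = {k: 100.0 * v / denom for k, v in exp_u.items()}
```

The scenario with the highest composite utility captures the largest predicted share of preference, consistent with the assumption stated above.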