One in eight people worldwide is affected by a mental disorder, and the trend is rising (1). Frequently, the demand for therapeutic support exceeds available resources, especially since the number of mental health practitioners is not increasing quickly enough (2). Simultaneously, technologies enabled by artificial intelligence (AI) are advancing and gaining relevance in the support and delivery of patient care, owing to their potential for improving patient outcomes through early detection of mental disorders and personalized treatment (3), and for facilitating the work of practitioners (4). Given these proposed benefits, AI-enabled technologies provide an opportunity to bridge the gap between mental healthcare needs and available therapeutic resources.
Applications of AI-enabled technologies in mental healthcare
AI-enabled technologies refer to systems or applications characterized by humanlike capabilities, including decision-making through problem solving and continuous learning (3). To execute their tasks effectively, these technologies rely on large amounts of data. Common data sources for AI-enabled technologies in mental healthcare include behavioral data (e.g., video and audio recordings), followed by biological (e.g., blood samples) and neuroimaging data (e.g., electroencephalogram) (5). Within mental healthcare, AI-enabled technologies that leverage these datasets and are utilized by clinicians can be broadly categorized into four application areas: diagnostic support, treatment support, feedback, and practice management.
The first two application areas, diagnostic and treatment support, refer to patient-centered technologies. Diagnostic applications leverage AI to enhance the accuracy and efficiency of mental health assessments by evaluating a range of patient data, such as genetic information, language, voice, and facial expressions (6–8). For example, certain tools can distinguish between diagnoses that share similar symptoms but require different treatment approaches, such as various types of dementia or bipolar and unipolar depression (9).
The second area of technologies provides treatment support, making mental health treatments more personalized and precise (10). These technologies predominantly work with genetic, neuroimaging, clinical, and demographic datasets (11). For instance, AI-enabled technologies can be utilized at the beginning of therapy to estimate a patient’s potential response to different medications, such as antidepressants, or to predict remission rates (11).
Besides these patient-centered technologies, an increasing number of practitioner-centered applications are emerging, with the third area comprising feedback tools for mental health professionals. These applications aim to provide practitioners with feedback on the quality of their patient interactions by evaluating session data, for instance, the speech signals and language patterns of the interaction (12–15). Feedback reports usually include an assessment of the session’s strengths and potential areas for improvement, such as allowing more time for reflections or asking more open-ended questions (16).
Finally, the fourth application area of AI-enabled technologies for mental health is practice management. These tools are intended to automate clinical and administrative workflows and thereby reduce the administrative burden on mental healthcare professionals (16). For example, by automatically transcribing therapy sessions from speech data and integrating the transcripts into medical records (16), patient data entry can become more efficient and structured (17).
Adoption of AI-enabled tools in mental healthcare and its antecedents
The proposed benefits of using AI tools, such as early detection of mental disorders, increased patient access, and personalized treatment, will only be realized if practitioners use them as intended (7). However, studies show widespread skepticism regarding the use of AI-enabled technologies in healthcare (18,19,9,20,21). A lack of understanding or knowledge of the mechanisms and processes underlying the technology may explain some of the suspicion that impacts the uptake of technologies (22,23). Therefore, gaining deeper insights into the current state of mental health practitioners’ understanding of and experiences with AI-enabled tools is a first step toward recognizing barriers to adoption and determining starting points for measures aimed at promoting safe technology practices. However, to the best of our knowledge, no study has investigated practitioners’ understanding of AI-enabled tools for mental healthcare (RQ1), their familiarity with these technologies (RQ2), in what context they learned about them (RQ3), and whether they have used any of these tools in their clinical practice (RQ4). Besides knowledge and exposure, technology acceptance and effective use are influenced by numerous individual variables.
The role of learning in the adoption of AI-enabled technologies
Studies have highlighted the pivotal role of learning opportunities and training in the implementation process by equipping healthcare professionals with the requisite skills to effectively use AI-enabled technologies in their practice (24–26). Conversely, healthcare professionals ranked the lack of instruction and training on technology use as the primary technology-related cause of medical errors (27). Training is believed to reduce the perceived risk associated with using such tools and, further, to minimize the workload arising from the implementation of AI technologies (28). It has been shown that the willingness to receive training about an AI technology is positively associated with clinicians’ use of it (28). We, therefore, hypothesized that learning intention is positively associated with use intention for AI-enabled technologies in mental healthcare (H1). Figure 1 depicts the proposed model with the related hypotheses and research questions. However, learning intentions and use intentions represent different levels of engagement with technologies. The willingness to learn and receive training is a rather theoretical interaction with a technology, centered around updating knowledge (29). Use intention, in contrast, implies the willingness to make the necessary effort to use the technology in practice (30,31). Hence, it is important to study both learning and use intentions and their respective antecedents independently.
Individual-level factors in the adoption of AI-enabled technologies
Most studies have focused on AI adoption in general healthcare settings (see (32) for a review) or in different medical specialties such as dermatology (33). However, less is known about individual-level factors associated with practitioners’ intentions to learn about and use AI-enabled technologies in mental healthcare. User characteristics represent one of the key determinants of the adoption of healthcare technologies (34). Research has shown that common demographic and individual differences such as gender (35), age (36), personality (32,33,37), and country of residence (38,39) influence technology uptake. Further, practitioners’ intention to use AI-enabled technologies in mental health is greatly influenced by their individual beliefs, attitudes, and perceptions (19). Hence, this study seeks to extend the existing literature by systematically investigating individual factors that contribute to a holistic understanding of the determinants affecting the learning and use intentions of AI-enabled technology in mental healthcare. The Capability-Opportunity-Motivation Behavior (COM-B) model developed by Michie et al. (40), a well-validated behavior change theory, has been used successfully in synthesizing and understanding healthcare-related technology adoption (for instance, see (41,42)). The COM-B model indicates that individuals’ capabilities, motivation, and opportunities determine their behavior (40). Capability is defined as an individual's psychological and physical ability required for a particular behavior, including the essential knowledge and skills. Motivation encompasses reflective or automatic cognitive processes that direct behavior, extending beyond conscious decision-making to habitual patterns, emotional responses, and analytical reasoning. Opportunity relates to external factors lying outside an individual's immediate control that influence behavior, including social and physical opportunity (40).
Upon reviewing the empirical literature, we identified the most important individual-level factors relevant to technology adoption and ultimately integrated them into the COM-B framework. As opportunity includes factors outside the individual, we focused on the domains of capabilities and motivations.
First, individuals’ capability is important for engaging in a respective behavior (40). Different aspects of capability, including AI knowledge, have been found to be relevant for AI adoption. A positive relation between AI knowledge and the intention to use AI technology was found among prospective physicians (43) and among prospective therapists for feedback-providing AI tools (21). Similarly, a lack of technology-related skills and knowledge among therapists was identified as a barrier to the use of technology in forensic psychiatry (44). However, one study found no significant association between AI knowledge and medical students’ intention to learn about AI (45). As AI knowledge referred to different aspects in each study, and the mixed findings consequently might have resulted from methodological differences, we adopted a broader construct called readiness for medical AI. Readiness for medical AI can be divided into different subdimensions (46): Cognitive readiness encompasses people’s cognitive abilities such as knowledge of and critical thinking about AI technologies. Vision readiness involves the ability to envision and anticipate the potential impact, benefits, and challenges associated with AI technologies. Ethical readiness refers to an individual’s awareness of, knowledge of, and adherence to ethical standards or guidelines for the use of AI technologies. The relationship between the subdimensions of medical AI readiness and the learning and use intentions of AI-enabled technologies in mental healthcare has not been examined in depth. To date, only one study has found a positive association between cognitive readiness and the intention to use a feedback tool in mental healthcare (21). We expected that cognitive readiness (H2a, H3a), vision readiness (H2b, H3b), and ethical readiness (H2c, H3c) are all positively associated with the learning and use intentions of AI tools for mental health (see Figure 1 for all hypotheses).
Second, automatic motivational processes influence a particular behavior (40). In the context of technology adoption, automatic processes like emotions, as a sub-component of motivation, have been shown to have an influence (40). Usually, negatively valenced variables, such as AI anxiety, have been investigated (47). AI anxiety refers to the apprehension, concern, or fear experienced in response to the implementation, use, or potential consequences of AI technologies (48). The construct encompasses three subdimensions: learning anxiety, sociotechnical blindness, and job replacement anxiety (47). Learning anxiety refers to anxiety about acquiring knowledge and skills related to AI technologies. Sociotechnical blindness relates to anxiety arising from a lack of understanding that AI systems currently do not operate independently without human oversight. Job replacement anxiety refers to a person’s fear that their occupation will be replaced or disrupted by AI technologies (37,49). Y.-M. Wang et al. showed that AI learning anxiety negatively affected intrinsic and extrinsic learning motivation (47). They also found that job replacement anxiety positively influenced extrinsic but not intrinsic learning motivation, indicating that some people might only gain AI-relevant skills and knowledge to avoid unemployment. Regarding use intentions, technology anxiety emerged as one important barrier to technology use in healthcare (50). AI anxiety correlated negatively with the use intention of AI-based technology in healthcare among nurses (51) and with the intention to use AI-based treatment and feedback tools among prospective psychotherapists (21). While there is consistent evidence that AI anxiety hinders AI adoption, none of these studies explored associations between all three subdimensions and learning and use intentions for AI-enabled technologies simultaneously. Therefore, we incorporated all three subdimensions separately into our research model.
We hypothesized that AI learning anxiety (H2d, H3d) and sociotechnical blindness (H2e, H3e) are negatively associated with both the learning and use intentions of AI tools, whereas job replacement anxiety is positively associated with AI learning intentions (H2f) but negatively associated with use intentions (H3f).
Third, in addition to automatic motivational processes, reflective processes are also crucial, with self-efficacy being an important factor influencing behavior uptake (40). The subcategory tailored to technology is technology self-efficacy, which refers to a person’s belief in their capacity to effectively accomplish a technologically advanced task (52). It is well established that technology self-efficacy is an important predictor of technology adoption in healthcare (53). Higher technology self-efficacy has been positively associated with medical students’ intention to learn technologies (45), healthcare professionals’ readiness to adopt technologies (54), as well as their intention to use nursing apps and AI technology (51,55,56). In accordance with this large body of research, we hypothesized that technology self-efficacy is positively associated with AI learning and use intentions among mental health practitioners (H2g, H3g).
Fourth, affinity for technology interaction represents another motivational process. It serves as a fundamental resource for technology adoption, as it is characterized as the tendency to proactively engage in extensive technological interaction (57). Higher affinity for technology was positively related to the use of a wider range of learning strategies for different healthcare systems among physician trainees (58). Among clinicians, a positive association between affinity for technology and attitude towards technology use has been found, and higher technology affinity was linked to a preference for more advanced technologies (59,60). To the best of our knowledge, the relationship between affinity for technology interaction and the intention to learn about or use AI technologies in mental healthcare has not been investigated. Based on previous evidence from the medical context, we hypothesized that affinity for technology interaction is positively associated with AI learning and use intentions (H2h, H3h).
Finally, the relevance of people’s perception of their social and professional role and identity as a motivational factor has also been highlighted in the context of technology adoption, often through professional identification. Professional identification refers to the degree to which an individual feels a deep connection and unity with their chosen occupation (61). Professional identification plays an important role in the adoption of novel work behaviors (61) and is particularly important given the integration of AI-enabled technologies that affect practitioners’ daily tasks (62). However, changes in the workplace are likely to be resisted if they are perceived as a threat to professional identity (63). It has been shown that threats to professional identity directly impacted healthcare practitioners’ technology use (64). Moreover, alignment between professional beliefs and the designated roles of technology is fundamental for technology adoption (65), as one’s professional identification influences technology integration (63). Given these insights, and because we could not derive a clear direction of the effects from the literature, the following research questions are proposed: Is professional identification associated with AI learning intention (RQ5) and AI use intention (RQ6)?
Prior research has shown that there are differences in use intentions and their predictors across AI tools for different application areas (21). As AI-enabled technologies in mental healthcare differ vastly in their purpose, they might also be perceived differently by mental health practitioners. Therefore, we believe it is important to examine the learning and use intentions and their antecedents individually for each application area. Providing such a nuanced understanding enables technology developers and healthcare organizations that purchase these technologies to consider the factors relevant to the tool in question, thereby facilitating a more efficient and safe design and implementation process. Because a consistent methodology that allows comparisons across the different application areas on the same level is fundamental for this, we applied the same research design and sample across all four application areas of AI-enabled technologies in mental healthcare. This allows us to systematically identify potential differences, ultimately resulting in a comprehensive overview of the different application areas and their antecedents.
The present study
The main goal of this mixed-methods study was twofold. First, we aimed to investigate mental health practitioners’ general understanding, familiarity, and experience with AI technologies (RQ1 – RQ4) and their attitudes towards different application areas of AI-enabled tools using qualitative content and descriptive analyses. Along these lines, we also examined differences in attitudes toward technology across professions, genders, and countries. Second, this work aims to provide differentiated insight into the factors associated with learning and use intentions of AI-enabled technologies for mental health, separated by application area (H1, H2a – H2h, H3a – H3h, and RQ5 and RQ6). Gaining a deeper understanding of the relative importance of individual factors may help derive training and intervention strategies tailored specifically to practitioners' needs for different technology application areas.