2.1 Introduction
This chapter provides a theoretical framework and literature review on the adoption and utilization of artificial intelligence (AI) technologies, with a focus on business professionals. It begins with a historical review of prominent frameworks and theories of technology acceptance and adoption, including the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), the Technology-Organization-Environment (TOE) Framework, Task Technology Fit (TTF), and Innovation Diffusion Theory (IDT). These frameworks have significantly shaped our understanding of users' behavior towards technology and have been applied widely across research contexts.

The chapter then turns to the utilization of AI by business professionals and to recent issues and developments in AI adoption, from both a general and an individual perspective. AI adoption in business processes has gained significant attention in recent years because of its potential to transform operations, enhance decision-making, and drive growth. The chapter examines the factors that influence AI adoption, such as performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, and trust, and highlights their role in promoting the adoption of AI technologies among business professionals. It also addresses recent issues in AI adoption, particularly trust, privacy, and ethical considerations. As AI technologies become more advanced, concerns arise about the transparency, explainability, and accountability of AI systems. The chapter discusses the importance of ensuring transparency, disclosing the use of AI agents, and improving the explainability of AI algorithms to build trust and address ethical concerns, as well as the need for privacy protection, data security regulation, and user control over data.

Finally, the chapter reviews empirical research on AI adoption, particularly studies that have applied the UTAUT model, summarizing findings on the factors that influence individuals' intention to adopt AI-based systems and technologies, such as attitude towards AI, performance expectancy, effort expectancy, social influence, facilitating conditions, trust, and hedonic motivation. It also identifies gaps in the empirical literature, emphasizing the need for research that examines adoption intention across different domains and specifically investigates the adoption of AI language models such as ChatGPT.
2.2 Historical Literature Review
In the field of technology acceptance and adoption, several frameworks and theories have been developed to understand and explain users' behavior towards technology. This section reviews five prominent frameworks: the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), the Technology-Organization-Environment (TOE) Framework, Task Technology Fit (TTF), and Innovation Diffusion Theory (IDT).
Technology Acceptance Model (TAM)
The Technology Acceptance Model (TAM) was initially introduced by Fred Davis in 1986 and subsequently extended by Davis and Richard Bagozzi in 1989. TAM centres on an individual's willingness to accept and integrate technology into their daily routine, proposing that perceived usefulness and perceived ease of use are the primary factors shaping an individual's attitude and intention towards technology adoption. TAM has been extensively employed and validated in diverse settings and sectors, serving as a foundation for subsequent scholarly investigations (Ashraf et al., 2014; Wibowo, 2019).
Unified Theory of Acceptance and Use of Technology (UTAUT)
In 2003, Venkatesh et al. synthesised eight pre-existing technology acceptance models to create the Unified Theory of Acceptance and Use of Technology (UTAUT). The UTAUT model posits that the acceptance and use of technology are driven by four primary factors: performance expectancy, effort expectancy, social influence, and facilitating conditions. The model also incorporates moderating variables such as gender, age, and experience. UTAUT has garnered considerable interest and has been applied across diverse fields to understand technology adoption behaviour (Cai et al., 2021; Emon, 2023; Ly, 2019; Rejali et al., 2023).
Technology-Organization-Environment (TOE) Framework
Tornatzky and Fleischer introduced the Technology-Organization-Environment (TOE) Framework in 1990. The TOE framework places emphasis on the interdependence of technological, organisational, and environmental factors in influencing the decisions made regarding the adoption of technology. The study takes into account various contextual factors that impact the adoption and implementation of technology within an organisation. These factors include technological complexity and compatibility, organisational structure and culture, as well as environmental factors such as competition and regulatory environment. The Technology-Organization-Environment (TOE) framework offers a comprehensive perspective on the implementation of technology within organisational contexts (Baker, 2012; Kulkarni & Patil, 2020; Leung et al., 2015).
Task Technology Fit (TTF)
The theory of Task Technology Fit (TTF) was introduced by Goodhue and Thompson in 1995. TTF centres on the congruence between the attributes of a given task and the functionalities of a particular technology, positing that the degree of fit between task demands and technology capabilities significantly affects user acceptance and performance in using that technology. TTF underscores the importance of considering task-related variables, such as complexity, interdependence, and novelty, alongside technology-related variables when examining technology adoption (Bozaykut et al., 2016; Bravo & Bayona, 2020; G. Chen et al., 2015).
Innovation Diffusion Theory (IDT)
The Innovation Diffusion Theory (IDT) was initially formulated by Everett Rogers in 1962. IDT explains how new ideas are adopted and disseminated throughout a social system. The theory identifies the key determinants of adoption, including the attributes of the innovation (such as relative advantage and compatibility), communication channels, time, the social system, and adopter categories (innovators, early adopters, early majority, late majority, and laggards). IDT thus offers a theoretical lens for understanding how technologies diffuse and are adopted within a given social context (F. B. A. Rahman et al., 2021; Wani & Ali, 2015).
These frameworks and theories have significantly contributed to our understanding of technology acceptance and adoption. Researchers have built upon these foundations and extended them to different contexts, industries, and technologies, enabling a comprehensive understanding of user behavior towards technology.
2.2.1 Utilization of AI for Business Professionals
In recent years, the utilization of AI in business processes has gained significant attention and has become a topic of extensive study. Various industries and business professionals have recognized the potential of AI to transform their operations, enhance decision-making, and drive growth, and researchers and practitioners have explored the benefits and challenges of AI adoption in order to understand how to integrate the technology effectively into business settings. The Technology Acceptance Model (TAM) is a frequently employed framework for examining technology uptake, including AI, and centres on the determinants of an individual's inclination to adopt a specific technology (Kumar Bhardwaj et al., 2021).

Performance expectancy is a critical factor in the adoption of artificial intelligence. Business professionals assess AI systems in terms of their perceived potential to enhance productivity, improve efficiency, and provide accurate and insightful analyses. When individuals believe that AI can offer such advantages, they are more inclined to adopt and employ the technology in their routine activities (Enholm et al., 2022; Mohr & Kühl, 2021; M. Rahman et al., 2021). Effort expectancy also significantly influences AI adoption by business professionals. It refers to the perceived complexity and user-friendliness of AI systems. If professionals perceive AI technologies as difficult to understand or use, they may resist adoption; if AI systems are user-friendly, intuitive, and require minimal effort to operate, professionals are more likely to embrace them and integrate them into their work processes (Enholm et al., 2022).
The adoption of AI by business professionals is also significantly influenced by social factors, that is, the influence exerted by peers, supervisors, and other significant individuals within an organisation. Professionals are more likely to adopt AI technologies when influential individuals promote AI usage and its benefits, and when the organisation fosters a culture that values innovation and is receptive to new technologies (Flavián et al., 2022). Facilitating conditions, which encompass the availability of essential resources, infrastructure, and technical support, likewise shape the utilisation of AI in business processes; the provision of necessary resources and support by organisations can enhance professionals' confidence in adopting and utilising AI technologies (Grover et al., 2022).

The Unified Theory of Acceptance and Use of Technology (UTAUT) has been employed as an additional framework for examining technology adoption. UTAUT builds on TAM and related models, retaining constructs analogous to perceived usefulness and perceived ease of use while incorporating additional variables such as individual characteristics; these factors further contribute to understanding the adoption and utilisation of artificial intelligence by business professionals (Blut et al., 2021). Perceived usefulness refers to the degree to which AI technologies are seen as advantageous in completing tasks and attaining objectives; business professionals are more likely to incorporate AI into their daily work when they believe it can enhance decision-making, address complex challenges, and improve overall performance. Perceived ease of use, which in UTAUT corresponds closely to effort expectancy in TAM (Nordhoff et al., 2021; Sarfaraz, 2017), underscores the importance of user-friendliness, simplicity, and comprehensibility of AI systems: professionals are more likely to incorporate AI technologies into their work routines when they perceive them as easy to use and operate (Natale, 2021). Individual characteristics, such as prior experience and technical expertise, also matter; individuals with greater expertise and familiarity with artificial intelligence may be more inclined to adopt and implement AI-based technologies in their workflows (Flavián et al., 2022).

In sum, the adoption of artificial intelligence among business practitioners is shaped by the determinants examined in models such as TAM and UTAUT. The intention to adopt and the actual use of AI technologies are influenced by performance expectancy, effort expectancy, social influence, facilitating conditions, perceived usefulness, perceived ease of use, and individual characteristics (AL-Nuaimi et al., 2022; Gado et al., 2022; Nascimento & Meirelles, 2021).
By understanding these variables, organisations can devise strategies to encourage the adoption of artificial intelligence and support its successful integration into business operations. Used effectively, AI can empower business professionals, foster innovation, and open new opportunities for growth and competitiveness.
2.2.2 Recent Issues and Development of AI Adoption
One of the recent issues surrounding AI adoption is the question of trust. As conversational agents like ChatGPT and QuillBot become more advanced, they are often able to generate responses that mimic human language and behavior. This can make it difficult for users to distinguish between a human and an AI agent, raising concerns about deception and manipulation. Users may question whether the information provided by these agents is reliable and unbiased, leading to a lack of trust in the technology. To address this issue, it is crucial to ensure transparency in AI systems (Emon, Hassan, et al., 2023; Jacovi et al., 2021; Liao et al., 2020; Shin, 2021). Developers should clearly disclose when users are interacting with AI agents and provide information about the limitations of the technology. Efforts should also be made to improve the explainability of AI algorithms so that users can understand how decisions are being made; this helps build trust by making the technology more understandable and accountable (Liao et al., 2020; Liao & Varshney, 2021; Shin, 2021).

Another important concern is privacy. Conversational agents often require access to large amounts of data to generate meaningful responses, which raises questions about data security and user privacy. Users may worry about their conversations being stored, analyzed, or used for targeted advertising purposes without their consent. To address these concerns, data protection regulations and policies must be implemented. Developers should ensure that user data is handled securely and used only for the intended purposes, and users should have control over their data and be able to easily understand and manage the permissions granted to AI systems (Emon et al., 2023; Liao et al., 2020; Shin, 2021).

Ethical considerations are also at the forefront of AI adoption. Conversational agents can perpetuate biases and discriminatory behaviors present in the data they are trained on, which can result in unfair or harmful outcomes such as biased recommendations or discriminatory language. To mitigate these issues, developers should actively address bias in training data and algorithms, employ diverse and inclusive datasets to train AI models, and implement bias detection and mitigation techniques (Liao et al., 2020; Shin, 2021). The development of ethical guidelines and standards for AI adoption can further help ensure that these technologies are used in a responsible and fair manner.

Furthermore, the impact of AI on employment is another aspect to consider. As AI technologies continue to advance, there is a concern that they may replace certain jobs or reduce the demand for human labor, with significant societal and economic implications, including potential job displacement and income inequality (W. Wu et al., 2020; Zhou et al., 2020). To address these concerns, policymakers and organizations should focus on reskilling and upskilling initiatives to prepare the workforce for the changing job landscape, and collaboration between AI systems and human workers can be explored to augment human capabilities and create new opportunities. Recent advances in conversational AI technologies like ChatGPT and QuillBot have brought about exciting possibilities, but they also come with challenges that need to be addressed: building trust, ensuring privacy, addressing ethical considerations, and managing the impact on employment are crucial aspects of AI adoption.
By addressing these issues, we can foster the responsible and beneficial use of AI technologies in our society.
2.2.3 Recent Issues and Development of AI Adoption from Individual Perspective
Recent issues and developments in AI adoption from an individual perspective have highlighted the significance of various factors that influence the acceptance and use of AI technologies such as ChatGPT. Attitudes towards AI, performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, and trust are key factors that shape individuals' intentions to adopt and utilize AI technologies, and analyzing them can provide valuable insights into promoting the adoption of ChatGPT among business professionals.

Attitudes towards AI play a crucial role in its adoption. Positive attitudes towards AI are associated with a higher likelihood of adoption: individuals who view AI as a useful tool for enhancing productivity and efficiency are more likely to adopt and utilize AI technologies like ChatGPT. Educating users about the potential benefits of AI and addressing any concerns or misconceptions can help foster positive attitudes towards AI adoption. Performance expectancy, which refers to the perceived benefits and usefulness of AI technologies, influences individuals' intention to adopt AI. If users perceive ChatGPT as a valuable tool that can improve their productivity, provide accurate information, or assist in complex tasks, they are more likely to adopt it. Highlighting the specific features and capabilities of ChatGPT that align with users' needs can enhance their performance expectancy.
Effort expectancy is another important factor. Individuals are more likely to adopt AI technologies that are perceived as easy to use and require minimal effort to interact with. ChatGPT should be designed with a user-friendly interface and intuitive functionalities to minimize the perceived effort required for its utilization. Providing clear instructions and tutorials can also contribute to reducing effort expectancy.

Social influence plays a significant role in AI adoption. Individuals are influenced by the opinions and behaviors of others, including friends, colleagues, and experts. Leveraging social networks and communities to create awareness and generate positive word-of-mouth can greatly enhance the adoption of ChatGPT. Encouraging influential individuals to share their positive experiences with the technology can further amplify its adoption.

Facilitating conditions refer to the availability of necessary resources and support for AI adoption. Providing individuals with the required infrastructure, such as compatible devices and reliable internet connectivity, can facilitate the adoption of ChatGPT. Additionally, offering technical support, training programs, and documentation can help users overcome any potential barriers and increase their confidence in using AI technologies.

Hedonic motivation, which relates to the enjoyment and pleasure derived from using AI technologies, can influence adoption decisions. Designing ChatGPT to provide an engaging and enjoyable user experience can enhance hedonic motivation. Incorporating interactive features, personalization options, and gamification elements can make the interaction with ChatGPT more enjoyable and increase user satisfaction.

Trust is a critical factor in AI adoption. Individuals need to trust that ChatGPT will provide accurate and reliable information, protect their privacy, and perform as expected. Establishing transparency about the technology's limitations, ensuring data security and privacy, and regularly updating and improving the system can foster trust among users.
Understanding the factors that influence AI adoption from an individual perspective is essential for promoting the use of technologies like ChatGPT among business professionals. By addressing user attitudes, highlighting performance expectancy, reducing effort expectancy, leveraging social influence, providing facilitating conditions, enhancing hedonic motivation, and building trust, organizations can encourage individuals to adopt and utilize AI technologies effectively.
2.2.4 Empirical and Theoretical Literature Gaps of AI Adoption
One of the empirical literature gaps in AI adoption is the limited understanding of adoption intention across different domains. Many studies have investigated AI adoption in specific contexts, such as healthcare, finance, or customer service. While these studies provide valuable insights into the factors influencing adoption within those domains, they do not necessarily generalize to other industries or sectors. Different domains may have unique characteristics, challenges, and requirements that can impact AI adoption differently. Therefore, there is a need for research that explores adoption intention across diverse domains to provide a more comprehensive understanding of the factors influencing AI adoption.

Furthermore, while there has been a significant focus on the adoption of AI technologies in general, there is a lack of empirical research specifically examining the adoption of AI language models like ChatGPT. ChatGPT and similar models have gained substantial attention and are being implemented in various applications, ranging from customer support to content generation. However, there is limited empirical research that specifically investigates the factors that drive or hinder the adoption of these AI language models.

Understanding the factors influencing ChatGPT adoption is crucial for several reasons. Firstly, the adoption of AI language models raises ethical concerns related to bias, privacy, and accountability. Empirical research can shed light on how organizations are addressing these concerns and inform best practices for responsible adoption. Secondly, the adoption of AI language models also depends on user acceptance and trust. Factors such as transparency, explainability, and perceived usefulness are likely to influence users' willingness to adopt and interact with AI language models. More empirical studies are needed to explore these factors and their impact on adoption intention.

Additionally, the theoretical literature on AI adoption could benefit from further development. While existing studies have identified various factors influencing adoption, there is a need for more comprehensive theoretical frameworks that integrate these factors and provide a holistic understanding of AI adoption. Such frameworks could help researchers and practitioners identify the most relevant factors and their interrelationships, guiding the development of effective adoption strategies.

Moreover, as AI technology continues to evolve rapidly, there is a need for up-to-date research that captures the current state of AI adoption. Many studies in the literature may be based on data and insights from several years ago, which may not reflect the current landscape. The adoption of AI is a dynamic process influenced by technological advancements, market dynamics, and regulatory changes. Therefore, there is a need for ongoing empirical research that keeps pace with the evolving nature of AI adoption.

While there is a growing body of literature on AI adoption, there are several empirical and theoretical gaps that need to be addressed. Future research should focus on exploring adoption intention across different domains, investigating the specific factors influencing the adoption of AI language models like ChatGPT, developing comprehensive theoretical frameworks, and keeping pace with the dynamic nature of AI adoption.
By addressing these gaps, researchers and practitioners can gain deeper insights into the adoption process and develop strategies to promote the responsible and effective use of AI technologies.
2.3 Theoretical Review
2.3.1 History, Nature, Contents and Explanations of UTAUT
The criticality of user acceptance in new information system (IS) implementations has been a pivotal argument made by Tursunbayeva et al. (2020). Over recent years, the burgeoning interest in understanding and interpreting user responses towards IS has contributed to the development of numerous theoretical models, drawing insights from diverse fields such as IS, psychology, and sociology (Chao, 2019). A popular choice among these models is the Technology Acceptance Model (TAM), which has received significant scholarly attention and support (Marangunić & Granić, 2015; Mugo et al., 2017; Zhong et al., 2021). TAM is primarily focused on analysing perceived utility and ease of use, thereby providing insights into users' responses towards new systems. However, it has also faced criticism for its superficial exploration of human responses (Ali & Anwar, 2021); critics have argued that TAM fails to investigate the intricate relationship between attitudes and usage, as well as intentions and usage, thereby leaving a considerable knowledge gap.

Venkatesh et al. (2003) aimed to bridge this gap by introducing the Unified Theory of Acceptance and Use of Technology (UTAUT). The UTAUT model offers a more comprehensive framework by integrating key elements from eight existing theories and models, thereby improving the prediction and explanation of the adoption of new technologies. The successful application of UTAUT in diverse areas such as home-health services, mobile health, and near-field communication technology has made significant contributions to the study of new technology adoption (Cimperman et al., 2016; R. Hoque & Sorwar, 2017). This study leverages the UTAUT model to evaluate the factors involved in adopting AI technologies such as ChatGPT. Despite the model's success, its ability to reliably predict user responses to new technologies has been questioned by some scholars (Chao, 2019). Furthermore, Li (2020) has expressed concerns over the practicality of the four moderators used in the UTAUT model, suggesting that a simpler model, through an acceptable initial scoring approach, might achieve similar predictive power.

These critiques contributed to the evolution of the original UTAUT model into UTAUT 2, proposed by Venkatesh et al. (2012). The authors added three new factors, hedonic motivation, price value, and habit, thereby enhancing the model's ability to capture consumer acceptance. The updated model (UTAUT 2) offers a more robust predictive capability (Tamilmani, Rana, & Dwivedi, 2021) and has been successfully utilised in areas such as AI in healthcare and m-commerce, further establishing its effectiveness and practical relevance (Agarwal & Sahu, 2022; Albahri et al., 2022; Khan et al., 2022; Vinerean et al., 2022). However, it is crucial to note that the applicability of certain factors, such as hedonic evaluations, may vary depending on the context. For example, the hedonic factor may not be relevant where technology usage is not intended to be enjoyable. Similarly, the price-value factor may not hold significance where the cost of the technology is not directly perceived by users, such as when a ChatGPT Plus subscription is purchased on their behalf. The context of technology usage thus plays a significant role in determining the relevance of different factors in the UTAUT model.
In light of these considerations, it is clear that while the UTAUT and UTAUT 2 models offer compelling frameworks for understanding user acceptance and adoption of new technologies, they are not one-size-fits-all solutions. Both the nature of the technology and the context in which it is used can significantly influence the relevance and impact of the different elements of these models. AI technologies such as ChatGPT, the focus of this study, represent a complex and evolving class of technology. As such, performance expectancy, effort expectancy, social influence, and facilitating conditions from the original UTAUT model are likely to play a significant role in its adoption. For example, users may be more likely to accept and use the system if they believe it will enhance their productivity (performance expectancy), if they find it easy to use (effort expectancy), if they observe their peers using it (social influence), and if they have the necessary resources and support to use it (facilitating conditions).

In addition to these factors, hedonic motivation from UTAUT 2 and trust, which this study incorporates as an extension, could also influence the adoption of ChatGPT. For instance, users might find the system more appealing if they derive enjoyment from using it (hedonic motivation). As noted above, however, the relevance of such factors can vary with context; the hedonic factor might be less important in a professional setting where the primary goal is to improve productivity rather than to provide enjoyment. Moreover, the unique features and capabilities of AI technologies like ChatGPT might necessitate the consideration of factors not included in the UTAUT or UTAUT 2 models. For instance, trust in the system's AI capabilities might be a critical factor influencing user acceptance, and concerns about data privacy and security, which are particularly relevant in the context of AI technologies, could also affect adoption.

While the UTAUT and UTAUT 2 models provide valuable frameworks for understanding user acceptance and adoption of new technologies, they should therefore be applied flexibly and supplemented with additional factors as needed to reflect the complexities and nuances of different technologies and usage contexts. Future research could explore the development of an enhanced model that incorporates these additional factors, providing a more nuanced and comprehensive understanding of user acceptance and adoption of complex and evolving AI technologies like ChatGPT.
2.3.2 Theoretical Development, Gaps, and Expectations
Theoretical development in technology adoption models has been a dynamic and evolving area of research. Models such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) have served as the foundation for understanding individuals' intentions to adopt and use various technologies (de Sena Abrahão et al., 2016). With the advent of artificial intelligence (AI) and its applications, there is a need to extend and modify these models to capture the unique characteristics of AI technologies. In the case of ChatGPT, a language model developed by OpenAI, there are still gaps in our understanding of how specific factors influence individuals' adoption intentions.
One important factor that requires further exploration is trust. Trust is a critical component in technology adoption as it affects individuals' willingness to rely on and use a particular technology (Beldad & Hegner, 2018). In the context of ChatGPT, users need to trust that the system will provide accurate and reliable information, respect their privacy and data security, and act in their best interests. Building trust in AI systems can be challenging due to their complex nature and potential for bias or unintended consequences (Kelly et al., 2019). Future research should delve into the factors that contribute to trust in ChatGPT, such as system transparency, explainability, and accountability. Understanding how trust develops and its impact on adoption intentions can inform the design and implementation of AI systems that inspire confidence and enhance user acceptance. Another factor that warrants further investigation is hedonic motivation. While many technology adoption models have primarily focused on utilitarian factors such as usefulness and ease of use, the hedonic aspects of technology use, such as enjoyment and entertainment value, are increasingly relevant in the context of AI technologies. ChatGPT, with its conversational capabilities and ability to generate creative responses, offers users a unique and engaging experience. Exploring the role of hedonic motivation in the adoption of ChatGPT can help us understand why individuals choose to use the system beyond its functional benefits. Additionally, examining how the hedonic aspects interact with utilitarian factors can provide a more comprehensive understanding of adoption intentions and user behavior.
The theoretical development of technology adoption models should also consider the social and cultural factors that shape individuals' adoption intentions. AI technologies, including ChatGPT, are not developed and adopted in a vacuum but within specific sociocultural contexts. Factors such as social norms, perceived societal impact, and cultural values can influence individuals' attitudes and intentions towards AI adoption. For instance, individuals from collectivist cultures may prioritize the opinions of their social networks when considering the adoption of AI technologies. Future research should explore the interplay between these sociocultural factors and individual adoption intentions to provide a more nuanced understanding of AI technology adoption.
To bridge these gaps and develop a more comprehensive theoretical framework for understanding the adoption of ChatGPT, researchers should employ a multi-method approach that combines qualitative and quantitative methods. Qualitative studies can help uncover in-depth insights into users' perceptions, attitudes, and experiences with ChatGPT. These studies can be conducted through interviews, focus groups, or observations to capture the richness and complexity of users' adoption processes. On the other hand, quantitative studies can provide broader insights by examining the relationships between various factors and adoption intentions on a larger scale. Surveys and experiments can be used to collect quantitative data, allowing for statistical analysis and the identification of significant predictors of adoption intentions.

While technology adoption models have provided valuable insights into the factors that influence individuals' intentions to adopt and use technology, there are still gaps in our understanding of how specific factors, such as trust and hedonic motivation, impact the adoption of AI technologies like ChatGPT. Future research should address these gaps by investigating the role of trust and hedonic motivation, considering the influence of social and cultural factors, and employing a multi-method approach.
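As an illustration of the quantitative strand of such a multi-method design, the sketch below shows how survey responses measuring the UTAUT-style constructs discussed in this chapter might be analysed to identify significant predictors of behavioural intention to adopt ChatGPT. It is a minimal sketch only: the file name "survey.csv", the item and construct names, and the five-point Likert scoring are hypothetical assumptions for illustration, not data or measures from any study reported here.

```python
# Hypothetical sketch: regressing behavioural intention to use (BIU) on
# UTAUT-style constructs measured with multi-item Likert scales.
# File name, column names, and scoring are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per respondent, items scored 1-5

# Average multi-item scales into construct scores (e.g., pe1-pe3 -> PE).
constructs = {
    "PE": ["pe1", "pe2", "pe3"],       # performance expectancy
    "EE": ["ee1", "ee2", "ee3"],       # effort expectancy
    "SI": ["si1", "si2", "si3"],       # social influence
    "FC": ["fc1", "fc2", "fc3"],       # facilitating conditions
    "HM": ["hm1", "hm2", "hm3"],       # hedonic motivation
    "TR": ["tr1", "tr2", "tr3"],       # trust
    "BIU": ["biu1", "biu2", "biu3"],   # behavioural intention to use
}
for name, items in constructs.items():
    df[name] = df[items].mean(axis=1)

# Ordinary least squares regression of intention on the predictors.
model = smf.ols("BIU ~ PE + EE + SI + FC + HM + TR", data=df).fit()
print(model.summary())  # coefficients indicate which constructs predict BIU
```

A full study would, of course, also assess scale reliability and validity and would more commonly estimate these relationships with structural equation modelling rather than a single regression; the sketch is intended only to make the quantitative step concrete.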
2.3.3 Theoretical Justifications and Relationships (Between and Among) Each Variable and Dimension
In the context of technology acceptance and usage, various theoretical models explore the relationships among the variables examined in this study. One prominent model is the UTAUT, which provides insights into the theoretical justifications for, and relationships between and among, these variables and dimensions.
- Attitude towards Artificial Intelligence (ATAI): ATAI refers to an individual's overall evaluation of, and disposition towards, artificial intelligence. It is influenced by multiple factors, including perceived usefulness, ease of use, social influence, and hedonic motivation. A favourable attitude towards AI is expected to lead to a greater inclination to use AI-based systems (Montag et al., 2023; Sindermann et al., 2022).
- Performance Expectancy (PE): PE refers to an individual's perception of the potential benefits of using AI technology for enhancing their performance and productivity. PE is positively related to attitude towards AI and to the behavioural intention to use it: individuals are more inclined to develop a favourable attitude towards AI when they perceive it as advantageous and anticipate that it will improve their performance (Alfalah, 2023; Upadhyay et al., 2022; Venkatesh et al., 2003).
- Effort Expectancy (EE): EE refers to the perceived ease of adopting and using AI technology, that is, how easy individuals find it to learn and operate AI systems. Higher levels of EE are associated with a more favourable disposition towards AI and an increased propensity to use it (Sohn & Kwon, 2020; Upadhyay et al., 2022; Venkatesh et al., 2003).
- Social Influence (SI): SI refers to the impact that other individuals have on a person's behaviour and attitude, encompassing subjective norms and social interactions relating to the adoption and use of artificial intelligence. Positive social influences, such as recommendations from credible sources or observing others' positive experiences with AI, can foster favourable attitudes towards AI and increase the behavioural intention to use it (Sohn & Kwon, 2020; Upadhyay et al., 2022; Venkatesh et al., 2003).
- Facilitating Conditions (FC): FC refer to the availability of the resources and support needed to use AI, comprising elements such as technological infrastructure, organisational backing, and access to training and guidance. Individuals who perceive adequate facilitating conditions are more likely to hold a positive attitude towards AI and to intend to use it (Chatterjee & Bhattacharjee, 2020; Venkatesh et al., 2003).
- Hedonic Motivation (HM): HM relates to the enjoyment and pleasure individuals derive from using AI technology, encompassing entertainment, curiosity, and the enjoyment of novel experiences. Higher hedonic motivation leads to a more positive attitude towards AI and a greater likelihood of behavioural intention to use it (Aldossari & Sidorova, 2020; Venkatesh et al., 2012).
- Trust: Trust refers to the level of confidence and reliance that an individual places in artificial intelligence technology. It is strongly influenced by system reliability, security, privacy, and transparency. Trust plays a central role in shaping an individual's attitude towards AI and their inclination to use it: higher levels of trust are associated with a greater propensity to adopt AI and a more favourable attitude towards it (Chatterjee & Bhattacharjee, 2020; Venkatesh et al., 2012).
- Behavioural Intention to Use (BIU): BIU refers to an individual's intention to use artificial intelligence (AI) technology. It is shaped by attitude towards AI, perceived usefulness, perceived ease of use, social influence, facilitating conditions, hedonic motivation, and trust: a favourable attitude towards AI, higher perceived usefulness and ease of use, stronger social influence, supportive facilitating conditions, greater hedonic motivation, and higher trust all increase the likelihood that an individual intends to use AI (Sharma et al., 2022; Shi et al., 2021; Venkatesh et al., 2003).
- Actual Use (AU): AU refers to the extent to which individuals actively use artificial intelligence (AI) technology in real-world settings. It represents the observable behaviour of individuals employing AI systems or applications to perform tasks, achieve goals, or fulfil specific needs. AU serves as an outcome measure and can be influenced by various factors, including BIU, PE, EE, SI, and FC (Malodia et al., 2021).
These variables and dimensions are interconnected, and their relationships can be explained through the UTAUT framework, which highlights the roles of attitude, performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, trust, and behavioural intention in shaping the use of AI.
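A compact way to summarise these hypothesised relationships is as a pair of structural equations, sketched below. The linear additive form and the coefficient symbols are assumptions made for illustration only; the equations simply restate the paths described above, with the attitudinal and UTAUT constructs predicting behavioural intention, and intention together with facilitating conditions predicting actual use.

```latex
\begin{aligned}
\mathrm{BIU} &= \beta_{0} + \beta_{1}\,\mathrm{ATAI} + \beta_{2}\,\mathrm{PE} + \beta_{3}\,\mathrm{EE} + \beta_{4}\,\mathrm{SI} + \beta_{5}\,\mathrm{FC} + \beta_{6}\,\mathrm{HM} + \beta_{7}\,\mathrm{Trust} + \varepsilon_{1} \\
\mathrm{AU}  &= \gamma_{0} + \gamma_{1}\,\mathrm{BIU} + \gamma_{2}\,\mathrm{FC} + \varepsilon_{2}
\end{aligned}
```

Under the relationships reviewed above, the coefficients on each predictor of BIU, and the coefficient linking BIU to AU, would all be expected to be positive.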
2.5 Empirical Review
Empirical investigations have been carried out to explore the determinants that affect individuals' adoption and use of systems and technologies based on artificial intelligence. Numerous studies have employed the Unified Theory of Acceptance and Use of Technology (UTAUT) framework to investigate the determinants of the adoption of AI-based systems and technologies. Huang et al. (2018) investigated the implementation of a smart chatbot system within the healthcare industry and found that performance expectancy, effort expectancy, social influence, and facilitating conditions had a significant impact on users' inclination to use the system; users' intention was also influenced by their trust in the system and by hedonic motivation. Brill et al. (2019) examined the use of AI-based personal assistants and found that performance expectancy, effort expectancy, social influence, and facilitating conditions significantly influenced users' intention to use the personal assistant, and that trust in the system and hedonic motivation had an additional positive influence.
Mogaji et al. (2021) investigated the determinants of AI-based chatbot adoption within the banking sector and found that performance expectancy, effort expectancy, social influence, and facilitating conditions significantly affected users' intention to use the chatbot, and that trust in the chatbot also had a notable impact on this intention.
In general, empirical studies indicate that various factors, such as performance expectancy, effort expectancy, social influence, facilitating conditions, trust, and hedonic motivation, have an impact on the adoption and utilisation of AI-based systems and technologies by individuals. The results align with the Unified Theory of Acceptance and Use of Technology (UTAUT) model, and offer significant implications for designers and developers of artificial intelligence (AI) systems and technologies, in terms of encouraging their adoption and utilisation.
2.5.1 Critical Literature Review and Justification for the Relationships (Between and Among) Each Variable and Dimension
Numerous scholarly inquiries have examined the interconnections among the variables and dimensions of the UTAUT model in the context of artificial intelligence-based systems and technologies. Presented here is a critical review of the literature and the rationale for the relationships between and among each variable and dimension:
ATAI and BIU:
Research has indicated a positive relationship between ATAI and BIU, implying that individuals with a more favourable attitude towards AI are more inclined to use AI-based systems and technologies (Venkatesh et al., 2012). This relationship is justified by the premise that an individual's attitude towards a technology, shaped by its perceived usefulness and ease of use, underpins their intention to use it.
PE and BIU:
Research has established a relationship between Performance Expectancy (PE) and Behavioural Intention to Use (BIU): individuals who perceive that using AI-based systems and technologies will improve their performance and productivity are more inclined to intend to use these technologies (Rho et al., 2015; Venkatesh et al., 2003). This relationship is justified by the premise that individuals are more likely to adopt a technology when they perceive it as enhancing their performance and productivity.
EE and BIU:
The relationship between EE and BIU has been established in previous research, indicating that individuals who perceive AI-based systems and technologies as easy and simple to use are more inclined to adopt them (Venkatesh et al., 2003). This is justified by the observation that individuals are more inclined to use a technology if they perceive it as user-friendly and requiring minimal effort.
SI and BIU:
Research has shown a significant positive relationship between social influence (SI) and behavioural intention to use (BIU), suggesting that individuals who perceive that significant others endorse the use of artificial intelligence (AI)-based systems and technologies are more inclined to intend to use them (Bu et al., 2021; Venkatesh et al., 2003). This is justified because social influence can substantially shape an individual's beliefs and attitudes towards a technology and, in turn, their inclination to use it.
FC and BIU:
Research has indicated a significant positive relationship between FC and BIU, implying that individuals who possess the necessary resources and support to operate AI-based systems and technologies are more inclined to use them (Venkatesh et al., 2003). This is justified by the premise that the availability of resources and assistance enhances an individual's capacity to use a technology and thereby influences their intention to use it.
HM and BIU:
Research has demonstrated a significant positive relationship between HM and BIU, suggesting that individuals who derive satisfaction and enjoyment from using AI-based systems and technologies are more inclined to intend to use them (L. Chen et al., 2021; Venkatesh et al., 2012). This is justified by the premise that hedonic motivation shapes an individual's behaviour and perceptions of a technology and thereby influences their inclination to adopt and use it.
Trust and BIU:
Research has revealed a significant positive relationship between trust and BIU, implying that individuals who trust AI-based systems and technologies are more inclined to intend to use them (L. Chen et al., 2021; Venkatesh et al., 2012). This is justified by the notion that trust plays a pivotal role in shaping an individual's assessment of a technology's dependability, safety, and efficacy, and thereby their behavioural intention.
BIU and AU:
The relationship between Behavioural Intention to Use (BIU) and Actual Use (AU) has been extensively examined in studies of technology acceptance and use. The literature has consistently demonstrated a positive relationship between BIU and AU of artificial intelligence (AI)-based systems and technologies, suggesting that individuals with a strong intention to use such systems are more likely to engage with them in practical settings (Eraslan Yalcin & Kutlu, 2019; Venkatesh et al., 2003). The justification for this association is rooted in the theory of planned behaviour, which posits that the intention to engage in a behaviour is a crucial determinant of its actual execution. According to Venkatesh et al. (2003), a robust intention to use a technology can act as a motivational catalyst, propelling individuals towards active adoption and engagement and ultimately towards more effective usage. Realising their intention through practical use, in turn, reinforces users' conviction in the utility and efficacy of the technology and strengthens their commitment to sustained use.

The association between BIU and AU may also be affected by other factors within the UTAUT framework. The translation of intention into actual usage can be influenced by the perceived performance expectancy and effort expectancy of AI-based systems and technologies: when users encounter favourable results and find the technology easy to use during initial adoption, their intention is strengthened and sustained usage is promoted (Venkatesh et al., 2003). Likewise, social elements such as social influence and facilitating conditions can contribute to the conversion of BIU into AU, since supportive resources and positive social interactions can enhance individuals' confidence and ability to use the technology effectively (Bu et al., 2021; Venkatesh et al., 2003).

It is noteworthy that extant research has consistently demonstrated a positive association between BIU and AU; however, the magnitude and character of this association may differ depending on the technological context, user demographics, and the particular operationalisation of the UTAUT framework. Further empirical investigation is therefore required to examine and substantiate the relationship between BIU and AU for particular AI systems and technologies.
In general, the Unified Theory of Acceptance and Use of Technology (UTAUT) model offers a valuable structure for comprehending the determinants that impact the adoption and utilisation of artificial intelligence (AI)-based systems and technologies by individuals. The UTAUT model's variables and dimensions exhibit coherence with both theoretical and empirical evidence, thereby offering significant perspectives for technology designers and developers, as well as organisations aiming to encourage technology adoption and utilisation.