The latent profile analysis of universities’ acceptance of generative AI tools in higher education revealed four distinct profiles, each characterized by a unique pattern across acceptance of generative AI, international student ratios, citations per faculty, academic reputation, and faculty-student ratios. These profiles provide valuable insights into how different types of universities are navigating the integration of generative AI, highlighting varying degrees of acceptance and the underlying characteristics associated with each stance.
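For readers unfamiliar with the method, the analytic workflow can be illustrated with a brief sketch. Latent profile analysis with continuous indicators is statistically equivalent to a finite Gaussian mixture model, so the example below uses scikit-learn's GaussianMixture on synthetic data. The sample size, indicator values, and the resulting solution are illustrative assumptions only; they do not reproduce the study's data or estimates.

```python
# A minimal LPA-style sketch using a Gaussian mixture model (assumed
# workflow, not the study's actual estimation procedure or data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical standardized indicators for 100 universities:
# acceptance of generative AI, international student ratio,
# citations per faculty, academic reputation, faculty-student ratio.
X = rng.normal(size=(100, 5))

# Fit candidate models with 1-6 profiles and compare BIC, a common
# criterion for choosing the number of latent profiles.
bics = []
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         random_state=0).fit(X)
    bics.append(gm.bic(X))

best_k = int(np.argmin(bics)) + 1  # number of profiles with lowest BIC
best = GaussianMixture(n_components=best_k, covariance_type="diag",
                       random_state=0).fit(X)
profiles = best.predict(X)                       # most likely profile per university
shares = np.bincount(profiles) / len(profiles)   # profile sizes (e.g., 29.4%)
```

In practice, profile enumeration would also weigh entropy, likelihood-ratio tests, and interpretability alongside BIC, and each profile would then be described by its mean indicator values, as in the four profiles discussed here.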
Profile 1, which comprises 29.4% of the sample, represents universities strongly opposed to the unauthorized use of generative AI. These institutions emphasize academic integrity and treat the unpermitted use of AI as plagiarism. This profile is consistent with previous findings highlighting concerns about academic integrity and the potential for AI tools to be misused for plagiarism and fabricated references (Kohnke et al., 2023; Yusuf et al., 2024). The high international student ratios, citations-per-faculty scores, and academic reputations of these universities suggest that institutions with strong research output and internationalization efforts are more likely to adopt stringent stances on AI usage. A diverse international student body brings a wide range of opinions and behaviors concerning generative AI, which creates uncertainty. In particular, universities with high citations-per-faculty scores and strong academic reputations, often prestigious institutions, may perceive greater risk in this uncertainty and therefore hesitate to accept the technology.
Profile 2, accounting for 20.6% of the sample, includes universities that support the responsible and ethical use of generative AI. These institutions recognize the educational value of AI tools and provide clear guidelines for their appropriate use. This support is in line with the benefits identified in the literature, such as assisting non-native English speakers, improving art education, aiding idea generation, and accurately scoring essays (Chan & Hu, 2023; Dehouche & Dehouche, 2023; Mizumoto & Eguchi, 2023). However, these universities have low international student ratios and citations-per-faculty scores and only moderate academic reputations, indicating that while they may not lead in research output or internationalization, they proactively integrate innovative technologies, likely as a way to remain competitive (Kasneci et al., 2023; Michel-Villarreal et al., 2023). Consistent with this notion of competitive motivation, Profile 2 universities score high on faculty-student ratios, meaning that more academic staff resources, such as teaching and supervision, are available to students.
Profile 3, representing 23.5% of the sample, includes universities with a neutral stance on generative AI. These institutions neither endorse nor prohibit AI use but stress the importance of academic integrity and ethical usage. Their high international student ratios and moderate academic reputations suggest a strong international presence, while low citations-per-faculty scores point to a struggle with research output. This profile may indicate that these universities seek to strengthen their academic reputation by benefiting from AI tools but remain wary of the risk associated with the uncertainty that a diverse student body brings (Chan & Lee, 2023).
Profile 4, which accounts for 26.5% of the sample, also represents universities with a neutral stance on generative AI. These institutions have high academic reputations and very high citations-per-faculty scores, indicating robust research output. They have a moderate international presence but very low faculty-student ratios, meaning fewer academic staff resources are available to students for teaching and supervision. This profile suggests that, to integrate generative AI effectively and promote student development, universities should safeguard the quality of education alongside continued research excellence. This is in line with previous research suggesting that universities manage the affordances and contradictions of AI-generated text in ways that support student learning outcomes (Chan & Hu, 2023; Warschauer et al., 2023).
Implications for Practice
To effectively manage the use of generative AI tools like ChatGPT and Bard while ensuring academic integrity, universities need to establish clear guidelines. The main research question of the present study concerns the types of universities that accept generative AI, providing valuable insights for creating such guidelines in three respects.
First, universities would do well to assess their level of acceptance of generative AI and consider their unique characteristics, such as the ratio of international students, research output, academic reputation, and faculty-student ratios, to tailor their AI integration strategies. Interestingly, institutions with higher international student ratios and stronger research output are often less accepting of AI tools, indicating a potential bias in the conversation about AI applications. Rather than relying on assumptions, universities need to base their AI integration strategies on real experiences. By actively engaging with students through surveys and forums, institutions can gain valuable insights into how AI technologies are actually being used. This approach ensures that policies reflect student needs and expectations. Conversely, universities with a lower international presence and research output may be more inclined to adopt AI tools to improve their competitive edge. For these institutions, creating a supportive environment and establishing clear ethical guidelines for AI use can encourage responsible adoption and maximize the benefits of these technologies.
Second, the four policy orientations towards generative AI identified here are important for faculty members. Students are generally more familiar with information technology, including generative AI, than faculty members are (Chan & Lee, 2023). This discrepancy in familiarity is notable across different regions and educational contexts. In Australia, for instance, universities have been slow to react, often taking countermeasures against AI-related fraud only after incidents have occurred (Slade, 2023). As part of these measures, assessment tasks are being re-evaluated to ensure they accurately measure student learning outcomes even when AI tools are used (see an example from the University of Texas at Austin). This trend is reflected in recent discussions within our department on what constitutes effective assessment tasks. The practical implications of the present findings can help educators select appropriate reference tasks from other universities when designing their own assignments.
Third, the different policy orientations towards generative AI can inform governments as well as higher education institutions. Table 2 shows that almost all Australian universities fall into Profile 4, which may be partly due to government-level guidelines on generative AI. Moreover, Australian universities may be less well known internationally than their US counterparts, which are more likely to fall into Profile 1. It is also notable that Profile 2 is composed mainly of East Asian universities. These distinctions highlight the varying levels of AI acceptance and integration policies across regions, providing further context for developing effective AI strategies in higher education.
Future Directions
Our findings, which delineate profiles of universities based on their stance towards the use of generative AI, can be effectively integrated with previous research on AI acceptance at the student and faculty levels.
A survey study on AI acceptance at the student level highlighted the role of supportive environments and expectancy–value beliefs in fostering students’ intentions to learn AI (Guo & Wang, 2023). Our study supports this by showing that supportive environments at the institutional level can be significantly associated with AI acceptance. Specifically, our Profile 2 universities, which support the responsible and ethical use of AI and provide clear guidelines, reflect the positive impact of a supportive environment. Despite having low international student ratios and research output, these universities prioritize individualized supervision and ethical guidelines, which is consistent with Guo and Wang’s finding on the importance of supportive environments for fostering positive AI adoption intentions among students.
Further, an international survey study on AI acceptance at the student level explored the multifaceted factors driving the adoption of ChatGPT among university students and highlighted the importance of institutional policies for technology acceptance (Abdaljaleel et al., 2024). Our study extends these insights by providing a typology of institutional responses to AI use. For instance, Profile 1 universities, which oppose unauthorized AI use and emphasize ethical guidelines, align with Abdaljaleel et al.’s finding that clear institutional policies and guidelines are crucial for responsible AI adoption among students and faculty.
Furthermore, another survey study on AI acceptance at the student and faculty levels highlighted the role of habit, performance expectancy, and hedonic motivation in the behavioral intention to adopt AI tools (Strzelecki, 2024). The profiles in our study show how institutional characteristics can shape these factors. For example, Profile 4 universities, with their high research output and strong academic reputations, might foster a culture in which performance expectancy and habitual use of AI tools are more prevalent. This suggests that institutions with high citations-per-faculty scores and strong reputations can leverage their strengths to promote habitual and motivated use of AI tools, thereby enhancing overall acceptance.
Therefore, future research can build on our findings by examining how specific institutional characteristics and policies influence the individual-level determinants of AI acceptance identified in previous studies. By integrating institutional-level insights with student and faculty-level factors, researchers can develop more holistic strategies for promoting the ethical and effective use of AI in higher education.
Limitations
While the current study has implications for research and practice, it also has some limitations worth noting. The sample is restricted to guidelines written in English from the top 100 universities worldwide. Although this focus allows for international comparisons, it overlooks universities that publish guidelines only in their local language, potentially missing important perspectives. Moreover, the majority of top-ranked universities are located in the U.S. and a few other countries, which may have limited the representation of other nations. To address these issues, collaboration with researchers who are native speakers of the respective languages is essential for a more comprehensive understanding. Additionally, concerning the relationship between institutional characteristics and the acceptance of generative AI, the indices were measured before the generative AI guidelines were developed, ensuring the temporal ordering of variables, a crucial component of causal inference. However, this study presents only initial evidence of the relationship, and further validation is necessary to establish a robust link between institutional characteristics and the acceptance of generative AI.