The growing use of generative artificial intelligence (AI) in higher education raises questions about how students assess and adopt these systems, and in particular whether traditional utilitarian models are sufficient to explain their use. This study compares the Technology Acceptance Model (TAM) and the Human-Centered AI Acceptance Model (HCAIAM) in explaining students’ behavioral intention to use ChatGPT, examining how functional and human-centered factors operate within a single analytical framework. A cross-sectional design was used with 100 undergraduate students in Indonesia selected through convenience sampling, and the data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that TAM provides stronger explanatory power and better model fit (R² = 0.765; SRMR = 0.073) than HCAIAM (R² = 0.709; SRMR = 0.136). Perceived usefulness and perceived ease of use emerge as the main drivers of intention, indicating that students use ChatGPT primarily as a tool to support academic tasks. In contrast, human-centered factors such as transparency and ethical alignment influence intention only indirectly, through trust and attitude. The autonomy construct shows weak reliability and overlaps with other constructs, suggesting limitations in its measurement. These findings indicate that utilitarian factors remain central in this context while human-centered aspects play a more conditional role, pointing to a layered pattern of AI acceptance in which different types of factors operate at different levels.
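The model-fit comparison above can be sketched as a small check. The R² and SRMR values are taken from this abstract; the SRMR < 0.08 cutoff is a widely used heuristic for acceptable fit in PLS-SEM reporting and is an assumption here, not a threshold stated by the study itself.

```python
# Sketch of the fit comparison reported in the abstract.
# R² and SRMR values come from the abstract; the 0.08 SRMR cutoff
# is a conventional PLS-SEM heuristic (assumed, not stated above).

models = {
    "TAM": {"r2": 0.765, "srmr": 0.073},
    "HCAIAM": {"r2": 0.709, "srmr": 0.136},
}

SRMR_CUTOFF = 0.08  # commonly cited acceptable-fit threshold


def summarize(name, stats):
    """Return (name, R², SRMR, whether SRMR is below the cutoff)."""
    fit_ok = stats["srmr"] < SRMR_CUTOFF
    return (name, stats["r2"], stats["srmr"], fit_ok)


for name, stats in models.items():
    n, r2, srmr, ok = summarize(name, stats)
    print(f"{n}: R²={r2:.3f}, SRMR={srmr:.3f}, acceptable fit={ok}")
```

Under this heuristic, TAM's SRMR of 0.073 falls within the acceptable-fit range while HCAIAM's 0.136 does not, which is consistent with the abstract's conclusion that TAM provides the better-fitting model.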
Copyright © 2026