This study addresses the need for responsible AI adoption in higher education by developing a human-centred ethical extension of the UTAUT2 model. The framework integrates two new constructs, AI fairness and human autonomy support, and three ethical moderators: ethical risk awareness, perceived algorithmic bias, and user autonomy concern. To validate the framework, an empirical investigation was conducted with 400 respondents using a structured questionnaire, and the data were analysed via regression. All sixteen hypotheses were supported. The model demonstrated strong predictive power, explaining 72.2% of the variance in behavioural intention and 69.1% of the variance in use behaviour. The results offer meaningful insights into how ethical perceptions shape adoption decisions. Ultimately, the framework provides practical guidance for policymakers, educators and developers seeking to ensure fair, trustworthy and human-centric AI integration in learning environments.
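The "variance explained" figures above are R² values from the regression analysis. As a minimal sketch of how R² is computed (the study's actual survey data and multi-construct model are not reproduced here; the function below fits a simple one-predictor OLS regression purely for illustration):

```python
def r_squared(xs, ys):
    """Fit simple OLS y = a + b*x and return R^2 = 1 - SSE/SST."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # OLS slope and intercept from the normal equations
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    # Residual (SSE) and total (SST) sums of squares
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum((y - mean_y) ** 2 for y in ys)
    return 1 - sse / sst

# A perfectly linear relationship yields R^2 = 1.0
print(r_squared([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```

An R² of 0.722 for behavioural intention means the model's predictors jointly account for 72.2% of its variance, with the remainder attributable to unmodelled factors and noise.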
Copyright © 2026