Articles

Found 3 Documents

Advancing Inclusive Educational VR: A Bibliometric Study of Interface Design
Maguraushe, Kudakwashe; Masimba, Fine; Chimbo, Bester
Journal of Information System and Informatics Vol 7 No 3 (2025): September
Publisher : Universitas Bina Darma

DOI: 10.51519/journalisi.v7i3.1271

Abstract

While virtual reality (VR) has shown transformative potential in education, its accessibility and inclusivity for learners with disabilities remain insufficiently explored. This study offers the first bibliometric mapping of educational VR interface design for inclusivity, analysing 4,735 documents from 1,714 sources (2020-2025) using Biblioshiny and VOSviewer. The analysis reveals a 13.22% annual publication growth rate, an average of 10 citations per document, and an international co-authorship rate of 25.85%, reflecting both rapid expansion and increasing collaboration. Dominant research themes include user experience, usability, and the metaverse, while underexplored areas such as cognitive accessibility and neurodiverse learners highlight emerging opportunities. The findings demonstrate a concentration of scholarly activity in North America and Asia, with limited representation from the Global South. Practically, the study informs developers on designing adaptive interfaces, guides educators in implementing inclusive VR pedagogies, and provides policymakers with evidence for promoting equitable digital learning ecosystems. By identifying trends, gaps, and collaboration patterns, this research advances the discourse on inclusive educational VR and underscores the need for interdisciplinary, AI-driven accessibility strategies that ensure equitable participation for all learners.
Integrating Human-Centered AI into the Technology Acceptance Model: Understanding AI-Chatbot Adoption in Higher Education
Masimba, Fine; Maguraushe, Kudakwashe; Chimbo, Bester
Journal of Information System and Informatics Vol 7 No 4 (2025): December
Publisher : Asosiasi Doktor Sistem Informasi Indonesia

DOI: 10.63158/journalisi.v7i4.1316

Abstract

Artificial intelligence (AI) is transforming education by enhancing assessments, personalizing learning, and improving administrative efficiency. However, the adoption of AI-powered chatbots in higher education remains limited, primarily due to concerns about trust, transparency, explainability, perceived control, and alignment with human values. While the Technology Acceptance Model (TAM) is commonly used to explain technology adoption, it does not fully address the challenges posed by AI systems, which require human-centered safeguards. To address this gap, this study extends TAM by incorporating Human-Centered AI (HCAI) principles—explainability, transparency, trust, and perceived control—resulting in the HCAI-TAM framework. An empirical study with 300 respondents was conducted using a structured English questionnaire, and regression analysis was applied to assess the relationships among variables. The model explained 65% (R² = 0.65) of the variance in behavioral intention and 55% (R² = 0.55) in usage behavior. The findings highlight that integrating HCAI principles into TAM enhances user adoption of AI chatbots in higher education, contributing both theoretically and practically.
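The abstract above reports variance explained via R² from a regression analysis. As a purely illustrative sketch (not the study's actual analysis, and with made-up toy data rather than the 300 survey responses), a single-predictor ordinary-least-squares fit and its R² can be computed as:

```python
# Minimal single-predictor OLS sketch: fit y = a + b*x by least squares
# and report R^2, the proportion of variance explained.
# Toy data only -- illustrative, not the study's survey responses.

def ols_r2(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx            # slope estimate
    a = my - b * mx          # intercept estimate
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Perfectly linear toy data, so R^2 comes out as 1.0
a, b, r2 = ols_r2([1, 2, 3, 4], [2, 4, 6, 8])
```

In the study's multi-construct setting a multiple-regression tool would be used instead, but the R² interpretation (share of variance in behavioral intention explained by the predictors) is the same.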
Ethical Adoption of AI-Powered EdTech in Higher Education: Human-AI Interaction through an Ethically Extended UTAUT2 Model
Masimba, Fine; Maguraushe, Kudakwashe; Chimbo, Bester
The Indonesian Journal of Computer Science Vol. 15 No. 1 (2026)
Publisher : AI Society & STMIK Indonesia

DOI: 10.33022/ijcs.v15i1.5079

Abstract

This study addresses the need for responsible AI adoption in higher education by developing a human-centred ethical extension of the UTAUT2 model. It integrates two new constructs, AI fairness and human autonomy support, and three ethical moderators: ethical risk awareness, perceived algorithmic bias, and user autonomy concern. To validate the framework, an empirical investigation was conducted with 400 respondents using a structured questionnaire, with data analysed via regression. All sixteen hypotheses were supported. The model demonstrated strong predictive power, explaining 72.2% of the variance in behavioural intention and 69.1% in use behaviour. The results provide meaningful insights into how ethical perceptions influence adoption. Ultimately, the framework offers practical guidance for policymakers, educators, and developers seeking to ensure fair, trustworthy, and human-centric AI integration in learning environments.