Al-Ishlah: Jurnal Pendidikan
Vol 17, No 3 (2025): SEPTEMBER 2025

English Education Students' Perceptions of Automated vs Human Assessment in Spoken English Proficiency

Nur Aeni (Universitas Negeri Makassar)
Muhalim Muhalim (Universitas Negeri Makassar)
Hasriani Ganteng (Universitas Negeri Makassar)
Muhammad Tahir (Universitas Negeri Makassar)
Ahmad Talib (Universitas Negeri Makassar)



Article Info

Publish Date
02 Oct 2025

Abstract

The increasing use of automated evaluation systems in language assessment raises questions about their acceptance and perceived fairness compared to human evaluation. This study examines how English Education students perceive automated and human assessment of spoken English proficiency, focusing on factors influencing acceptance and preferences for hybrid models. A mixed-methods design was employed with 120 English Education students (80 female, 40 male) from Universitas Negeri Makassar. Quantitative data were collected using a 20-item Likert-scale questionnaire (Cronbach’s α = .87) covering six dimensions: Perceived Ease of Use, Perceived Usefulness, Attitude Toward Technology, Self-Efficacy, Behavioral Intention, and Personal Innovativeness. Qualitative data from semi-structured interviews explored students’ experiences and preferences regarding automated and human evaluation. Descriptive statistics indicated generally positive perceptions of automated evaluation, with the highest mean scores for “Automated feedback helps improve pronunciation and fluency” (M = 3.9, SD = 0.928) and “I enjoy playing with new technology in language acquisition” (M = 4.0, SD = 1.071). However, the lowest score for “I plan to use automated evaluation frequently” (M = 2.7, SD = 1.071) reflected hesitancy toward regular use. Thematic analysis revealed three main themes: appreciation of efficiency but skepticism about accuracy, preference for human empathy and contextual understanding, and concerns about algorithmic bias, particularly for non-standard accents. Students strongly favored a hybrid approach, endorsing AI for preliminary feedback and routine practice while valuing human evaluation for comprehensive assessment and motivational support. These findings suggest the need for transparent, inclusive AI tools integrated with human oversight to achieve balanced, pedagogically sound evaluation frameworks in English language education.

Copyright © 2025






Journal Info

Abbrev

alishlah

Subject

Education; Language, Linguistics, Communication & Media; Mathematics; Other

Description

This journal focuses on advancing scholarly research and critical discourse in the field of education. It publishes original research articles that address contemporary issues and emerging trends in curriculum development, instructional practices, learning processes, educational policy, and teacher ...