Hasriani Ganteng
Universitas Negeri Makassar

Published: 1 document
Articles


English Education Students' Perceptions of Automated vs Human Assessment in Spoken English Proficiency
Nur Aeni; Muhalim Muhalim; Hasriani Ganteng; Muhammad Tahir; Ahmad Talib
AL-ISHLAH: Jurnal Pendidikan Vol 17, No 3 (2025): SEPTEMBER 2025
Publisher: STAI Hubbulwathan Duri

DOI: 10.35445/alishlah.v17i3.7655

Abstract

The increasing use of automated evaluation systems in language assessment raises questions about their acceptance and perceived fairness compared to human evaluation. This study examines how English Education students perceive automated and human assessment of spoken English proficiency, focusing on factors influencing acceptance and preferences for hybrid models. A mixed-methods design was employed with 120 English Education students (80 female, 40 male) from Universitas Negeri Makassar. Quantitative data were collected using a 20-item Likert-scale questionnaire (Cronbach’s α = .87) covering six dimensions: Perceived Ease of Use, Perceived Usefulness, Attitude Toward Technology, Self-Efficacy, Behavioral Intention, and Personal Innovativeness. Qualitative data were gathered through semi-structured interviews exploring students’ experiences and preferences regarding automated and human evaluation. Descriptive statistics indicated generally positive perceptions of automated evaluation, with the highest mean scores for “Automated feedback helps improve pronunciation and fluency” (M = 3.9, SD = 0.928) and “I enjoy playing with new technology in language acquisition” (M = 4.0, SD = 1.071). However, the lowest mean score, for “I plan to use automated evaluation frequently” (M = 2.7, SD = 1.071), reflected hesitancy toward regular use. Thematic analysis revealed three main themes: appreciation of efficiency but skepticism about accuracy, preference for human empathy and contextual understanding, and concerns about algorithmic bias, particularly for non-standard accents. Students strongly favored a hybrid approach, endorsing AI for preliminary feedback and routine practice while valuing human evaluation for comprehensive assessment and motivational support. These findings suggest the need for transparent, inclusive AI tools integrated with human oversight to achieve balanced, pedagogically sound evaluation frameworks in English language education.
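For readers unfamiliar with the reliability figure reported above, the sketch below shows how Cronbach’s α and the item statistics (M, SD) are conventionally computed from a respondents-by-items matrix. The data here is randomly generated and the function name is ours; this is a minimal illustration of the standard formula, not the study’s dataset or analysis code.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total score)
    """
    k = scores.shape[1]                          # number of items (20 in the study)
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: synthetic 5-point Likert responses, 120 students x 20 items
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 20))

print(f"alpha = {cronbach_alpha(responses):.2f}")
# Per-item descriptives, as reported in the abstract (M, SD for one item):
print(f"item 1: M = {responses[:, 0].mean():.1f}, SD = {responses[:, 0].std(ddof=1):.3f}")

On real questionnaire data an α of .87, as reported here, indicates good internal consistency across the 20 items; the synthetic uniform responses above will yield an α near zero, since uncorrelated items share no common construct.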