This study assesses the strengths and limitations of automated scoring systems and traditional teacher scoring of writing and speaking tasks among English Language Learners at four secondary schools in Ankpa Local Government Area, Kogi State, Nigeria. The study employed a mixed-methods approach: quantitative tests measured scoring accuracy, reliability, and consistency, while qualitative methods gathered data on feedback quality and learner perception. The results reveal that automated systems excel in technical accuracy and consistency, achieving high reliability on grammar and syntax tasks (Cronbach's alpha = 0.94), but perform poorly when assessing higher-order constructs such as creativity and coherence (r = 0.52), which remain a strength of teacher assessment. Teacher feedback offered depth and a personal touch but was prone to subjectivity and demanded considerable time. The study therefore calls for a hybrid assessment model that combines the scalability of automated systems with the nuanced, motivational feedback of teachers. Recommendations include professional training for educators, ethical policies to guide the implementation of these technologies in schools, and algorithm-improvement guidelines for developers. This paper contributes to the discourse on equitable and effective assessment practices, emphasizing the need to balance technological innovation with human expertise in education.