The integration of Artificial Intelligence (AI) into language assessment has transformed how learners receive feedback on their speaking and writing. Nevertheless, empirical evidence on the accuracy and pedagogical effectiveness of AI-generated feedback, particularly for advanced language skills, remains scarce. This study evaluates the effectiveness of deep-learning-based automated feedback systems in improving English learners' speaking and writing. It employed a mixed-methods design with 100 undergraduate English as a Foreign Language (EFL) students enrolled in an English for Academic Purposes course. Quantitative data were gathered from pre-test and post-test writing and speaking tasks scored with AI tools (Grammarly, ETS e-rater, and Google Automatic Speech Recognition), while qualitative data came from surveys and interviews capturing learners' perceptions. The findings show statistically significant improvements in grammatical accuracy, lexical diversity, coherence, fluency, pronunciation, and intelligibility following exposure to AI-generated feedback. However, discrepancies emerged between AI and human ratings of speech coherence and contextual relevance. The results indicate that AI-generated feedback is a valuable supplementary assessment tool, especially for form-focused linguistic features, but is limited in its ability to measure higher-order communicative competencies. The study underscores the importance of combining AI-driven feedback with human judgment to build a more comprehensive and pedagogically sound language assessment framework.
Copyright © 2026