This study compares the efficacy of artificial intelligence (AI)-generated feedback with traditional teacher-written feedback in improving the English as a Foreign Language (EFL) writing of university students. A quasi-experimental design was employed with two groups of learners (n = 64) who completed pre-test and post-test writing assignments. The control group received conventional written feedback from their teachers, while the experimental group received iterative, automated feedback from an AI tool during revision. All assignments were scored using an analytical rubric. Results indicated that both groups improved significantly; however, the AI-feedback group demonstrated substantially greater gains in overall writing quality, particularly in vocabulary, grammar, and textual organization. These outcomes suggest that AI-driven feedback enables more frequent and focused revision, promoting deeper student engagement with the writing process. The findings underscore the potential of AI tools to complement teacher guidance and strengthen formative assessment practices, with significant implications for feedback design and writing pedagogy in EFL contexts.