The integration of generative Artificial Intelligence (AI) into English writing assessment has gained considerable attention for its potential to enhance the efficiency and accuracy of evaluation. However, while AI tools such as Grammarly and GPT-based platforms have been employed for basic writing assessment, their effectiveness in evaluating more complex aspects of writing remains underexplored. This study addresses that gap by examining the role of generative AI in automating English writing assessment, focusing on its benefits, challenges, and implications for both students and educators. The study employed a mixed-methods design, combining quantitative and qualitative data. Participants were 100 students and 10 educators from the Faculty of Sharia at UIN Raden Intan Lampung, all of whom had experience using AI-based writing tools. Data were collected through structured surveys and in-depth semi-structured interviews, and by comparing AI-generated writing assessments with human grading. The findings revealed that while AI tools are highly effective at evaluating grammar and structure, they struggle to assess higher-order writing skills such as content coherence and critical thinking. Both students and educators acknowledged the time-saving benefits of AI tools but also highlighted their limitations in understanding nuanced aspects of writing. The findings suggest that AI tools can complement traditional assessment methods but should not fully replace human judgment, especially in complex writing tasks. Future research should focus on enhancing AI algorithms to better assess depth of writing and on addressing ethical concerns regarding data privacy and bias in AI models.
Copyright © 2025