This research aims to develop and evaluate an automated English paragraph scoring system using the Gemini API, aligned with the writing indicators of the Thai upper secondary curriculum, to address teacher workload and delayed feedback in writing assessment. The system integrates the Gemini 2.5 Pro API with a prompt-engineering framework designed to simulate expert English as a Foreign Language (EFL) assessors. The study employs a sequential mixed-methods design. For the quantitative component, written work was collected from 160 upper secondary EFL students in Thailand, selected through cluster sampling; each student completed three expository paragraph assignments aligned with the Thai core curriculum. The paragraphs were assessed against a validated analytical scoring rubric comprising four criteria. Three evaluators scored the paragraphs independently, and their results were compared with the automated scores generated by the Gemini-based system. Inter-rater reliability among the human evaluators was first established using the Intraclass Correlation Coefficient (ICC), and agreement between human and AI scores was then measured using Quadratic Weighted Kappa (QWK). The results showed a high level of agreement between the Gemini-generated scores and the human evaluators (QWK = 0.82), indicating that the system can approximate human judgment in evaluating EFL writing. Qualitative analysis of the AI-generated feedback further revealed that the system could provide diagnostic recommendations on grammar, vocabulary, and sentence structure. These findings suggest that the system can help teachers reduce grading workload while providing timely, criteria-based feedback to support students' writing development.
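For readers unfamiliar with the agreement statistic used above, Quadratic Weighted Kappa penalizes disagreements between two raters in proportion to the squared distance between their scores. The following is a minimal pure-Python sketch of how such a computation could look; the function name, score range, and data are illustrative assumptions, not the authors' implementation (in practice a library such as scikit-learn's `cohen_kappa_score` with `weights="quadratic"` would typically be used):

```python
def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic Weighted Kappa between two lists of integer scores.

    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement.
    """
    n = max_rating - min_rating + 1
    # Observed score matrix: O[i][j] counts items rated i by A and j by B.
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating][b - min_rating] += 1
    total = len(rater_a)
    # Marginal histograms for each rater.
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted disagreement expected by chance
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)
            expected = hist_a[i] * hist_b[j] / total
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den


# Hypothetical example: human vs. AI scores on a 1-4 rubric scale.
human = [3, 2, 4, 3, 1, 2]
ai = [3, 2, 3, 3, 1, 2]
print(round(quadratic_weighted_kappa(human, ai, 1, 4), 2))
```

A QWK of 0.82, as reported in the study, is conventionally interpreted as near-perfect agreement.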