Jurianto
Universitas Airlangga

Published: 1 document

Optimizing Automated Essay Scoring with Lightweight Large Language Models and Validated Rubrics
Prayitno; Fahima Choirun Nabila; Mohammad Khambali; Afandi Nur Aziz Thohari; Karisma Trinanda Putra; Viqi Ardaniah; Jurianto
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 10 No 2 (2026): April - In progress
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v10i2.7012

Abstract

Manual grading of English as a Foreign Language (EFL) essays often leads to inconsistent scores among educators, even when rubrics are used. While traditional Automated Essay Scoring (AES) systems offer speed, they are often limited by high computational cost, reliance on extensive datasets, and an inability to capture holistic writing qualities such as creativity and humanistic expression. This study addresses these issues by introducing AESCORE, a novel, lightweight, and cost-effective AES framework. Our methodology centers on integrating validated rubric criteria (identified via VOSviewer analysis) with open-source Large Language Models (LLMs), emphasizing a human-centered approach. We evaluated AESCORE on 100 EFL essays using several prompting techniques, including few-shot and multi-trait specialization. The system achieved its most robust performance and highest scoring consistency (Quadratic Weighted Kappa, QWK = 0.6660) using the DeepSeek-R1 8B LLM with few-shot prompting. AESCORE makes a significant contribution by demonstrating that sophisticated, pedagogically aligned writing assessment and generative feedback can be achieved with accessible AI, offering a reliable alternative for improving productive writing skills in higher education.
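As a minimal illustration of the agreement metric reported above, Quadratic Weighted Kappa (QWK) can be computed with scikit-learn's `cohen_kappa_score` using quadratic weights. The scores below are invented example data, not the study's dataset, and this sketch is not the authors' implementation:

```python
# Hypothetical sketch: computing QWK between a human rater and an
# LLM-based grader on a small set of rubric scores (made-up data).
from sklearn.metrics import cohen_kappa_score

human_scores = [3, 4, 2, 5, 4, 3]   # example rubric scores from a human rater
model_scores = [3, 4, 3, 5, 4, 2]   # example scores from the automated grader

# weights="quadratic" penalizes large disagreements more than small ones,
# which is why QWK is standard for ordinal essay-score agreement.
qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```

QWK ranges from -1 to 1, with 1 indicating perfect agreement; a value such as the paper's 0.6660 indicates substantial agreement between model and human scores.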