Automated grammatical error detection has advanced rapidly for high-resource languages, yet comparable technologies for Bahasa Indonesia remain scarce, particularly in secondary school settings. Although spelling, morphology, syntax, and diction errors are common in the writing of Indonesian senior high school students, AI-assisted feedback systems designed specifically for Indonesian writing are still in their infancy. This study examines the use of IndoBERT-base for grammatical error analysis in 82 senior high school student essays totaling 10,911 words. Manual annotation by two expert raters identified 1,872 grammatical errors across four categories. Before analysis with a fine-tuned IndoBERT-base model, the essays underwent pre-processing, including tokenization, normalization, and alignment with the gold-standard annotations. Model performance was assessed using accuracy, precision, recall, and F1-score, computed by comparing predicted labels with teacher-validated error tags. The model correctly identified 1,594 errors, a detection rate of 85.1%, and showed good agreement (80%) with the human raters. Detection was strongest for spelling and morphology errors, whereas syntax and diction showed lower accuracy owing to their contextual and semantic complexity. These results suggest that transformer-based models can effectively support automated grammatical analysis of Indonesian student writing. Nonetheless, limitations in handling discourse-level dependencies underscore the continued importance of human assessment. The study advances the development of AI-assisted grammar tools for Indonesian education and supports the integration of hybrid human–AI feedback systems to improve classroom writing instruction.
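The evaluation described above compares per-token predicted labels against teacher-validated gold tags. A minimal sketch of that metric computation follows; the token labels here are invented for illustration (they are not the study's data), with the four error categories from the abstract plus "O" for error-free tokens, and micro-averaged scores over the error tags.

```python
# Hypothetical gold-standard and predicted error tags for a few tokens.
# Categories mirror the study's four: spelling, morphology, syntax, diction;
# "O" marks tokens with no error. Labels are illustrative, not the paper's data.
gold = ["O", "spelling", "O", "morphology", "syntax", "O", "diction", "spelling"]
pred = ["O", "spelling", "O", "morphology", "O",      "O", "diction", "morphology"]

def evaluate(gold, pred):
    """Accuracy over all tokens; micro-averaged precision/recall/F1 over error tags."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != "O")   # correct error tag
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and p != g)   # wrong/spurious tag
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and p != g)   # missed/mislabeled error
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return accuracy, precision, recall, f1

accuracy, precision, recall, f1 = evaluate(gold, pred)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

In practice a library such as scikit-learn or seqeval would typically be used for these metrics; the hand-rolled version above only makes the definitions explicit.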