Automated Essay Scoring (AES) is a computer-based system that combines artificial intelligence and natural language processing (NLP) to automatically score students' essays and provide feedback, offering convenience and efficiency for evaluators. This study aims to identify the most effective algorithmic models for accurate and reliable AES, particularly in the context of Islamic religious education assessment, and to examine their advantages and disadvantages in supporting objective and efficient learning evaluation. The study follows the Systematic Literature Review (SLR) approach under the PRISMA protocol: a total of 31 relevant articles published between 2020 and 2025, drawn from the Scopus and Springer databases, were analyzed to evaluate the use and effectiveness of algorithms in the development of AES systems. The results show that transformer-based models, specifically BERT, are the most effective algorithms in current AES implementations. BERT excels because of its ability to capture bidirectional context and semantic depth in text; such models produce accurate scores and can generate automated feedback that approaches the quality of human judgment. However, BERT requires large training datasets and substantial computing resources. Despite these demands, its application in Islamic education highlights the potential of AES to support more objective, consistent, and scalable assessment of students' essays.
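As a toy illustration only (not drawn from the reviewed studies), the general AES pipeline the abstract describes can be sketched as two steps: extract features from an essay, then map those features to a score. The feature names, weights, and score range below are invented for illustration; production AES systems such as the BERT-based models surveyed here instead learn text representations from large training corpora.

```python
# Toy AES sketch: surface-feature extraction + hand-weighted linear scoring.
# All feature names and weights are illustrative assumptions, not a real
# scoring model; transformer-based AES learns these mappings from data.

def extract_features(essay: str) -> dict:
    """Compute simple surface features of an essay."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        # total word count
        "length": len(words),
        # ratio of unique words to total words (vocabulary variety)
        "lexical_diversity": len(set(w.lower() for w in words)) / max(len(words), 1),
        # average words per sentence
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

def score_essay(essay: str) -> float:
    """Map features to a 0-100 score with hand-set illustrative weights."""
    f = extract_features(essay)
    raw = (0.1 * min(f["length"], 300)        # reward length, capped
           + 40 * f["lexical_diversity"]      # reward varied vocabulary
           + 0.5 * min(f["avg_sentence_length"], 25))
    return round(min(raw, 100.0), 1)

if __name__ == "__main__":
    sample = ("Automated scoring can help teachers evaluate essays quickly. "
              "It applies language processing to estimate writing quality.")
    print(score_essay(sample))
```

The contrast with BERT is the point of the sketch: here the features and weights are fixed by hand, whereas transformer models derive bidirectional contextual representations and learn the feature-to-score mapping from labeled essays.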