The advancement of information and communication technology has significantly transformed the education landscape, particularly through e-learning systems that enable flexible, remote learning. However, a major challenge persists in evaluating students' essay-type answers, which are still predominantly assessed manually by lecturers. This manual process is time-consuming, subjective, and inefficient. To address this issue, this study develops an automated evaluation system for essay assessments using machine learning techniques. The research adopts a Research and Development (R&D) methodology comprising problem identification, data collection, system design, validation, testing, and refinement. The system was built as a web-based application and implements supervised machine learning algorithms, specifically Random Forest and Naive Bayes, trained on previously assessed student answers. The evaluation focused on the system's accuracy compared to manual grading. The results indicate that the Random Forest algorithm achieved the highest accuracy, 88%, with strong precision and recall scores, suggesting a high level of agreement with lecturers' assessments. A functionality test involving 25 students and 10 lecturers showed positive user responses regarding the system's ease of use, speed, and perceived usefulness in the learning process. This research demonstrates the potential of automated essay scoring to improve efficiency and objectivity in e-learning environments. While limitations remain, particularly in assessing complex logical reasoning and cultural context, the system lays a strong foundation for future integration with NLP models and Learning Management Systems (LMS). The findings contribute to the development of more modern, responsive, and scalable educational technologies.
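As a rough illustration of the supervised pipeline summarized above (training Random Forest and Naive Bayes classifiers on previously graded answers and comparing their predictions with lecturer grades), the following minimal sketch uses scikit-learn with TF-IDF text features. The dataset file name, column names, grade bands, and hyperparameters are assumptions for illustration and are not taken from the study itself.

```python
# Illustrative sketch only: train Random Forest and Naive Bayes graders on
# TF-IDF features of previously assessed answers and measure agreement with
# the lecturers' grades. File name, columns, and label scheme are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical dataset: one row per student answer, labeled with the
# lecturer's grade (e.g. discrete bands such as "low", "medium", "high").
data = pd.read_csv("graded_answers.csv")  # assumed columns: answer_text, grade
X_train, X_test, y_train, y_test = train_test_split(
    data["answer_text"], data["grade"], test_size=0.2, random_state=42
)

# Convert essay text into TF-IDF vectors so classical classifiers can use it.
vectorizer = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train both classifiers mentioned in the abstract and report how well each
# reproduces the manual grades on the held-out split.
for name, model in [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("Naive Bayes", MultinomialNB()),
]:
    model.fit(X_train_vec, y_train)
    preds = model.predict(X_test_vec)
    print(
        f"{name}: accuracy={accuracy_score(y_test, preds):.2f}, "
        f"precision={precision_score(y_test, preds, average='macro', zero_division=0):.2f}, "
        f"recall={recall_score(y_test, preds, average='macro', zero_division=0):.2f}"
    )
```

In this sketch the accuracy figure corresponds to how often the model's predicted grade matches the lecturer's grade on unseen answers, which is the sense in which the reported 88% agreement with manual grading can be read.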