The Effectiveness and Challenges of Digital Learning Evaluation in Islamic Education: A Scopus-Based Systematic Literature Study
Rahma, Aulia Ainnur; Shohib, Muhammad Wildan
Proceeding ISETH (International Summit on Science, Technology, and Humanity) 2025
Publisher : Universitas Muhammadiyah Surakarta


Abstract

This study systematically examines the effectiveness and challenges of digital learning evaluation based on Scopus-indexed empirical articles, emphasizing the principles of quality evaluation (validity, reliability, and fairness) and their relevance to the context of Islamic Religious Education (PAI), which demands the achievement of cognitive, affective, and psychomotor competencies. The study analyzes how the effectiveness of digital learning evaluation is reported, the evaluation methods and instruments used, and the methodological and contextual challenges that affect assessment quality. The research uses a Systematic Literature Review (SLR) approach following the PRISMA framework to ensure transparency and replicability of study selection. Scopus serves as the single database, with the following inclusion criteria: open-access journal articles, written in English, within the subject areas of Social Sciences and Arts and Humanities, and presenting empirical evidence on the evaluation of digital or blended learning. Of the 52 articles at the identification stage, six met all the criteria and were analyzed through structured data extraction and narrative synthesis. The results show that the effectiveness of digital learning evaluation is generally reported through quantitative indicators such as pre-test to post-test score gains, competency achievement, N-Gain scores, project-based performance, and between-group comparisons of results. The dominant instruments include objective tests, rubric-based performance assessments, and high-reliability perception questionnaires. However, significant challenges remain, particularly concerning the validity and reliability of instruments, the heterogeneity of student characteristics, limitations of research design, academic integrity, and the dependence of evaluation results on context and infrastructure support.
In synthesis, digital evaluation tends to be effective in measuring the cognitive domain and some authentic performance, but risks under-representing the affective and psychomotor domains that are crucial in PAI unless it is supported by authentic assessments and triangulation of indicators. These findings confirm that digital learning evaluation should be understood as a multidimensional process integrating learning outcomes, authentic performance, and learning experiences, so that digital evaluation systems can become more valid, reliable, fair, and contextual.