The advancement of technology has accelerated the adoption of Computerized Adaptive Testing (CAT) in educational assessment because CAT dynamically adjusts item difficulty to each examinee, producing more precise, efficient, and valid measurements than conventional fixed-form tests. While Item Response Theory (IRT) serves as the primary psychometric foundation of CAT, traditional IRT implementations face a computational challenge: ability estimation requires lengthy iterative procedures, which reduces system responsiveness. To address this issue, Artificial Intelligence (AI), particularly Fuzzy Logic, offers a promising solution through rapid inference mechanisms and monotonic reasoning that adaptively map students’ cognitive abilities to corresponding item difficulty levels. This study aims to develop a hybrid CAT system that integrates Fuzzy Logic for fast inference with IRT as a robust and valid psychometric framework in the context of science learning. The research employs a systematic literature review following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework, encompassing the Identification, Screening, and Inclusion stages. The findings indicate that integrating AI/ML with IRT in CAT consistently enhances assessment accuracy and efficiency. Algorithms such as Maximum Information (MI) and Expected a Posteriori (EAP) effectively reduce test length without compromising reliability, while Fast Adaptive Cognitive Diagnosis (FACD) improves early-stage ability prediction. Furthermore, Fuzzy Logic proves effective for selecting adaptive test items aligned with students’ ability levels. The study concludes that CAT systems built on AI and IRT yield adaptive, efficient, and diagnostic evaluation mechanisms that support personalized science learning.
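To illustrate the kind of hybrid mechanism the abstract describes, the sketch below combines a minimal Mamdani-style fuzzy rule base (low/medium/high ability mapped to easy/moderate/hard items) with 2PL IRT Fisher information for tie-breaking, in the spirit of Maximum Information selection. This is an illustrative toy, not the system or rule base from the reviewed studies; the membership functions, centroid values, and item-bank format are assumptions chosen for clarity.

```python
import math

def tri(x, a, b, c):
    """Triangular membership function on the interval [a, c] peaking at b."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(min(left, right), 0.0)

def fuzzy_target_difficulty(theta):
    """Map an ability estimate theta (roughly in [-3, 3]) to a target item
    difficulty b using three overlapping fuzzy sets and centroid defuzzification.
    Rule base (illustrative): low ability -> easy, medium -> moderate, high -> hard."""
    mu = {
        "low":    tri(theta, -4.0, -3.0, 0.0),
        "medium": tri(theta, -2.0,  0.0, 2.0),
        "high":   tri(theta,  0.0,  3.0, 4.0),
    }
    centers = {"low": -2.0, "medium": 0.0, "high": 2.0}  # consequent centroids
    den = sum(mu.values())
    return sum(mu[k] * centers[k] for k in mu) / den if den > 0 else 0.0

def item_information(theta, a, b):
    """Fisher information of a 2PL item (discrimination a, difficulty b) at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, item_bank, administered):
    """Pick the unadministered item whose difficulty is closest to the fuzzy
    target, breaking ties by maximum Fisher information (MI-style)."""
    target = fuzzy_target_difficulty(theta)
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return min(candidates,
               key=lambda i: (abs(item_bank[i][1] - target),
                              -item_information(theta, *item_bank[i])))
```

The fuzzy front end replaces the iterative provisional-ability search with a single table-free inference step, which is the responsiveness gain the abstract attributes to Fuzzy Logic, while the IRT information term preserves the psychometric grounding.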