Artificial Intelligence (AI) is transforming science education through virtual labs, intelligent tutoring, and adaptive assessments; however, pre-service teachers often lack formal training in AI integration. This study aimed to validate the Artificial Intelligence Competence Self-Efficacy (AICS) instrument using the Rasch model, covering AI Knowledge (AIK), AI Pedagogy (AIP), AI Assessment (AIA), AI Ethics (AIE), Human-Centred Education (HCE), and Professional Engagement (PEN). A quantitative survey design was employed with 338 third-year pre-service science teachers selected through convenience sampling. Data were collected via Google Forms, with ethical considerations observed and back-translation used to ensure data integrity. Data were analyzed through reliability, separation, item fit statistics, unidimensionality, and Differential Item Functioning (DIF). The findings indicate that the AICS instrument is psychometrically sound, with high reliability (person reliability = 0.94; item reliability = 0.95) and excellent separation indices. The Wright Map showed that item difficulty was well aligned with participant ability, effectively capturing a range of AI self-efficacy levels. Item fit statistics confirmed that all items functioned within acceptable ranges, and unidimensionality analysis supported the measurement of a single, coherent construct. DIF analysis showed minimal gender bias, although one item (AIP1) favored males. Overall, the instrument is valid and reliable for assessing AI competence self-efficacy among pre-service science teachers.
Copyright © 2025