This study aimed to (1) develop an AI-integrated Audio-Lingual Method (ALM) learning media to enhance students’ speaking self-efficacy and (2) evaluate its effectiveness in classroom settings. Employing a Research and Development (R&D) approach with the ADDIE model, the media integrated AI speech recognition and text-to-speech (TTS) features within ALM-based learning sequences—dialogue modelling, imitation, repetition drills, and guided practice—across five modules: Materials, Reading Practice, Listening Practice, Speaking Tools, and Chatbot Practice. A quasi-experimental implementation with undergraduate students measured self-efficacy using a 19-item questionnaire adapted from Darmawan et al. (2021) and grounded in Bandura’s self-efficacy theory. The instrument showed high reliability (Cronbach’s α = 0.930) and acceptable item validity (item–total correlations r > 0.30). Results revealed significant increases in all self-efficacy dimensions: Personal Ability Belief (3.596 → 4.387), Growth Through Effort (3.620 → 4.480), and Influencing Factors (3.583 → 4.361). The Wilcoxon Signed-Rank Test confirmed a significant improvement (p < .001). Overall, AI-assisted, repetition-based learning with instant feedback effectively enhanced learners’ confidence, persistence, and emotional regulation in English speaking.
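The paired pre/post comparison described above can be sketched as follows. This is a minimal illustration of the Wilcoxon Signed-Rank procedure using `scipy.stats.wilcoxon`; the score values are hypothetical placeholders, not the study’s actual data, and the study’s exact analysis settings (e.g. tie handling, exact vs. approximate p-value) are not reported in the abstract.

```python
# Hedged sketch of a Wilcoxon Signed-Rank Test on paired pre/post
# self-efficacy scores. The data below are hypothetical, NOT the
# study's measurements.
from scipy.stats import wilcoxon

pre  = [3.4, 3.6, 3.5, 3.7, 3.2, 3.8, 3.6, 3.5, 3.9, 3.3]  # hypothetical pre-test scores
post = [4.2, 4.5, 4.3, 4.6, 4.1, 4.7, 4.4, 4.3, 4.8, 4.2]  # hypothetical post-test scores

# wilcoxon() ranks the paired differences; a small p-value indicates
# a significant pre-to-post change.
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")
```

Note that even when statistical software prints “p = 0.000”, the convention is to report p < .001, since an exact zero is a rounding artifact.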
Copyright © 2026