This study addresses a persistent gap in fostering autonomous speaking development among EFL learners in suburban, resource‑constrained environments, where access to high‑quality interactional practice and sustained self‑regulation support in digitally mediated settings is limited. The research aims to design, implement, and evaluate an AI–VR‑enabled Virtual Speaking Partner (VSP) that scaffolds independent speaking practice and self‑regulated learning in junior high school EFL programs. Adopting a research‑and‑development design informed by constructivist and self‑regulated learning theories, the study carried out a needs analysis, iterative prototyping, usability testing, and school‑based implementation. The intervention integrated the VSP into an extracurricular digital platform to deliver simulated, context‑rich dialogues with adaptive AI feedback and VR‑based situational immersion. Evaluation data consisted of speaking performance assessments, learner autonomy and self‑regulation scales, system logs, and qualitative learner feedback, analyzed using a mixed‑methods approach. Findings indicated notable gains in speaking fluency, interactional management, and confidence, alongside increased indicators of learner autonomy such as planning, monitoring, and strategic help‑seeking. Students and teachers also reported high levels of usability and feasibility. The study demonstrates that coupling AI‑driven feedback with VR‑mediated communicative scenarios can meaningfully enhance autonomous EFL speaking practice in suburban contexts. Implications include a practical development framework for school‑level implementation, guidance for integrating formative analytics into autonomy‑oriented pedagogy, and policy relevance for scaling immersive language technologies in under‑resourced educational ecosystems.

Keywords: Learner Autonomy; EFL Speaking; Virtual Speaking Partner; Artificial Intelligence; Virtual Reality.