This study investigates how the SPEAK-BOT framework shapes group dynamics and collaborative interaction quality in EFL speaking pedagogy, addressing the underexplored social dimension of language learning. Specifically, it examines how chatbot-generated prompts, when embedded in a pedagogical framework, influence turn-taking, elaboration, responsiveness, and cohesion during group discussions. A qualitative research design was employed with 20 third-semester English education students at Universitas Teuku Umar, Indonesia. The students were organized into four small groups and engaged in structured speaking tasks that required consulting chatbot prompts as discussion starters. The instruments used in this research were an audio recorder for data collection, a thematic coding framework for discourse analysis, and a rubric-based scoring sheet to evaluate participants’ performance. The data were analyzed using discourse analysis and rubric-based scoring supported by descriptive statistics. The findings revealed clear variation across groups. One group achieved very high interaction quality, marked by equal participation, deep elaboration, and strong cohesion. Two groups performed moderately, each showing strengths in some dimensions but gaps in others. One group demonstrated weak collaboration, relying heavily on chatbot output and producing fragmented discussions. The results suggest that the SPEAK-BOT framework has the potential to foster richer collaboration when learners engage with AI critically, but risks weakening interaction when prompts are adopted passively. The study contributes by reframing AI not as a substitute for peer dialogue but as a pedagogical mediator that can strengthen collaborative speaking pedagogy.