The rapid growth of studies on ChatGPT acceptance within the broader context of AI in education (AIEd) has provided valuable insights into how participants across settings perceive and use this tool for teaching and learning. This study replicates earlier investigations of AI acceptance but narrows the focus to a specific task: editing and proofreading. It also expands the inquiry to address ethical concerns and overreliance, two recurring themes in AIEd research. A modified extended Technology Acceptance Model (TAM) questionnaire covering seven aspects was distributed to 71 first-year EFL university students enrolled in a writing course that permitted ChatGPT only for editing and proofreading, under clear restrictions. Group interviews were also conducted. Quantitative data were analyzed using descriptive statistics; qualitative data were examined thematically. The findings reveal a consistent three-step use of ChatGPT: prompting, pasting the manuscript, and reviewing. Students treated AI output as a draft for enhancement, not as final work. Variation emerged in how much students revised AI-suggested edits, indicating differing levels of reliance. The study confirms that perceived usefulness and perceived ease of use contribute to students' attitudes and intentions, moderated by self-image and subjective norms. While long-term dependency remains unclear, students appeared cautious when boundaries were set. The study suggests that when lecturers provide clear guidelines, students tend to view ChatGPT as a learning aid and show awareness of academic integrity and authorship. The findings underline the need for well-defined institutional policies on AI use in writing instruction, while acknowledging the study's contextual limitations and the need for further research.