The increasing use of automated evaluation systems in language assessment raises questions about their acceptance and perceived fairness relative to human evaluation. This study examines how English Education students perceive automated and human assessment of spoken English proficiency, focusing on the factors that shape acceptance and on preferences for hybrid models. A mixed-methods design was employed with 120 English Education students (80 female, 40 male) from Universitas Negeri Makassar. Quantitative data were collected using a 20-item Likert-scale questionnaire (Cronbach’s α = .87) covering six dimensions: Perceived Ease of Use, Perceived Usefulness, Attitude Toward Technology, Self-Efficacy, Behavioral Intention, and Personal Innovativeness. Qualitative data were gathered through semi-structured interviews exploring students’ experiences with and preferences regarding automated and human evaluation. Descriptive statistics indicated generally positive perceptions of automated evaluation, with the highest mean scores for “Automated feedback helps improve pronunciation and fluency” (M = 3.9, SD = 0.928) and “I enjoy playing with new technology in language acquisition” (M = 4.0, SD = 1.071). However, the lowest-rated item, “I plan to use automated evaluation frequently” (M = 2.7, SD = 1.071), reflected hesitancy toward regular use. Thematic analysis revealed three main themes: appreciation of efficiency coupled with skepticism about accuracy, preference for human empathy and contextual understanding, and concerns about algorithmic bias, particularly toward non-standard accents. Students strongly favored a hybrid approach, endorsing AI for preliminary feedback and routine practice while valuing human evaluation for comprehensive assessment and motivational support. These findings point to the need for transparent, inclusive AI tools integrated with human oversight to achieve balanced, pedagogically sound evaluation frameworks in English language education.