Machine translation evaluation ensures that systems meet both linguistic and functional standards, providing end-users with reliable tools. This study examines Abadis Translator, a domestic machine translation system, against the syntactic, semantic, and pragmatic criteria of Wilss' (1982) matrix. An 80-statement translation test was developed, comprising 60 instrumental, descriptive, and argumentative statements and 20 idiomatic statements designed to probe pragmatic accuracy. The statements were translated with Abadis Translator and assessed against Wilss' criteria using a Likert-scale questionnaire with options including incorrect, inappropriate, undesirable, correct, and appropriate. ChatGPT was then used to post-edit the translations rated incorrect or inappropriate in order to enhance their quality, which also made it possible to assess the reliability of both Abadis Translator and ChatGPT as an Artificial Intelligence (AI) editor. The results showed that Abadis Translator achieved strong grammatical accuracy, although it occasionally produced subject-verb agreement errors in complex sentences. It attained moderate success in semantic translation but struggled with pragmatic nuances, particularly idiomatic expressions and participle constructions. Post-editing with ChatGPT nevertheless significantly improved overall translation quality by correcting grammatical errors and clarifying implied meanings.