Exploiting Vulnerabilities of Machine Learning Models on Medical Text via Generative Adversarial Attacks
Akmal Shahib, Maulana; Basuki, Setio; Aulia Arif, Wardhana
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control Vol. 10, No. 3, August 2025
Publisher : Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v10i3.2280

Abstract

Significant developments in artificial intelligence (AI) technology have fueled its adoption across a range of fields. The use of AI, particularly machine learning (ML), has expanded significantly in the medical field due to its high diagnostic precision. However, ML models face a serious challenge in handling adversarial attacks. These attacks use perturbed (modified) data that is imperceptible to humans yet can significantly alter prediction results. This paper uses a medical text dataset containing descriptions of patients with lung diseases classified into eight categories. It aims to implement the TextFooler technique to deceive predictive models on medical text. The experiments show that three ML models built with popular approaches, i.e., a transformer-based model using Bidirectional Encoder Representations from Transformers (BERT), a stacking classifier combining three traditional machine learning models, and individual traditional algorithms, achieve the same classification accuracy of 99.98%. Under attack, BERT proves to be the weakest model, with an attack success rate of 76.8%, followed by the traditional machine learning methods and the stacking classifier, with success rates of 28.73% and 5.21%, respectively. This implies that although BERT delivers strong classification performance, it is highly vulnerable to adversarial attacks. Therefore, there is an urgent need to develop predictive models that are robust and secure against such attacks.
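
For readers unfamiliar with the attack the abstract describes, the sketch below shows how a TextFooler-style attack can be mounted against a fine-tuned BERT classifier using the open-source TextAttack library, which implements the TextFooler recipe as TextFoolerJin2019. This is a minimal illustration, not the paper's actual pipeline: the checkpoint name, the eight-label setup, and the sample texts are placeholders standing in for the medical text dataset used in the study.

```python
# Minimal sketch: TextFooler attack on a BERT text classifier via TextAttack.
# Placeholder model/dataset; the paper's actual artifacts are not public here.
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import Dataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Load a BERT classifier head with 8 labels (matching the paper's
# eight lung-disease categories); "bert-base-uncased" is a stand-in
# for the fine-tuned checkpoint.
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=8
)
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Toy stand-in for the medical text dataset: (text, label) pairs.
dataset = Dataset([
    ("Patient presents with chronic cough and wheezing on exertion.", 3),
    ("Imaging reveals a solitary pulmonary nodule in the right lobe.", 5),
])

# Build the TextFooler recipe (synonym substitution constrained by
# embedding similarity and part-of-speech checks) and run the attack.
attack = TextFoolerJin2019.build(model_wrapper)
attacker = Attacker(attack, dataset, AttackArgs(num_examples=2))
results = attacker.attack_dataset()  # prints per-example attack outcomes
```

The attack success rate reported in the abstract corresponds to the fraction of correctly classified inputs for which the attacker finds a perturbation that flips the model's prediction, which TextAttack summarizes after attack_dataset() completes.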