Pervin, Mst. Tasnim
Unknown Affiliation

Published: 1 document

Articles

Adversarial attack driven data augmentation for medical images Pervin, Mst. Tasnim; Tao, Linmi; Huq, Aminul
International Journal of Electrical and Computer Engineering (IJECE) Vol 13, No 6: December 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i6.pp6285-6292

Abstract

An important stage in medical image analysis is segmentation, which focuses attention on the required region of an image and speeds up findings. Deep learning models, with their high-performing capabilities, have made this process simpler. However, the reliance of deep learning models on vast amounts of data makes them difficult to apply to medical image analysis, where data samples are scarce. So far, a number of data augmentation techniques have been employed to address the issue of data unavailability. Here, we present a novel augmentation method that enabled a UNet model to segment the input dataset with about 90% accuracy in just 30 epochs. We describe the usage of the fast gradient sign method (FGSM), an adversarial machine learning attack method, as an augmentation tool. In addition, we have developed an Inverse FGSM method, which improves performance by operating in the opposite direction from FGSM adversarial attacks. In comparison to the conventional FGSM methodology, our strategy boosted performance by 6% to 7% on average. These two strategies also made the model more resilient to adversarial attacks. The overall analysis of this study reveals an innovative application of adversarial machine learning to augmentation and resilience.
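As an illustrative sketch only (not the authors' published implementation), the following PyTorch snippet shows how an FGSM-style perturbation, and an "inverse" variant that steps in the opposite direction, could be generated from a segmentation model and used as augmented training samples. The function name, the epsilon value, and the use of cross-entropy loss are assumptions for illustration.

```python
# Hedged sketch: FGSM-style and inverse-FGSM-style perturbations as augmentation,
# assuming a PyTorch segmentation model (e.g., a UNet) with per-pixel class logits.
import torch
import torch.nn.functional as F

def fgsm_augment(model, images, masks, epsilon=0.03, inverse=False):
    """Return perturbed copies of `images` for use as augmented samples.

    Standard FGSM adds epsilon * sign(grad) to the input (adversarial direction).
    The 'inverse' variant is assumed here to subtract the signed gradient,
    i.e., to step in the opposite direction from the adversarial attack.
    """
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)                     # (N, C, H, W) per-pixel logits
    loss = F.cross_entropy(logits, masks)      # masks: (N, H, W) class indices
    loss.backward()                            # gradient w.r.t. the input images

    step = epsilon * images.grad.sign()
    perturbed = images - step if inverse else images + step
    return perturbed.clamp(0, 1).detach()      # keep pixels in a valid range

# Hypothetical usage: generate both variants and append them to the training set.
# aug_fgsm = fgsm_augment(unet, batch_imgs, batch_masks)
# aug_inv  = fgsm_augment(unet, batch_imgs, batch_masks, inverse=True)
```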