This work explores the vulnerability of Convolutional Neural Networks (CNNs) to adversarial attacks, focusing in particular on the Fast Gradient Sign Method (FGSM). Adversarial attacks, which subtly perturb input images to deceive machine learning models, pose significant threats to the security and reliability of CNN-based systems. The research introduces an enhanced methodology for identifying and mitigating these threats by incorporating an anti-noise predictor that separates adversarial noise from the underlying images, thereby improving detection accuracy. The proposed method was evaluated against multiple adversarial attack strategies on the MNIST dataset and demonstrated superior detection performance compared to existing techniques. The study also incorporates Fourier domain-based noise accommodation, further enhancing robustness against attacks. These findings contribute to the development of more resilient CNN models capable of countering adversarial manipulations and underscore the importance of continuous adaptation and multi-layered defense strategies in securing machine learning systems.
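For context, the FGSM attack referenced in the abstract follows the standard formulation x_adv = x + ε · sign(∇_x L(x, y)). The sketch below is a generic illustration of that attack against a toy CNN in PyTorch; it is not the paper's anti-noise predictor or Fourier-based defense, and the model architecture, epsilon value, and function names are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Toy CNN for 28x28 grayscale inputs (MNIST-like), used only to illustrate FGSM."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(1))


def fgsm_attack(model, images, labels, epsilon: float = 0.1):
    """Standard FGSM: x_adv = x + epsilon * sign(grad_x L(x, y))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # Step in the direction of the gradient's sign, then clamp to the valid pixel range.
    adv_images = images + epsilon * grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = SmallCNN().eval()
    x = torch.rand(8, 1, 28, 28)        # stand-in batch of MNIST-like images
    y = torch.randint(0, 10, (8,))      # stand-in labels
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

Because the perturbation is bounded by ε in the L∞ norm, the adversarial image remains visually close to the original while potentially changing the classifier's prediction, which is the threat model the proposed detection method addresses.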