Nisar, Maaz
Unknown Affiliation

Published: 1 Document

Incremental Development of a Framework for Mitigating Adversarial Attacks on CNN Models
Nisar, Maaz; Fayyaz, Nabeel; Ahmed, Muhammad Abdullah; Shams, Muhammad Usman; Fareed, Bushra
Scientific Journal of Engineering Research, Vol. 1 No. 4 (2025): October (Article in Process)
Publisher: PT. Teknologi Futuristik Indonesia

DOI: 10.64539/sjer.v1i4.2025.349

Abstract

This work explores the vulnerability of Convolutional Neural Networks (CNNs) to adversarial attacks, focusing in particular on the Fast Gradient Sign Method (FGSM). Adversarial attacks, which subtly perturb input images to deceive machine learning models, pose significant threats to the security and reliability of CNN-based systems. The research introduces an enhanced methodology for identifying and mitigating these threats by incorporating an anti-noise predictor that separates adversarial noise from the input images, thereby improving detection accuracy. The proposed method was evaluated against multiple adversarial attack strategies on the MNIST dataset and demonstrated superior detection performance compared with existing techniques. The study also integrates Fourier domain-based noise accommodation, further enhancing robustness against attacks. The findings contribute to the development of more resilient CNN models capable of countering adversarial manipulations, and underscore the importance of continuous adaptation and multi-layered defense strategies in securing machine learning systems.
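
Since the abstract centers on two concrete techniques, FGSM perturbation and Fourier domain-based noise handling, a brief sketch may help orient the reader. The PyTorch code below is a generic illustration, not the paper's implementation: the classifier, the epsilon value, and the low-pass cutoff are assumptions chosen for MNIST-sized (1x28x28) inputs.

```python
# Illustrative sketch only: a generic FGSM attack and a Fourier low-pass
# filter of the kind a "noise accommodation" stage might use. The model,
# epsilon, and cutoff radius are assumptions, not values from the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.25):
    """Perturb images by epsilon * sign(d loss / d input): the FGSM step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

def fourier_lowpass(images, cutoff=10):
    """Suppress high-frequency content, where FGSM-style noise concentrates."""
    # 2-D FFT per image, with the zero frequency shifted to the center.
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    _, _, h, w = images.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # Circular low-pass mask around the spectrum center.
    mask = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2) <= cutoff ** 2
    spectrum = spectrum * mask.to(spectrum.device)
    filtered = torch.fft.ifft2(torch.fft.ifftshift(spectrum, dim=(-2, -1)))
    return filtered.real.clamp(0.0, 1.0)
```

In a defense pipeline of the kind the abstract describes, a filtering step like `fourier_lowpass` would sit in front of the classifier, while the anti-noise predictor flags inputs that carry adversarial noise; the paper's actual detector is not reproduced here.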