Jurnal Sistem Cerdas
Vol. 8 No. 2 (2025): August

Analysis of Defense Mechanisms Against FGSM Adversarial Attacks on ResNet Deep Learning Models Using the CIFAR-10 Dataset

Miranti Jatnika Riski
Krishna Aurelio Noviandri
Yoga Hanggara
Nugraha Priya Utama
Ayu Purwarianti



Article Info

Publish Date
31 Aug 2025

Abstract

Adversarial attacks threaten the reliability of deep learning models in image classification, requiring effective defense mechanisms. This study evaluates how defensive distillation and adversarial training protect a ResNet18 model trained on CIFAR-10 against Fast Gradient Sign Method (FGSM) attacks. The baseline model achieves 85.01% accuracy on clean data, but its accuracy falls to 19.23% under FGSM attacks at epsilon 0.3. The defensive-distillation model's accuracy drops to 23.68% when epsilon reaches 0.3, while adversarial training maintains 0.34% accuracy at epsilon 0.25, although it reduces clean-data accuracy to 57.08%. The analysis shows that classes with similar visual characteristics, such as cats and dogs, remain vulnerable to attacks. The study demonstrates the need for balanced defense approaches and indicates that further work is required to improve model robustness.
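For readers unfamiliar with the attack and defense evaluated above, the following is a minimal PyTorch sketch of an FGSM perturbation and an adversarial training step. The function names (fgsm_attack, adversarial_training_step), the assumption that inputs lie in [0, 1], the 50/50 weighting of clean and adversarial loss, and the default epsilon of 0.25 are illustrative assumptions, not the authors' exact experimental setup.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    # FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    x_adv = images + epsilon * images.grad.sign()
    # Assumes pixel values are normalized to [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.25):
    # One adversarial training step: train on a mix of clean and FGSM examples.
    model.train()
    x_adv = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(images), labels) \
         + 0.5 * F.cross_entropy(model(x_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, evaluating robustness amounts to measuring accuracy on fgsm_attack outputs at increasing epsilon values, which mirrors the epsilon sweep (0.25, 0.3) reported in the abstract.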

Copyright © 2025






Journal Info

Abbrev

jsc

Publisher

Asosiasi Prakarsa Indonesia Cerdas (APIC)
Subject

Automotive Engineering, Computer Science & IT, Control & Systems Engineering, Education, Electrical & Electronics Engineering

Description

Jurnal Sistem Cerdas (eISSN: 2622-8254) is a publication medium for research results that support the research and development of cities, villages, sectors, and other systems. The journal is published by the Asosiasi Prakarsa Indonesia Cerdas (APIC) and is issued every four months ...