IAES International Journal of Artificial Intelligence (IJ-AI)
Vol 11, No 3: September 2022

Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising

Sandhya Aneja (Universiti Brunei Darussalam)
Nagender Aneja (Universiti Brunei Darussalam)
Pg Emeroylariffion Abas (Universiti Brunei Darussalam)
Abdul Ghani Naim (Universiti Brunei Darussalam)



Article Info

Publish Date
01 Sep 2022

Abstract

Despite substantial advances in network architecture performance, susceptibility to adversarial attacks makes deep learning challenging to deploy in safety-critical applications. This paper proposes a data-centric approach to addressing this problem. A nonlocal denoising method with different luminance values has been used to generate adversarial examples from the Modified National Institute of Standards and Technology database (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) data sets. Under perturbation, the method provided absolute accuracy improvements of up to 9.3% on the MNIST data set and 13% on the CIFAR-10 data set. Training on transformed images with higher luminance values increases the robustness of the classifier. We have shown that transfer learning is disadvantageous for adversarial machine learning. The results indicate that simple adversarial examples can improve resilience and make deep learning easier to apply in various applications.
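The abstract's core preprocessing step, nonlocal (means) denoising, replaces each pixel with a weighted average of pixels whose surrounding patches look similar, which smooths adversarial perturbations while preserving edges. A minimal NumPy sketch of the classic non-local means filter is below; the function name, parameters (`patch`, `window`, `h`), and the luminance-scaling line are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def nonlocal_means(img, patch=3, window=7, h=0.1):
    """Minimal non-local means denoising for a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted average of pixels in a search window;
    weights decay with the squared distance between the surrounding patches,
    controlled by the filtering strength h.
    """
    pad, win = patch // 2, window // 2
    padded = np.pad(img, pad + win, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + win, j + pad + win
            # Reference patch around the pixel being denoised.
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, values = [], []
            for di in range(-win, win + 1):
                for dj in range(-win, win + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    # Patch similarity -> exponential weight.
                    d2 = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            w = np.asarray(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

# Hypothetical use of the "different luminance values" idea from the abstract:
# rescale intensities before denoising, then train on the transformed images.
# brighter = np.clip(noisy * 1.2, 0.0, 1.0)
# transformed = nonlocal_means(brighter)
```

This brute-force version is O(H·W·window²·patch²) and is meant only to show the mechanics; production code would use an optimized implementation such as OpenCV's `cv2.fastNlMeansDenoising`.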

Copyrights © 2022






Journal Info

Abbrev

IJAI

Subject

Computer Science & IT Engineering

Description

IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all artificial intelligence areas and their applications in the following topics: neural networks; fuzzy logic; simulated biological evolution algorithms (like ...