Cancer stands as the world’s second-leading cause of death, arising from abnormal cell growth that invades surrounding tissues. Simultaneous occurrences of lung and colon cancer are not uncommon, with lung cancer often emerging as the second primary cancer in colon cancer patients. While Deep Learning (DL) approaches have shown promise in accurate cancer classification, recent studies highlight the susceptibility of DL models to perturbations in input images. Accuracy alone is insufficient; models must also remain resilient to even the slightest perturbations, which motivates the application of adversarial defence methods. This study aims to enhance the reliability of Convolutional Neural Network (CNN) models in the face of adversarial attacks by implementing adversarial training. Leveraging the LC25000 dataset and various pre-trained CNN models for classification, we employ the Carlini and Wagner, DeepFool, and Saliency Map adversarial attack methods alongside adversarial training for defence. Evaluation metrics include precision, recall, F1-score, and accuracy. Our assessment scrutinises adversarial attacks and defences on histopathology images of the lungs and colon, representing a state-of-the-art endeavour. The results indicate a significant improvement in robustness to adversarial attacks on histopathological images of the lungs and colon, with accuracy under attack rising from 0% to 81%.
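As a minimal illustration of the attack-and-defence pipeline described above, the sketch below wraps a pre-trained CNN with the Adversarial Robustness Toolbox (ART) to craft Carlini and Wagner, DeepFool, and Saliency Map attacks and to run adversarial training. The backbone choice (ResNet-50), input size, class count (five LC25000 classes), and all hyperparameters are assumptions for illustration only, not the exact configuration used in this study.

```python
# Hedged sketch: adversarial attacks and adversarial training with ART.
# Dataset loading, model choice, and hyperparameters are assumed, not the
# authors' exact setup.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import CarliniL2Method, DeepFool, SaliencyMapMethod
from art.defences.trainer import AdversarialTrainer

# Assumed: an ImageNet pre-trained backbone fine-tuned for 5 LC25000 classes.
model = resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, 5)

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    input_shape=(3, 224, 224),  # assumed input resolution
    nb_classes=5,
)

# The three attack families named in the abstract.
attacks = [
    CarliniL2Method(classifier, max_iter=10),
    DeepFool(classifier, max_iter=50),
    SaliencyMapMethod(classifier, theta=0.1, gamma=0.1),
]

# Adversarial training: each epoch mixes clean and adversarial examples.
trainer = AdversarialTrainer(classifier, attacks=attacks, ratio=0.5)

# x_train, y_train, x_test are assumed NumPy arrays of histopathology
# images and one-hot labels (placeholders, not provided here):
# trainer.fit(x_train, y_train, nb_epochs=20, batch_size=32)

# Robustness evaluation: accuracy on adversarial examples per attack.
# x_adv = attacks[1].generate(x=x_test)
# preds = classifier.predict(x_adv).argmax(axis=1)
```

In this setup the `ratio` parameter controls the fraction of each training batch replaced by adversarial examples; evaluating the trained classifier on examples generated by each attack yields the robust accuracy reported as precision, recall, F1-score, and accuracy.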