Adversarial attacks pose a substantial threat to the security and reliability of machine learning (ML) models deployed in wireless sensor networks (WSNs). This study addresses this challenge by evaluating the effectiveness of several defensive mechanisms in mitigating evasion attacks, which aim to mislead ML models into misclassification. We use the Edge-IIoTset dataset, a comprehensive cybersecurity dataset designed specifically for IoT and IIoT applications, to train and evaluate our models. Our results show that combining adversarial training, robust optimization, and feature transformations substantially improves the resistance of ML models to evasion attacks; in particular, our defended model achieves a 12% accuracy improvement over baseline models. We also examine the potential of combining generative adversarial networks (GANs), random forest ensembles, and hybrid techniques to further strengthen model resilience against a broader spectrum of adversarial attacks. This study underscores the need for proactive defenses when deploying machine learning systems in real-world WSN environments and highlights the importance of continued research and development in this rapidly evolving area.
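To make the core defense concrete, the sketch below shows one common form of adversarial training against evasion attacks: perturbing inputs with the fast gradient sign method (FGSM) and mixing them into the training loss. It is a minimal illustration only; the network architecture, perturbation budget `eps`, and data loader are assumptions and do not reflect the exact configuration used in this study.

```python
# Minimal sketch of FGSM-based adversarial training for a tabular classifier
# (e.g., Edge-IIoTset-style features). All hyperparameters are illustrative.
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Small feed-forward classifier; layer sizes are placeholders."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def fgsm_perturb(model, x, y, eps, loss_fn):
    """Craft an evasion example: step in the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


def adversarial_training_epoch(model, loader, optimizer, eps=0.05):
    """One epoch that weights clean and FGSM-perturbed batches equally."""
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, eps, loss_fn)
        optimizer.zero_grad()
        loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

In this formulation, the model sees both clean and perturbed samples each step, which is what drives the robustness gains that adversarial training is reported to provide against evasion attacks.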