Fitriana, Aulia
Unknown Affiliation

Published: 1 Document
Articles

Hybrid Explainable AI (XAI) Framework for Detecting Adversarial Attacks in Cyber-Physical Systems
Taufik, Mohammad; Aziz, Mohammad Saddam; Fitriana, Aulia
JTIE: Journal of Technology Informatics and Engineering, Vol. 4 No. 1 (2025): April
Publisher: University of Science and Computer Technology

DOI: 10.51903/jtie.v4i1.295

Abstract

Cyber-Physical Systems (CPS) are increasingly deployed in critical infrastructure, yet they remain vulnerable to adversarial attacks that manipulate sensor data to mislead AI-based decision-making. These threats demand not only high-accuracy detection but also transparency in model reasoning. This study proposes a Hybrid Explainable AI (XAI) Framework that integrates Convolutional Neural Networks (CNNs), SHAP-based feature interpretation, and rule-based reasoning to detect adversarial inputs in CPS environments. The framework is tested on two simulation scenarios: industrial sensor networks and autonomous traffic sign recognition. On datasets of 10,000 samples (50% adversarial, generated via FGSM and PGD), the model achieved an accuracy of 97.25%, precision of 96.80%, recall of 95.90%, and an F1-score of 96.35%. SHAP visualizations effectively distinguished normal from adversarial inputs, and the added explainability module increased inference time by only 8.5% over the baseline CNN (from 18.5 ms to 20.1 ms), making the framework suitable for real-time CPS deployment. Compared with prior methods (e.g., CNN + Grad-CAM, Random Forest + LIME), the proposed hybrid framework demonstrates superior performance and interpretability. The novelty of this work lies in its tri-level integration of predictive accuracy, explainability, and rule-based logic within a single real-time detection system, an approach not previously applied to CPS adversarial defense. This research contributes toward trustworthy AI systems that are robust, explainable, and secure by design.
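
The abstract names FGSM as one of the two methods used to generate the adversarial half of the dataset. The following is a minimal PyTorch sketch of that generation step only; the epsilon value, the assumption of a pre-trained classifier, and inputs normalized to [0, 1] are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """FGSM sketch: x_adv = clamp(x + epsilon * sign(grad_x loss), 0, 1).

    Assumes `model` is a pre-trained classifier in eval mode and `x` is a
    batch of inputs normalized to [0, 1]; epsilon here is a placeholder.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input in the direction that most increases the loss,
    # then clamp back into the valid normalized range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

PGD, the paper's second attack, can be viewed as iterating this perturb-and-clamp step several times with a projection back into an epsilon-ball around the original input after each step.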