Industrial control systems (ICSs) are frequent targets of cyber-attacks, which can lead to undesirable consequences. Because ICSs often operate without direct human supervision, they are attractive to adversaries. In recent years, numerous deep learning-based solutions have demonstrated their effectiveness in detecting anomalies in ICSs; however, they generally lack the ability to pinpoint the sensors and actuators that contributed to an anomaly. In this work, we use kernel Shapley additive explanations (SHAP) to explain anomalies detected by a temporal convolutional autoencoder (TCAE). The proposed TCAE model handles long-term dependencies effectively and is computationally efficient on large datasets. For each identified attack, a comprehensive explanation is provided: SHAP values are extracted and visualized to show which sensors and actuators contributed to the anomaly, helping experts respond to the attack and building user trust.
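As a rough illustration of the pipeline described above, the sketch below shows how kernel SHAP can attribute a reconstruction-error anomaly score to individual sensor/actuator features. The TCAE architecture and dataset used in the paper are not reproduced here; `TinyTCAE`, the feature/window sizes, and the random data are hypothetical stand-ins, and the per-feature aggregation is one plausible way to obtain attack-level attributions, not necessarily the authors' exact procedure.

```python
import numpy as np
import shap
import torch
import torch.nn as nn

# Hypothetical stand-in for the paper's TCAE: a small 1D convolutional
# encoder/decoder trained to reconstruct windows of sensor/actuator readings.
class TinyTCAE(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        self.decoder = nn.Conv1d(hidden, n_features, kernel_size=3, padding=2, dilation=2)

    def forward(self, x):  # x: (batch, n_features, window)
        return self.decoder(self.encoder(x))

n_features, window = 10, 30  # assumed sizes for illustration only
model = TinyTCAE(n_features).eval()

def anomaly_score(flat_windows):
    """Per-sample reconstruction error; this scalar is what kernel SHAP explains."""
    x = torch.tensor(flat_windows, dtype=torch.float32).reshape(-1, n_features, window)
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2)).numpy()

# Background set of (nominally) normal windows and one window flagged as anomalous;
# random data here stands in for the real ICS measurements.
background = np.random.randn(50, n_features * window)
flagged = np.random.randn(1, n_features * window)

explainer = shap.KernelExplainer(anomaly_score, background)
shap_values = explainer.shap_values(flagged, nsamples=200)

# Sum attributions over the time axis to rank sensors/actuators by their
# contribution to the anomaly score for this flagged window.
per_feature = np.asarray(shap_values).reshape(n_features, window).sum(axis=1)
print(per_feature)
```

In this sketch the kernel explainer treats every time step of every feature as an input; summing the resulting attributions over the window yields one score per sensor or actuator, which can then be plotted to show which features drove a given detection.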