Interpretable Deep Learning for Industrial Fault Detection
Syarif, Ahmet Yılmaz; Demir, Elif; Kaya, Mehmet
International Journal of Smart Systems Vol. 1 No. 2 (2023): May
Publisher : Etunas

DOI: 10.63876/ijss.v1i2.74

Abstract

The integration of deep learning into industrial fault detection systems has significantly enhanced predictive accuracy and operational efficiency. However, the lack of model interpretability remains a critical barrier to widespread adoption in safety-critical environments. This study proposes an interpretable deep learning framework that combines Convolutional Neural Networks (CNNs) with attention mechanisms and Layer-wise Relevance Propagation (LRP) to enable transparent fault diagnosis in complex machinery. Using a benchmark dataset from a rotating machinery system, the model achieves high classification performance while providing intuitive visual and quantitative explanations for its predictions. The attention module highlights critical temporal and spatial features, while LRP decomposes prediction scores to reveal feature-level contributions. Experimental results demonstrate that the proposed model not only maintains high accuracy (above 95%) but also delivers interpretable outputs that align with domain-expert reasoning. In addition, the model supports root cause analysis and fosters trust in automated systems, which is essential for industrial stakeholders. This research bridges the gap between black-box deep learning models and real-world industrial applications by promoting transparency, accountability, and actionable insights. The proposed framework is a practical step toward deploying explainable AI in industrial settings, supporting both real-time monitoring and decision-making processes.
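
The abstract does not include code, so the following PyTorch sketch is only a rough illustration of the kind of architecture it describes: a 1-D CNN over sensor signals with a temporal attention head whose weights can be inspected as an explanation. The layer sizes, four-class label space, and 1,024-sample window length are assumptions for illustration, and the authors' LRP decomposition is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): 1-D CNN + temporal attention
# for multi-class fault classification on vibration-style signals.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveFaultCNN(nn.Module):
    def __init__(self, n_channels: int = 1, n_classes: int = 4):
        super().__init__()
        # Two convolutional blocks extract local temporal features.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # One attention score per time step; the softmax-normalized weights
        # indicate which temporal regions the classifier relies on.
        self.attn = nn.Conv1d(32, 1, kernel_size=1)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.features(x)                     # (batch, 32, time / 4)
        w = F.softmax(self.attn(h), dim=-1)      # (batch, 1, time / 4)
        pooled = (h * w).sum(dim=-1)             # attention-weighted pooling
        return self.classifier(pooled), w        # class logits + attention map


model = AttentiveFaultCNN()
signal = torch.randn(8, 1, 1024)                 # 8 synthetic vibration windows
logits, attention = model(signal)
print(logits.shape, attention.shape)             # (8, 4) and (8, 1, 256)
```

In such a setup, the returned attention map can be plotted over the raw signal as a first, coarse explanation, while a relevance method such as LRP would additionally attribute each prediction back to individual input samples.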