Stacked LSTM with Multi Head Attention Based Model for Intrusion Detection
Praveen, S Phani; Panguluri, Padmavathi; Sirisha, Uddagiri; Dewi, Deshinta Arrova; Kurniawan, Tri Basuki; Efrizoni, Lusiana
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher: Bright Publisher

DOI: 10.47738/jads.v7i1.764

Abstract

The rapid advancement of digital technologies, including the Internet of Things (IoT), cloud computing, and mobile communications, has intensified reliance on interconnected networks, thereby increasing exposure to diverse cyber threats. Intrusion Detection Systems (IDS) are essential for identifying and mitigating these threats; however, traditional signature-based and rule-based methods fail to detect unknown or complex attacks and often generate high false positive rates. Recent studies have explored machine learning (ML) and deep learning (DL) approaches for IDS development, yet many suffer from poor generalization, limited scalability, and an inability to capture both spatial and temporal dependencies in network traffic. To overcome these challenges, this study proposes a hybrid deep learning framework integrating Convolutional Neural Networks (CNN), Stacked Long Short-Term Memory (LSTM) networks, and a Multi-Head Self-Attention (MHSA) mechanism. CNN layers extract spatial features, stacked LSTM layers capture long-term temporal dependencies, and MHSA enhances focus on the most relevant time steps, improving accuracy and reducing false alarms. The proposed model was trained and evaluated on the UNSW-NB15 dataset, which represents modern attack vectors and realistic network behavior. Experimental results show that the model achieves state-of-the-art performance, attaining 99.99% accuracy and outperforming existing ML and DL-based intrusion detection systems in both precision and generalization capability.
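The CNN → stacked-LSTM → multi-head self-attention pipeline described in the abstract can be sketched as follows. This is a minimal illustrative PyTorch implementation, not the authors' published configuration: the layer sizes, kernel width, number of heads, and the 42-feature input (a common preprocessed width for UNSW-NB15 records) are all assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class HybridIDS(nn.Module):
    """Hypothetical sketch of a CNN + stacked-LSTM + multi-head
    self-attention intrusion-detection model; hyperparameters are
    illustrative assumptions, not taken from the paper."""

    def __init__(self, n_features=42, n_classes=2, hidden=64, heads=4):
        super().__init__()
        # CNN stage: extracts local (spatial) patterns across the feature vector
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Stacked LSTM: two layers capture longer-range temporal dependencies
        self.lstm = nn.LSTM(32, hidden, num_layers=2, batch_first=True)
        # Multi-head self-attention: re-weights the LSTM outputs so the most
        # relevant time steps dominate the pooled representation
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, n_features)
        x = self.conv(x.unsqueeze(1))          # (batch, 32, n_features // 2)
        x = x.transpose(1, 2)                  # (batch, seq, 32)
        x, _ = self.lstm(x)                    # (batch, seq, hidden)
        x, _ = self.attn(x, x, x)              # attend over time steps
        return self.fc(x.mean(dim=1))          # pool and classify

model = HybridIDS()
logits = model(torch.randn(8, 42))
print(tuple(logits.shape))  # (8, 2)
```

In this sketch the attention output is mean-pooled before the classifier; other reductions (e.g. taking the final time step) would fit the same architecture.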