Sentiment analysis is a key task in natural language processing (NLP) with applications across a wide range of domains. This study examines the impact of self-attention and global attention placement in CNN-BiLSTM and CNN-LSTM models, exploring their effectiveness when positioned before, after, or both before and after the BiLSTM/LSTM layer, particularly for texts of different lengths. Instead of applying attention mechanisms in a fixed position, this research identifies the most suitable type and placement of attention to improve model understanding and adaptability across datasets with different text lengths. Experiments were conducted on the IMDB Movie Reviews dataset and the Twitter US Airline Sentiment dataset. The results show that for long texts, CNN-BiLSTM with self-attention both before and after the BiLSTM achieves an F1 score of 93.77% (+2.72%), while for short texts it reaches 82.70% (+2.24%). These findings highlight that optimal attention placement significantly improves sentiment classification accuracy. The study provides insights into designing more effective hybrid deep learning models and contributes to future research on multilingual and multi-domain sentiment analysis, where attention mechanisms can be adapted to different textual structures.
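To make the attention-placement variants concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the vocabulary size, embedding dimension, channel counts, and the use of `nn.MultiheadAttention` for self-attention are all illustrative assumptions. It shows a CNN-BiLSTM classifier in which self-attention can be enabled before the BiLSTM, after it, or in both positions.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttn(nn.Module):
    """CNN-BiLSTM sentiment classifier with configurable self-attention placement."""
    def __init__(self, vocab_size=20000, emb_dim=128, conv_ch=64,
                 lstm_hidden=64, num_classes=2,
                 attn_before=True, attn_after=True):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 1D convolution extracts local n-gram features from the embeddings.
        self.conv = nn.Conv1d(emb_dim, conv_ch, kernel_size=3, padding=1)
        self.attn_before = (
            nn.MultiheadAttention(conv_ch, num_heads=4, batch_first=True)
            if attn_before else None
        )
        self.bilstm = nn.LSTM(conv_ch, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.attn_after = (
            nn.MultiheadAttention(2 * lstm_hidden, num_heads=4, batch_first=True)
            if attn_after else None
        )
        self.fc = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq, emb)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        if self.attn_before is not None:               # self-attention before the BiLSTM
            x, _ = self.attn_before(x, x, x)
        x, _ = self.bilstm(x)                          # (batch, seq, 2 * hidden)
        if self.attn_after is not None:                # self-attention after the BiLSTM
            x, _ = self.attn_after(x, x, x)
        return self.fc(x.mean(dim=1))                  # mean-pool over time, then classify

# Example: the "before and after" variant reported best for long texts.
model = CNNBiLSTMAttn(attn_before=True, attn_after=True)
logits = model(torch.randint(0, 20000, (8, 200)))      # 8 reviews of 200 tokens each
```

Setting `attn_before=False` or `attn_after=False` yields the single-placement variants compared in the study; replacing `bidirectional=True` with `False` gives the corresponding CNN-LSTM configurations.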