Text classification plays a crucial role in natural language processing, and enhancing its performance is an ongoing area of research. This study investigates the impact of integrating attention mechanisms into recurrent neural network (RNN)-based architectures, including the vanilla RNN, LSTM, GRU, and their bidirectional variants (BiLSTM and BiGRU), for text sentiment analysis. Three attention mechanisms, namely Multihead Attention, Self Attention, and Adaptive Attention, are applied to evaluate their effectiveness in improving model accuracy. The results reveal that attention mechanisms significantly enhance performance by enabling models to focus on the most relevant parts of the input text. Among the tested configurations, the LSTM model with Multihead Attention achieved the highest accuracy, 68.34%. The findings underscore the critical role of attention mechanisms in overcoming traditional RNN limitations, such as difficulty in capturing long-term dependencies, and highlight their potential for application in broader text classification tasks.
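To make the best-performing configuration concrete, the following is a minimal illustrative sketch (not the authors' reported implementation) of an LSTM encoder followed by multi-head attention for sentence-level sentiment classification, written in PyTorch; all hyperparameters (embedding size, hidden size, number of heads, number of classes) are assumptions for illustration only.

```python
# Illustrative sketch only: LSTM encoder + multi-head attention classifier.
# Hyperparameters are assumed, not taken from the paper.
import torch
import torch.nn as nn

class LSTMWithMultiheadAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_heads=4, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Multi-head attention lets the classifier weight the most relevant
        # time steps of the LSTM output sequence before pooling.
        self.attention = nn.MultiheadAttention(hidden_dim, num_heads,
                                               batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(x)                 # (batch, seq_len, hidden_dim)
        attended, _ = self.attention(outputs, outputs, outputs)
        pooled = attended.mean(dim=1)             # average over time steps
        return self.classifier(pooled)            # (batch, num_classes)

# Example usage with dummy token ids: a batch of 8 sequences of length 32.
model = LSTMWithMultiheadAttention(vocab_size=10000)
logits = model(torch.randint(1, 10000, (8, 32)))
```

The same encoder could be swapped for a GRU, BiLSTM, or BiGRU, and the attention layer replaced with a self- or adaptive-attention variant, to reproduce the other configurations compared in the study.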