Aqilla, Livia Naura
Unknown Affiliation

Published: 2 Documents
Best Word2vec Architecture in Sentiment Classification of Fuel Price Increase Using CNN-BiLSTM Aqilla, Livia Naura; Sibaroni, Yuliant; Prasetiyowati, Sri Suryani
Sinkron : jurnal dan penelitian teknik informatika Vol. 7 No. 3 (2023): Article Research Volume 7 Issue 3, July 2023
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v8i3.12639

Abstract

Fuel price increases have been enacted frequently in recent years, driven by volatile international prices. This study uses sentiment analysis to examine fuel price increases and their impact on public opinion. Sentiment analysis is a data-processing method that extracts information about an issue by recognizing emotions or opinions in existing texts. The methods used are the Word2vec Continuous Bag of Words (CBOW) and Skip-gram architectures. Testing uses different vector dimensions for each architecture, combined with a hybrid CNN-BiLSTM deep learning model, which performs well on sizable datasets for sentiment classification. The results show that the CBOW model with 300-dimensional vectors produced the best performance: 87% accuracy, 87% recall, 89% precision, and an 88% F1 score.
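The core of the CBOW architecture named in the abstract is simple: average the embeddings of the surrounding context words, then score every vocabulary word as the candidate center word. A minimal numpy sketch of that forward pass is below; the vocabulary size, random weights, and context IDs are illustrative assumptions, not the paper's setup (only the 300-dimension choice comes from the reported best model).

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 10, 300  # 300-dim vectors, as in the paper's best CBOW model

W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # input (context) embeddings
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # output (center-word) embeddings

def cbow_forward(context_ids):
    """Average the context embeddings, then score every vocabulary word."""
    h = W_in[context_ids].mean(axis=0)          # CBOW: mean of context vectors
    logits = W_out @ h                          # one score per candidate center word
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                      # softmax over the vocabulary

probs = cbow_forward([1, 3, 4, 6])              # hypothetical context window
assert probs.shape == (vocab_size,)
assert abs(probs.sum() - 1.0) < 1e-9
```

Skip-gram inverts this direction, predicting each context word from the center word; the study compares both at several vector dimensions before feeding the embeddings to the CNN-BiLSTM classifier.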
Examining Attention Mechanisms in Hybrid Deep Learning for Sentiment Analysis Across Text Lengths Aqilla, Livia Naura; Sibaroni, Yuliant
JURNAL INFOTEL Vol 17 No 3 (2025): August
Publisher : LPPM INSTITUT TEKNOLOGI TELKOM PURWOKERTO

DOI: 10.20895/infotel.v17i3.1396

Abstract

Sentiment analysis is a key task in natural language processing (NLP) with applications across many domains. This study examines the impact of self-attention and global-attention placement in CNN-BiLSTM and CNN-LSTM models, exploring their effectiveness when positioned before, after, or both before and after the BiLSTM/LSTM layer, particularly for texts of different lengths. Instead of applying attention mechanisms in a fixed position, this research explores the most suitable type and placement of attention to improve model understanding and adaptability across datasets with different text lengths. Experiments were conducted on the IMDB Movie Reviews dataset and the Twitter US Airline Sentiment dataset. The results show that for long texts, CNN-BiLSTM with self-attention both before and after the BiLSTM achieves an F1 score of 93.77% (+2.72%), while for short texts it reaches 82.70% (+2.24%). These findings highlight that optimal attention placement significantly improves sentiment classification accuracy. The study provides insights into designing more effective hybrid deep learning models and contributes to future research on multilingual and multi-domain sentiment analysis, where attention mechanisms can be adapted to different textual structures.
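The placement question the abstract studies, attention before the recurrent layer, after it, or on both sides, can be sketched with plain scaled dot-product self-attention in numpy. This is a hedged illustration of the three wirings only: the identity "recurrent" stand-in, the sequence length, and the feature dimension are assumptions, not the paper's CNN-BiLSTM configuration.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of shape (seq_len, dim)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per query token
    return weights @ x                              # re-weighted token representations

rng = np.random.default_rng(0)
seq = rng.normal(size=(8, 16))                      # 8 tokens, 16-dim features (illustrative)

# The study varies WHERE attention sits relative to the BiLSTM/LSTM; with a
# stand-in identity "recurrent" layer, the three placements compared look like:
recurrent = lambda x: x                             # placeholder for a BiLSTM/LSTM
before = recurrent(self_attention(seq))             # attention -> recurrent
after = self_attention(recurrent(seq))              # recurrent -> attention
both = self_attention(recurrent(self_attention(seq)))  # attention on both sides

assert before.shape == after.shape == both.shape == seq.shape
```

In the reported results, the "both sides" wiring around the BiLSTM gave the best F1 scores for long texts, which is what motivates treating placement as a tunable design choice rather than a fixed convention.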