Hadinata, Faustine Ilone
Unknown Affiliation

Published: 1 document
Articles


A Comparative Analysis of Building Hidden Layer, Activation Function, and Optimizer on Neural Network Sentiment Analysis
Sanjaya, Samuel Ady; Kristiyanti, Dinar Ajeng; Irmawati, Irmawati; Hadinata, Faustine Ilone; Karaeng, Cristin Natalia
JOIV : International Journal on Informatics Visualization Vol 9, No 3 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.3.2906

Abstract

The increasing diversity of opinions on social media offers a rich source for sentiment analysis, especially on controversial issues such as the potential recession in Indonesia. This study examines social media sentiment using three deep learning methods: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN). The main objective is to configure key hyperparameters, including the number of hidden layers, the activation function, and the optimizer, to optimize performance. A dataset of 38,000 cleaned Twitter posts was used for this study. The preprocessing steps involve several techniques to prepare the data for analysis, including case folding to standardize text, punctuation removal to eliminate noise, stemming to reduce words to their root forms, and sentiment labeling with VADER and BERT to ensure accurate classification. Each deep learning model was trained with a range of configurations of activation functions, such as Sigmoid and Swish, and optimizers, such as Adam, to fine-tune performance. Among the models, the CNN configured with 15 hidden layers, a Sigmoid activation function, and the Adam optimizer outperformed the others, achieving the highest accuracy of 0.870 and a low loss of 0.316. The results highlight that while the number of hidden layers influences model performance, the choice of activation function and optimizer has a more significant impact on accuracy. These findings suggest that future work should prioritize the activation function and optimizer over the number of hidden layers when aiming to improve sentiment analysis performance in various contexts.
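
The sketch below is a minimal Keras/TensorFlow illustration of the best-reported configuration (a CNN with 15 hidden layers, Sigmoid activations, and the Adam optimizer), not the authors' code. The vocabulary size, embedding dimension, filter counts, dense-layer widths, the 5-convolutional/10-dense split, and the three-class output are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after preprocessing
EMBED_DIM = 128      # assumed embedding dimension

model = models.Sequential()
model.add(layers.Embedding(VOCAB_SIZE, EMBED_DIM))

# 15 hidden layers in total: 5 Conv1D blocks followed by 10 Dense layers,
# all using the Sigmoid activation reported as best-performing.
for _ in range(5):
    model.add(layers.Conv1D(64, kernel_size=3, padding="same", activation="sigmoid"))
model.add(layers.GlobalMaxPooling1D())
for _ in range(10):
    model.add(layers.Dense(64, activation="sigmoid"))

# Output layer; three sentiment classes (negative/neutral/positive) is an assumption.
model.add(layers.Dense(3, activation="softmax"))

# Adam optimizer, as reported for the best-performing CNN.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

The same skeleton can be reused for the LSTM and RNN variants by swapping the Conv1D blocks for recurrent layers, which is how the hidden-layer, activation, and optimizer comparisons described in the abstract would typically be set up.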