Septi Asriani, Aveny
Unknown Affiliation

Published: 1 document

Articles

HyRoBERTa: Hybrid Robustly Optimized BERT Approach Model for Sentiment and Sarcasm Detection in Post-Flood Social Media Analysis
Yuliyanti, Siti; Septi Asriani, Aveny; Purwayoga, Vega; Gusnadi, Zakwan
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 10 No 1 (2026): February 2026
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v10i1.6963

Abstract

Sarcasm detection is a crucial step in sentiment classification because it strengthens the validity and reliability of a model's interpretation of ambiguous text, especially in complex social contexts such as post-disaster public communication. Without it, a model is prone to significant classification errors. This study presents a hybrid approach to sentiment analysis with sarcasm detection after a flood disaster, combining the RoBERTa model with sequential deep learning architectures: GRU, LSTM, and BiLSTM. We used a dataset of 17,520 tweets that were pre-processed through cleaning, normalization, and tokenization; tweets classified as positive were then further examined to determine whether they were sarcastic. The models were trained using transformer-based transfer learning with a combination of hyperparameters: number of epochs, batch size, dropout rate, and learning rate. The experimental results show that the RoBERTa-GRU model achieved the highest accuracy for sentiment classification at 97.26%, whereas the RoBERTa-BiLSTM model excelled at sarcasm detection with an accuracy of 98.74%. RoBERTa-BiLSTM excels at sarcasm detection because its bidirectional sequential mechanism and longer-term memory let it exploit RoBERTa's rich embeddings to identify the contextual contradictions characteristic of sarcasm. RoBERTa-GRU, meanwhile, succeeds at sentiment classification because its more compact architecture is sufficient to infer the dominant sentiment from the robust representations RoBERTa provides, making the model more efficient for the less complex task.
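The two-stage decision flow described in the abstract (sentiment first, then a sarcasm check applied only to positive predictions) can be sketched as follows. This is a minimal illustration of the pipeline's control flow only; the classifier functions here are hypothetical stand-ins, not the paper's fine-tuned RoBERTa-GRU and RoBERTa-BiLSTM models.

```python
def two_stage_label(text, sentiment_clf, sarcasm_clf):
    """Stage 1: classify sentiment. Stage 2: for positive predictions
    only, run a sarcasm check, since sarcasm makes the literal positive
    polarity unreliable. Both classifiers are caller-supplied."""
    sentiment = sentiment_clf(text)  # e.g. "positive" / "negative"
    if sentiment == "positive" and sarcasm_clf(text):
        return "sarcastic-positive"  # flagged for re-interpretation
    return sentiment

# Toy keyword-based stand-ins, for illustration only (the paper uses
# transformer-based models trained on 17,520 tweets):
toy_sentiment = lambda t: "positive" if "great" in t.lower() else "negative"
toy_sarcasm = lambda t: "thanks a lot" in t.lower()

print(two_stage_label("Great, thanks a lot for the flood warning",
                      toy_sentiment, toy_sarcasm))  # sarcastic-positive
```

Running sarcasm detection only on the positive class, as the study does, keeps the second model focused on the cases where sarcasm actually inverts the meaning.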