Cyberbullying on Indonesian-language social media has become a serious issue with significant psychological and social consequences, necessitating the development of reliable automated detection systems. However, the informal, ambiguous, and highly contextual nature of social media language, including the frequent use of slang and sarcasm, poses substantial challenges for conventional text classification approaches. This study proposes a hybrid cyberbullying detection model that integrates the domain-specific pre-trained language model IndoBERTweet with a Bidirectional Long Short-Term Memory (BiLSTM) architecture. IndoBERTweet is employed to generate contextualized semantic representations aligned with the linguistic characteristics of Indonesian Twitter data, while BiLSTM is utilized to capture bidirectional sequential dependencies at the sentence level. Experiments were conducted using a publicly available, manually annotated Indonesian Twitter dataset consisting of 13,091 samples, which were reformulated into a binary classification scheme. To address class imbalance, a combination of class weighting and label smoothing was applied during model training. Model performance was evaluated using Accuracy, Precision, Recall, F1-Score, ROC-AUC, and PR-AUC metrics. Experimental results show that the IndoBERTweet–BiLSTM model achieved the best performance, with an F1-Score of 87.53%, Recall of 88.80%, Precision of 86.31%, ROC-AUC of 92.91%, and PR-AUC of 94.25%. The model consistently outperforms baselines based on IndoBERT and IndoBERT-p1 under identical architectural configurations. These findings highlight the critical role of domain alignment in enhancing cyberbullying detection performance for Indonesian social media text.
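
To make the imbalance-handling strategy concrete, the following is a minimal pure-Python sketch of a class-weighted, label-smoothed cross-entropy loss for the binary setting described above. The smoothing coefficient `eps = 0.1` and the class weights are illustrative assumptions, not the values used in the study, and the abstract does not specify the exact smoothing variant; this sketch uses the common formulation that assigns `1 - eps` to the true class and spreads `eps` over the remaining classes.

```python
import math


def smoothed_targets(label, eps=0.1, num_classes=2):
    """Label smoothing: replace the one-hot target with (1 - eps) on the
    true class and eps / (K - 1) on each other class (eps is illustrative)."""
    off = eps / (num_classes - 1)
    return [1.0 - eps if c == label else off for c in range(num_classes)]


def weighted_smoothed_ce(logits, label, class_weights, eps=0.1):
    """Cross-entropy against smoothed targets, scaled by the weight of the
    example's true class so the minority (cyberbullying) class contributes
    more to the gradient."""
    # numerically stable softmax over the class logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    targets = smoothed_targets(label, eps, num_classes=len(logits))
    ce = -sum(t * math.log(p) for t, p in zip(targets, probs))
    return class_weights[label] * ce


# Illustrative usage: upweighting the positive (cyberbullying) class by 2x
loss = weighted_smoothed_ce(logits=[0.3, 1.1], label=1,
                            class_weights=[1.0, 2.0], eps=0.1)
```

In a full training loop, the per-class weights would typically be derived from inverse class frequencies in the training split, and the loss would be averaged over a mini-batch of BiLSTM outputs.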