Depression detection can be developed by exploring social media content. However, classifying texts indicative of depression faces a major challenge in the form of imbalanced class distributions, which can degrade a model's generalization ability. This study analyzes how class-imbalance handling methods affect the performance of the IndoBERT model in classifying Indonesian depression-indicative texts, with an emphasis on training stability as reflected in the dynamics of training loss and validation loss. The dataset consists of 3,863 samples that were cleaned, deduplicated, tokenized, encoded, and split with stratification into training, validation, and test sets. The IndoBERT-base-p1 model was fine-tuned under three training scenarios, namely baseline, class weight, and focal loss, with an early stopping mechanism based on validation loss. The test results show that the baseline IndoBERT scenario achieved an accuracy of 77.52%, a weighted precision of 0.7752, a weighted recall of 0.7752, a weighted F1-score of 0.7737, and a ROC-AUC of 0.8528 with a relatively stable training pattern. The class weight method achieved an accuracy of 74.68%, a weighted F1-score of 0.7467, and a ROC-AUC of 0.8342, indicating improved class discrimination ability but accompanied by a decrease in overall accuracy. Meanwhile, the focal loss method achieved an accuracy of 72.87%, a weighted F1-score of 0.7291, and a ROC-AUC of 0.8188, with more balanced training characteristics than the class weight scenario. These findings suggest that handling class imbalance does not necessarily improve overall performance, so model evaluation should consider the balance between accuracy, sensitivity, and training stability.
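Both imbalance-handling scenarios reweight the cross-entropy objective, with focal loss additionally down-weighting examples the model already classifies confidently. As a rough illustration only (not the study's actual training code; the `gamma` value of 2.0 and the `alpha` weight are common defaults, not values reported here), a binary focal loss can be sketched as:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=None):
    """Binary focal loss: scales cross-entropy by (1 - p_t)^gamma.

    p     -- predicted probability of the positive class
    y     -- true label (0 or 1)
    gamma -- focusing parameter; gamma=0 recovers plain cross-entropy
    alpha -- optional class weight for the positive class (hypothetical)
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)        # probability of the true class
    weight = (1 - p_t) ** gamma             # focusing term
    if alpha is not None:
        weight = weight * np.where(y == 1, alpha, 1 - alpha)
    return -weight * np.log(p_t)

# An easy (confident, correct) example contributes far less loss than a
# hard (uncertain) one, shifting training effort toward the minority class:
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.55]), np.array([1]))
```

With `gamma=0` and no `alpha`, the function reduces to standard cross-entropy, which matches the baseline scenario; the class-weight scenario corresponds to keeping `gamma=0` but supplying per-class weights.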