Sentiment analysis has increasingly relied on semi-supervised learning (SSL), particularly for its efficiency in exploiting large amounts of unlabeled data. This study employed four Indonesian datasets: Ridife (sentiment classification), Emotion IndoNLU (emotion classification), Sentiment IndoNLU (sentiment classification), and Hate Speech (offensive content detection). An LSTM model was trained on the labeled data and used to generate pseudo-labels for the unlabeled data over three iterations. The quality of the pseudo-labels was evaluated using Random Forest, Logistic Regression, and Support Vector Machine (SVM) classifiers. The LSTM model demonstrated varying effectiveness across the datasets. On the Ridife dataset, the LSTM achieved an accuracy of 70.23%, slightly lower than Random Forest but higher than Logistic Regression and SVM. On the Sentiment IndoNLU dataset, the LSTM reached 86.12% accuracy, a strong result yet slightly below Random Forest and Logistic Regression. The Emotion IndoNLU dataset showed similar performance across all models, while on the Hate Speech dataset the LSTM performed well, reaching 86.49% accuracy. The results indicate that LSTM-based SSL can effectively generate pseudo-labels and improve model performance, but its effectiveness depends on the dataset and task. This study underscores the need for further research into optimizing pseudo-labeling techniques and exploring advanced NLP models to improve sentiment and emotion analysis in diverse languages.
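The self-training procedure summarized above can be illustrated with a minimal sketch: an LSTM is fit on the labeled portion, confident predictions on the unlabeled portion are added as pseudo-labels, the process repeats for three iterations, and the resulting training set is evaluated with the three baseline classifiers. The network architecture, confidence threshold, epoch counts, and feature representation for the baselines are illustrative assumptions, not the exact configuration used in the study.

```python
# Sketch of LSTM-based self-training (pseudo-labeling); hyperparameters are assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score


def build_lstm(vocab_size, num_classes):
    # Simple embedding + LSTM classifier; layer sizes are illustrative.
    model = Sequential([
        Embedding(vocab_size, 128),
        LSTM(64),
        Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def self_train(X_lab, y_lab, X_unlab, vocab_size, num_classes,
               iterations=3, confidence=0.9):
    # Repeat for the three pseudo-labeling iterations described in the study.
    for _ in range(iterations):
        lstm = build_lstm(vocab_size, num_classes)
        lstm.fit(X_lab, y_lab, epochs=5, batch_size=64, verbose=0)
        if len(X_unlab) == 0:
            break
        probs = lstm.predict(X_unlab, verbose=0)
        pseudo = probs.argmax(axis=1)
        keep = probs.max(axis=1) >= confidence  # retain only confident pseudo-labels (assumed threshold)
        X_lab = np.concatenate([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pseudo[keep]])
        X_unlab = X_unlab[~keep]
    return lstm, X_lab, y_lab


def evaluate_baselines(X_train, y_train, X_test, y_test):
    # Evaluate the pseudo-labeled training set with the three baseline classifiers;
    # X_train/X_test are assumed to be fixed-length feature vectors (e.g., TF-IDF).
    for clf in (RandomForestClassifier(),
                LogisticRegression(max_iter=1000),
                SVC()):
        clf.fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(type(clf).__name__, f"accuracy: {acc:.4f}")
```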