Sentiment analysis, a widely used application of Natural Language Processing (NLP), a branch of artificial intelligence, determines whether a sentence expresses positive, negative, or neutral sentiment, particularly in the context of opinions shared online. This study compares four models, namely Naïve Bayes, Support Vector Machine (SVM), XGBoost, and IndoBERT, to identify the best-performing one. The dataset, obtained from Kaggle, consists of 5,644 neutral, 2,934 positive, and 2,606 negative data points. Prior to model training, the dataset underwent a preprocessing stage comprising case folding, cleansing, tokenization, stemming, and stopword removal. The data were then used to train the four aforementioned methods. The results show that Naïve Bayes achieved an accuracy of 75%, SVM 79%, and XGBoost 76%, while IndoBERT achieved the highest accuracy at 85%. It can therefore be concluded that, on this dataset, IndoBERT performed sentiment classification most effectively.
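The preprocessing stage described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stopword list is an abbreviated placeholder, and the stemmer is left as an identity function (a real Indonesian pipeline would typically use a dedicated stemmer such as Sastrawi).

```python
import re

# Hypothetical, abbreviated Indonesian stopword list; a real pipeline
# would load a full list from a stopword resource.
STOPWORDS = {"yang", "dan", "di", "ke", "dari", "ini", "itu"}

def stem(token: str) -> str:
    # Placeholder stemmer (identity); an Indonesian stemmer like
    # Sastrawi would reduce words to their root form here.
    return token

def preprocess(text: str) -> list[str]:
    """Apply the five preprocessing steps to one raw sentence."""
    # 1. Case folding: lowercase everything.
    text = text.lower()
    # 2. Cleansing: remove URLs, mentions, digits, and punctuation.
    text = re.sub(r"https?://\S+|@\w+|[^a-z\s]", " ", text)
    # 3. Tokenization: split on whitespace.
    tokens = text.split()
    # 4. Stemming (placeholder, see above).
    tokens = [stem(t) for t in tokens]
    # 5. Stopword removal.
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("Pelayanan di toko ini sangat BAGUS! https://t.co/x"))
# → ['pelayanan', 'toko', 'sangat', 'bagus']
```

The cleaned token lists would then be vectorized (e.g. with TF-IDF for Naïve Bayes, SVM, and XGBoost) before training; IndoBERT instead uses its own subword tokenizer on the raw text.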