This research presents a comparative analysis of four popular sentiment classification models: Naive Bayes, Support Vector Machine (SVM), Long Short-Term Memory (LSTM) networks, and Bidirectional Encoder Representations from Transformers (BERT). The models are evaluated on the Amazon Product Reviews dataset for binary sentiment classification (positive vs. negative). The results show that BERT outperforms the other models in accuracy, precision, recall, and F1-score, demonstrating its superior ability to capture complex contextual relationships in text. LSTM also performs well, particularly in recalling positive sentiments, but is outperformed by BERT overall. Conversely, Naive Bayes and SVM exhibit lower accuracy and higher false positive rates, highlighting their limitations in handling nuanced, context-dependent text. This study emphasizes the trade-offs between traditional machine learning models and advanced deep learning techniques.
Copyright © 2026