Sentiment analysis plays a crucial role in understanding user perceptions of products and services in the digital era, yet its adoption is still constrained by high computational requirements. This study evaluates the impact of transformer-based Natural Language Processing (NLP) models, including BERT, RoBERTa, and ELECTRA, on the quality and efficiency of sentiment analysis, particularly in multilingual and real-time settings. It uses a Systematic Literature Review (SLR) following the PRISMA protocol to assess the performance, challenges, and solutions reported for various NLP models. The results show that transformer-based models consistently outperform traditional approaches: BERT and RoBERTa achieve accuracy above 95% with F1-scores of 0.92–0.95, while ELECTRA records the highest accuracy, up to 98.09%, with average precision and recall above 0.90 on e-commerce data. Furthermore, transfer learning has been shown to reduce training time by 50–70% compared with conventional methods without compromising analysis quality. Nevertheless, the demand for large computational resources remains a major obstacle; strategies such as model distillation and data augmentation have proven effective at reducing computational load while maintaining high performance. These findings confirm that transformer-based NLP not only improves the quality of sentiment analysis but also opens opportunities for innovation in cross-language and cross-domain applications. The study recommends optimizing models for low-resource languages and developing real-time systems to achieve inclusivity and efficiency in modern data processing.
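To make the transfer-learning approach discussed above concrete, the following is a minimal sketch of transformer-based sentiment classification using the Hugging Face transformers library. The checkpoint name and the example reviews are illustrative assumptions, not models or data evaluated in this review.

```python
# Minimal sketch: sentiment analysis with a pretrained transformer
# (Hugging Face transformers). Assumes: pip install transformers torch
from transformers import pipeline

# Load a pretrained sentiment classifier; the checkpoint below is a
# common English SST-2 model chosen for illustration only.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Hypothetical e-commerce reviews for demonstration.
reviews = [
    "The delivery was fast and the product works perfectly.",
    "Terrible quality, it broke after two days.",
]

# Each result contains a label (POSITIVE/NEGATIVE) and a confidence score.
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2f}) :: {review}")
```

Because the classifier reuses weights pretrained on large corpora and fine-tuned on a sentiment dataset, no task-specific training is needed at inference time, which is the efficiency gain transfer learning provides over training conventional models from scratch.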