Fake reviews have emerged as a serious threat to the integrity of digital platforms, particularly e-commerce and online review sites. This study explores the application of RoBERTa (Robustly Optimized BERT Pretraining Approach), a transformer-based architecture for natural language processing (NLP), to the automatic detection of fake reviews. The methodology comprises data collection from online platforms, contextual feature extraction using RoBERTa embeddings, supervised model training, and evaluation with classification metrics: accuracy, precision, recall, and F1-score. During training, the training loss shows a clear convergence trend while the validation loss remains relatively unstable, indicating challenges in model generalization. Nevertheless, the experimental results show that RoBERTa outperforms competing approaches such as Logistic Regression PU, K-NN with EM, and LDA-BPTextCNN, achieving an accuracy of 86.25%. These findings highlight RoBERTa's strong potential for detecting manipulative content and underscore its value as a tool for building a transparent and trustworthy digital ecosystem.
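The evaluation metrics named above (accuracy, precision, recall, F1-score) follow their standard definitions for binary classification. The sketch below is a generic illustration of how they are computed from model predictions; the label and prediction values shown are placeholders, not data from this study:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels
    (1 = fake review, 0 = genuine review)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative labels and predictions only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
```

In practice, these values would be computed on the held-out test split of the review dataset after fine-tuning the classifier.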
Copyright © 2025