Benchmarking Transformer Models Against Classical Approaches for Fake Review Detection on the Deceptive Opinion Spam Corpus Lokeshwaran, K.; Komal Kumar, N.; Senthil Murugan, J.; Elanangai, V.; Sathya, S.
International Journal of Environment, Engineering and Education Vol. 7 No. 3 (2025)
Publisher : Three E Science Institute

DOI: 10.55151/ijeedu.v7i3.334

Abstract

In today’s digital environment, online reviews have become a key factor influencing customer decisions, particularly in e-commerce, travel, and the hospitality industry, where buyers depend heavily on the shared experiences of others before making a choice. At the same time, the growing problem of fake or fabricated reviews has raised serious concerns, as it undermines the reliability of online platforms and misleads consumers. Detecting such deceptive reviews is difficult, since their language often closely resembles that of genuine opinions. The present work compares the performance of traditional machine learning techniques with transformer-based deep learning models for fake review identification. As baselines, Logistic Regression and a Linear SVM were applied to TF-IDF features, while the transformer architectures BERT, RoBERTa, and XLNet were fine-tuned on the Deceptive Opinion Spam Corpus. The results indicated that the classical models achieved accuracies in the mid-80 percent range, whereas the transformer-based models performed considerably better, approaching or exceeding 90 percent. Among the transformers, RoBERTa showed the most balanced performance across precision and recall, XLNet gave the highest recall, which matters most when sensitivity is the primary concern, and BERT achieved competitive results with lower computational demands.
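As an illustration of the classical baseline the abstract describes (TF-IDF features with Logistic Regression and a Linear SVM), a minimal scikit-learn sketch might look like the following. The toy reviews and labels here are placeholders invented for the example, not data from the Deceptive Opinion Spam Corpus, and the exact feature settings (e.g., the n-gram range) are assumptions rather than the authors' reported configuration.

```python
# Sketch of a TF-IDF + classical-classifier baseline for fake review detection.
# Toy data only: label 1 = deceptive, 0 = truthful (hypothetical examples).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "The room was clean and the staff were friendly.",
    "Best hotel ever, absolutely perfect in every single way!!!",
    "Check-in took a while but the bed was comfortable.",
    "Amazing amazing amazing, you must stay here, truly flawless!",
]
labels = [0, 1, 0, 1]

# Train each baseline classifier on TF-IDF unigram+bigram features
# and predict on the training texts (for illustration only).
for clf in (LogisticRegression(max_iter=1000), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(reviews, labels)
    preds = model.predict(reviews)
    print(type(clf).__name__, preds.tolist())
```

In a real reproduction one would instead load the 1,600-review Deceptive Opinion Spam Corpus, hold out a test split, and report precision and recall alongside accuracy, since class sensitivity is central to the paper's comparison.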