The rapid growth of digital communication has intensified opinion exchange across languages and cultures on social media, enriching public discourse while also raising the risk of polarization that deepens social divisions. Conventional sentiment analysis methods that rely on translation often distort meaning, overlook emotional nuance, and fail to capture rhetorical devices such as irony and sarcasm, limiting their reliability in multilingual contexts. This study examines the capability of XLM-RoBERTa, a multilingual transformer model pretrained on more than 100 languages, to address these challenges by generating consistent semantic representations and accommodating linguistic and cultural diversity without translation. The research combines a bibliometric analysis, conducted with VOSviewer on 357 Scopus-indexed publications from 2020 to 2025 to map research trends, with a literature review evaluating XLM-RoBERTa in sentiment and opinion analysis. The findings reveal that although XLM-RoBERTa has been widely employed for sentiment classification, text categorization, and offensive language detection, research explicitly focused on multilingual opinion polarization remains limited. Benchmark evaluations further indicate that XLM-RoBERTa surpasses earlier multilingual models, achieving 79.6% accuracy on XNLI and an 81.2% F1-score on MLQA, confirming its robustness in capturing semantic nuance, cultural variation, and rhetorical complexity without translation. The novelty of this study lies in integrating trend mapping with methodological evaluation, establishing XLM-RoBERTa as a reliable framework for real-time monitoring of global public opinion, supporting evidence-based policymaking, and advancing scholarly understanding of multilingual communication dynamics in the digital era.
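As a minimal sketch of the translation-free property the abstract describes, the example below applies a single XLM-RoBERTa-based classifier to texts in several languages via the Hugging Face transformers pipeline API. The checkpoint name (cardiffnlp/twitter-xlm-roberta-base-sentiment, a publicly available XLM-RoBERTa model fine-tuned for multilingual sentiment) and the sample sentences are illustrative assumptions, not the configuration used in the studies this paper reviews.

```python
# Minimal sketch: translation-free multilingual sentiment scoring with a
# single XLM-RoBERTa encoder. The checkpoint below is an assumption for
# illustration; the reviewed studies may instead fine-tune
# xlm-roberta-base / xlm-roberta-large on task-specific data.
from transformers import pipeline

# Publicly available XLM-RoBERTa checkpoint fine-tuned for sentiment
# (negative / neutral / positive) on multilingual social media text.
classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

# The same model scores inputs in different languages directly,
# with no intermediate translation step.
texts = [
    "This policy will help everyone.",             # English
    "Esta política solo beneficia a unos pocos.",  # Spanish
    "Cette décision est une honte absolue.",       # French
]

for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {text}")
```

Because the encoder shares one subword vocabulary and representation space across its pretraining languages, a single fine-tuned classification head applies to all inputs; this shared space is the property credited above with avoiding translation-induced distortion of meaning and emotional nuance.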