The proliferation of social media has established a global discursive space in which opinions are expressed simultaneously across diverse languages. However, the scarcity of linguistic resources for Low-Resource Languages (LRLs) remains a primary obstacle to accurate sentiment analysis. This paper examines the challenges and strategies of Cross-Language Sentiment Analysis (CLSA) on multilingual social media platforms. Employing a Systematic Literature Review (SLR) methodology, the study analyzes the efficacy of transfer learning techniques and pre-trained multilingual language models, specifically mBERT and XLM-RoBERTa. The review indicates that while multilingual models successfully bridge linguistic gaps, cultural nuances and local slang remain significant technical challenges. The study concludes that integrating cultural context into model architectures is essential for improving cross-lingual sentiment detection accuracy. These findings offer theoretical contributions to the development of Natural Language Processing (NLP) frameworks that are more inclusive of non-English languages within the digital ecosystem.
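To make the transfer-learning setting concrete, the sketch below shows zero-shot cross-lingual sentiment inference with an XLM-RoBERTa encoder via the Hugging Face transformers library. This is a minimal illustration under stated assumptions, not the implementation of any system reviewed here; the checkpoint name (cardiffnlp/twitter-xlm-roberta-base-sentiment) and the example sentences are assumptions chosen for demonstration.

```python
# Minimal sketch: cross-lingual sentiment inference with XLM-RoBERTa.
# The checkpoint below is an assumed, publicly available model fine-tuned
# on multilingual Twitter data; it is not a configuration from this review.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

# Because XLM-RoBERTa shares one subword vocabulary and encoder across
# ~100 languages, a classifier head trained largely on high-resource
# data can be applied directly to low-resource-language input.
examples = [
    "I absolutely love this update!",       # English
    "Saya kecewa dengan layanan ini.",      # Indonesian: "I am disappointed with this service."
    "Der Film war ganz in Ordnung.",        # German: "The movie was quite okay."
]
for text in examples:
    print(classifier(text))  # e.g. [{'label': 'negative', 'score': 0.93}]
```

Note that this shared-encoder transfer is precisely where the review locates the open problem: the model scores the literal text, so culture-specific slang or irony in an LRL can be misclassified even when the encoder covers the language.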