Found 2 Documents
Arabic NLP: A Survey of Pre-Processing and Representation Techniques Alkaabi, Hussein Ala'a; Jasim, Ali Kadhim; Darroudi, Ali
Journal of Computer Science, Information Technology and Telecommunication Engineering Vol 6, No 2 (2025)
Publisher : Universitas Muhammadiyah Sumatera Utara, Indonesia

DOI: 10.30596/jcositte.v6i2.25562

Abstract

The rapid growth of Arabic Natural Language Processing (NLP) has underscored the vital role of upstream tasks that prepare raw text for modeling. This review systematically examines the key steps in Arabic text pre-processing and representation learning, highlighting their impact on downstream NLP performance. We discuss the unique linguistic challenges posed by Arabic, such as rich morphology, orthographic ambiguity, dialectal diversity, and code-switching phenomena. The survey covers traditional rule-based and statistical methods as well as modern deep learning approaches, including subword tokenization and contextual embeddings. Special attention is given to how pre-trained language models like AraBERT and MARBERT interact with pre-processing pipelines, often redefining the balance between explicit text normalization and implicit representation learning. Furthermore, we analyze existing tools, benchmarks, and evaluation metrics, and identify persistent gaps such as dialect adaptation and Romanized Arabic (Arabizi) processing. By mapping current practices and open issues, this review aims to guide researchers and practitioners towards more robust, adaptive, and linguistically aware Arabic NLP pipelines, ensuring that the data fed into models is as clean, consistent, and semantically meaningful as possible.
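To make the normalization step the abstract refers to concrete, the sketch below shows a few Arabic pre-processing operations commonly applied before tokenization (stripping diacritics and tatweel, unifying alef, taa marbuta, and alef maqsura variants). This is a minimal illustrative sketch of widely used conventions, not the specific pipeline evaluated in the survey; function and constant names are assumptions.

```python
import re

# Short-vowel marks (tashkeel, U+064B..U+0652) plus the dagger alif (U+0670).
DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")
TATWEEL = "\u0640"  # elongation character, purely typographic

def normalize_arabic(text: str) -> str:
    """Apply common orthographic normalization steps for Arabic text."""
    text = DIACRITICS.sub("", text)       # strip short-vowel marks
    text = text.replace(TATWEEL, "")      # remove elongation
    # Unify alef variants (madda, hamza above/below) to bare alef.
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)
    text = text.replace("\u0629", "\u0647")  # taa marbuta -> haa
    text = text.replace("\u0649", "\u064A")  # alef maqsura -> yaa
    return text
```

As the survey notes, whether such aggressive normalization helps depends on the downstream model: subword tokenizers in models like AraBERT can absorb some of this variation implicitly.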
A Multi-Feature Fusion Framework for Sentiment Analysis Based on Textual and Affective Signals Alkaabi, Hussein Ala'a; Jasim, Ali Kadhim
International Journal of Artificial Intelligence Research Vol 9, No 2 (2025): December
Publisher : Universitas Dharma Wacana

DOI: 10.29099/ijair.v9i2.1634

Abstract

Sentiment analysis of social media content, particularly on platforms like Twitter, presents significant challenges due to the informal, brief, and context-dependent nature of user-generated text. Traditional lexicon-based and shallow machine learning approaches often fail to capture nuanced sentiment expressions, especially in the presence of slang, abbreviations, sarcasm, and emotionally charged language. To address these limitations, this paper proposes a novel tri-stream feature fusion framework that integrates contextual semantics, sequential dependencies, and affective signals for robust sentiment classification. The framework employs RoBERTa to extract rich contextual embeddings, Bidirectional Long Short-Term Memory (BiLSTM) networks to capture word-order and temporal patterns, and lexicon-based emotion vectors to enhance emotional cue detection. These heterogeneous features are concatenated at the representation level to form a comprehensive feature space, which is subsequently used to predict sentiment polarity via a fully connected neural network classifier. Extensive experiments conducted on the Sentiment140 dataset, comprising 1.6 million labeled tweets, demonstrate that the proposed approach significantly outperforms conventional baselines and recent hybrid models, achieving an accuracy of 92.1%. Additionally, ablation studies and misclassification analyses reveal each feature stream’s complementary contributions and highlight challenges in detecting sarcasm and implicit sentiment. Future work will integrate sarcasm-aware components and external knowledge sources to further enhance model interpretability and robustness.
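The abstract describes fusing three feature streams by concatenation at the representation level before a fully connected classifier. The sketch below illustrates that fusion step only; the dimensions and names are assumptions for illustration, not the authors' actual configuration, and the real framework would produce these vectors from RoBERTa, a BiLSTM, and an emotion lexicon rather than from placeholders.

```python
import numpy as np

# Assumed stream sizes (illustrative, not from the paper):
CONTEXT_DIM = 768  # e.g. a RoBERTa sentence embedding
SEQ_DIM = 256      # e.g. a BiLSTM final hidden state (both directions)
EMO_DIM = 10       # e.g. a lexicon-based emotion-category vector

def fuse_streams(contextual: np.ndarray,
                 sequential: np.ndarray,
                 affective: np.ndarray) -> np.ndarray:
    """Concatenate heterogeneous feature streams into one fused vector,
    which would then feed a fully connected classification head."""
    return np.concatenate([contextual, sequential, affective])

fused = fuse_streams(np.zeros(CONTEXT_DIM),
                     np.zeros(SEQ_DIM),
                     np.zeros(EMO_DIM))
```

Concatenation keeps each stream's information intact and lets the downstream classifier learn how to weight contextual, sequential, and affective evidence, which is what the ablation studies in the paper probe stream by stream.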