Found 2 Documents

From Static to Contextual: A Survey of Embedding Advances in NLP
Alkaabi, Hussein; Jasim, Ali Kadhim; Darroudi, Ali
PERFECT: Journal of Smart Algorithms, Vol. 2 No. 2 (2025), Research Article, July 2025
Publisher : LEMBAGA KAJIAN PEMBANGUNAN PERTANIAN DAN LINGKUNGAN (LKPPL)

DOI: 10.62671/perfect.v2i2.77

Abstract

Embedding techniques have been a cornerstone of Natural Language Processing (NLP), enabling machines to represent textual data in a form that captures semantic and syntactic relationships. Over the years, the field has witnessed a significant evolution—from static word embeddings, such as Word2Vec and GloVe, which represent words as fixed vectors, to dynamic, contextualized embeddings like BERT and GPT, which generate word representations based on their surrounding context. This survey provides a comprehensive overview of embedding techniques, tracing their development from early methods to state-of-the-art approaches. We discuss the strengths and limitations of each paradigm, their applications across various NLP tasks, and the challenges they address, such as polysemy and out-of-vocabulary words. Furthermore, we highlight emerging trends, including multimodal embeddings, domain-specific representations, and efforts to mitigate embedding bias. By synthesizing the advancements in this rapidly evolving field, this paper aims to serve as a valuable resource for researchers and practitioners while identifying open challenges and future directions for embedding research in NLP.
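The distinction the abstract draws between static and contextual embeddings can be illustrated with a toy sketch (not from the paper; the vectors and the mixing rule are made-up assumptions): a static lookup returns one fixed vector for a polysemous word like "bank", while a context-sensitive representation shifts with the surrounding words.

```python
# Illustrative sketch (not from the paper): a static embedding assigns one
# fixed vector per word, so "bank" is identical in every sentence; a toy
# stand-in for contextual models (e.g., BERT) mixes the word's vector with
# the average of its context vectors, so the representation changes.
static = {
    "bank":  [0.5, 0.5],
    "river": [0.0, 1.0],
    "money": [1.0, 0.0],
}

def static_embed(word, context):
    # Context is ignored entirely -- the root of the polysemy problem.
    return static[word]

def contextual_embed(word, context):
    # Average the context vectors, then blend with the word's own vector.
    ctx = [static[w] for w in context if w in static and w != word]
    if not ctx:
        return static[word]
    avg = [sum(dim) / len(ctx) for dim in zip(*ctx)]
    return [(a + b) / 2 for a, b in zip(static[word], avg)]

# Same static vector regardless of context:
print(static_embed("bank", ["river"]) == static_embed("bank", ["money"]))  # True
# Contextual vectors diverge for the two senses of "bank":
print(contextual_embed("bank", ["river", "bank"]))  # [0.25, 0.75]
print(contextual_embed("bank", ["money", "bank"]))  # [0.75, 0.25]
```

Real contextual models compute this mixing with many layers of learned attention rather than a simple average, but the contrast is the same one the survey traces.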
Arabic NLP: A Survey of Pre-Processing and Representation Techniques
Alkaabi, Hussein Ala'a; Jasim, Ali Kadhim; Darroudi, Ali
Journal of Computer Science, Information Technology and Telecommunication Engineering, Vol. 6, No. 2 (2025)
Publisher : Universitas Muhammadiyah Sumatera Utara, Indonesia

DOI: 10.30596/jcositte.v6i2.25562

Abstract

The rapid growth of Arabic Natural Language Processing (NLP) has underscored the vital role of upstream tasks that prepare raw text for modeling. This review systematically examines the key steps in Arabic text pre-processing and representation learning, highlighting their impact on downstream NLP performance. We discuss the unique linguistic challenges posed by Arabic, such as rich morphology, orthographic ambiguity, dialectal diversity, and code-switching phenomena. The survey covers traditional rule-based and statistical methods and modern deep learning approaches, including subword tokenization and contextual embeddings. Special attention is given to how pre-trained language models like AraBERT and MARBERT interact with pre-processing pipelines, often redefining the balance between explicit text normalization and implicit representation learning. Furthermore, we analyze existing tools, benchmarks, and evaluation metrics, and identify persistent gaps such as dialect adaptation and Romanized Arabic (Arabizi) processing. By mapping current practices and open issues, this review aims to guide researchers and practitioners towards more robust, adaptive, and linguistically-aware Arabic NLP pipelines, ensuring that the data fed into models is as clean, consistent, and semantically meaningful as possible.
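The orthographic ambiguity and normalization steps the abstract describes can be made concrete with a minimal sketch (not taken from the survey; the specific rules chosen here are simplified assumptions): stripping short-vowel diacritics, unifying alef variants, and removing tatweel are among the normalizations commonly applied before tokenization.

```python
# Illustrative sketch (simplified assumptions, not from the survey): a few
# common Arabic normalization rules applied upstream of tokenization.
import re

# Combining short-vowel marks: tanween, fatha, damma, kasra, shadda, sukun.
DIACRITICS = re.compile(r"[\u064B-\u0652]")

def normalize_arabic(text: str) -> str:
    text = DIACRITICS.sub("", text)        # strip diacritics
    text = re.sub("[إأآ]", "ا", text)      # unify hamzated alef variants
    text = text.replace("ة", "ه")          # taa marbuta -> haa
    text = text.replace("ى", "ي")          # alef maqsura -> yaa
    text = text.replace("ـ", "")           # remove tatweel (kashida)
    return text

print(normalize_arabic("الْكِتَابُ"))  # -> "الكتاب"
print(normalize_arabic("إلى"))        # -> "الي"
```

Whether such explicit normalization helps or hurts depends on the downstream model: as the abstract notes, subword-level pre-trained models like AraBERT shift some of this burden into implicit representation learning.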