Advances in NLP have accelerated research on Automatic Text Summarization, but development remains skewed toward high-resource languages. Low-resource languages are underrepresented owing to limited digital corpora, scarce linguistic tools, and the lack of locally suitable pre-trained models. This study aims to map and analyze research trends in extractive summarization for low-resource languages and to formulate future research directions. It employs a systematic literature review following the PRISMA 2020 protocol. Articles were collected from the ScienceDirect, IEEE Xplore, and Google Scholar databases, covering the period 2020–2025. Nine publications meeting the inclusion criteria were analyzed in depth against six research questions (RQs) formulated using the PICOC framework. Most studies rely on unsupervised approaches such as TextRank, LexRank, and LSA, with key features including word frequency, sentence position, and semantic proximity. News corpora dominate as the application domain, while system evaluation remains limited to traditional metrics such as ROUGE and F1-score. Identified challenges include limited annotated datasets, the absence of local NLP models, and the lack of meaning-based evaluation approaches. The review confirms that linguistic inequality persists in text summarization, with most research relying on unsupervised methods and lexical evaluation. To address this, three strategic directions are recommended: developing open, diverse language corpora; adopting adaptable lightweight NLP models; and advancing semantic evaluation approaches. Cross-community and interdisciplinary collaboration is essential for building more inclusive and sustainable automatic text summarization systems.
Copyright © 2026