Text classification is a fundamental natural language processing (NLP) task that assigns text documents to predefined categories or labels. Artificial intelligence (AI) techniques, particularly machine learning and deep learning, have significantly enhanced text classification capabilities. For Arabic, however, which lacks comprehensive resources in this domain, the challenge is even more pronounced. Hierarchical text classification, which organizes categories into a tree-like structure, adds further complexity owing to similarities and connections between categories across different levels. To address this challenge, we propose a deep learning model based on BERT (Bidirectional Encoder Representations from Transformers) and BiLSTM (Bidirectional Long Short-Term Memory). Experimental evaluations demonstrate the effectiveness of our approach compared with existing methods, yielding promising results. Our study contributes to advancing text classification methodologies, particularly for Arabic language processing.
Copyright © 2024