This paper presents a deep learning approach to Arabic question classification, leveraging the strengths of pre-trained language models and advanced neural network architectures to address the unique challenges of Arabic text processing. The proposed methodology employs BERT and Word2Vec to generate contextualized, semantically rich representations of Arabic questions, effectively capturing their linguistic intricacies and morphological complexity. These embeddings are fed into a hybrid classification framework combining Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (BiLSTM) networks, enabling the extraction of both local and sequential features from the input. Experimental results demonstrate the model's effectiveness, achieving an accuracy of 85.12% along with high precision, recall, and F1 scores. These findings highlight the potential of integrating pre-trained Arabic-specific language models with hybrid deep learning architectures, providing a robust solution for Arabic question classification. This work contributes to advancing Arabic natural language processing, offering a strong foundation for the development of high-performance question-answering systems and related applications.
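The hybrid pipeline described above could be sketched as follows. This is a minimal, illustrative PyTorch implementation, not the authors' exact model: it assumes precomputed token embeddings (e.g., from BERT or Word2Vec) as input, and all layer sizes, kernel widths, and the class count are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class CNNBiLSTMClassifier(nn.Module):
    """Illustrative sketch of a hybrid CNN + BiLSTM question classifier.

    Assumes precomputed embeddings of shape (batch, seq_len, embed_dim),
    e.g. 768-dim BERT outputs; all hyperparameters here are assumptions.
    """
    def __init__(self, embed_dim=768, num_filters=128, kernel_size=3,
                 lstm_hidden=128, num_classes=6):
        super().__init__()
        # 1-D convolution over the token axis extracts local n-gram features
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        self.relu = nn.ReLU()
        # BiLSTM models sequential dependencies over the convolved features
        self.bilstm = nn.LSTM(num_filters, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, embed_dim)
        x = embeddings.transpose(1, 2)           # (batch, embed_dim, seq_len)
        x = self.relu(self.conv(x))              # (batch, num_filters, seq_len)
        x = x.transpose(1, 2)                    # (batch, seq_len, num_filters)
        _, (h_n, _) = self.bilstm(x)             # h_n: (2, batch, lstm_hidden)
        # Concatenate final forward and backward hidden states
        h = torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * lstm_hidden)
        return self.fc(h)                        # (batch, num_classes) logits

model = CNNBiLSTMClassifier()
dummy = torch.randn(4, 16, 768)  # 4 questions, 16 tokens, 768-dim embeddings
logits = model(dummy)
print(logits.shape)
```

The convolution captures local word-order patterns within each question, while the BiLSTM reads the convolved feature sequence in both directions; the final classification layer operates on the concatenated forward and backward states.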
Copyright © 2025