The Al-Qur’an serves as a fundamental guide for Muslims, requiring both comprehension and practice. Accurate recitation according to tajweed rules is essential for a deeper understanding of its meaning. Despite the growing focus on classification across various modalities, studies specifically targeting audio remain relatively limited, motivating further exploration in this area. This study focused on classifying tajweed rules from recitation audio, leveraging the potential of Natural Language Processing (NLP) to support Qur’an research and to develop applications that help learners understand the Qur’an. Further study is therefore needed on the recognition of tajweed recitation rules, one of which is the noon saakin or tanween rule. Audio features were extracted using the Mel-Frequency Cepstral Coefficients (MFCC) technique, which has been widely adopted in audio processing studies. These features were subsequently used to train a classification model based on a Deep Neural Network (DNN). Experimental results show that the DNN classification model achieves an overall accuracy of 71%, with per-class F1 scores of 0.80 for iqlab, 0.46 for idgham, 0.77 for idzhar, and 0.72 for ikhfa. When the model was tested on new, unseen data, with one sample per class, it achieved a success rate of 50%. These findings indicate that the classification model needs further improvement, either in its design or in the diversity of the audio data, particularly in recognizing the idgham, idzhar, and ikhfa rules.
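
The abstract describes a two-stage pipeline: MFCC feature extraction followed by a DNN classifier over the four noon saakin/tanween classes. The sketch below illustrates one plausible realization of that pipeline, assuming librosa for MFCC extraction and Keras for the network; the file paths, layer sizes, and hyperparameters are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of an MFCC -> DNN tajweed classifier (illustrative only).
import numpy as np
import librosa
import tensorflow as tf

CLASSES = ["iqlab", "idgham", "idzhar", "ikhfa"]

def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Load a recitation clip and return a fixed-length MFCC feature vector."""
    signal, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Average over time frames so clips of different lengths map to one vector.
    return mfcc.mean(axis=1)

def build_dnn(input_dim: int, n_classes: int = len(CLASSES)) -> tf.keras.Model:
    """A small fully connected classifier over the pooled MFCC features."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: audio_paths and labels would come from the dataset.
# X = np.stack([extract_mfcc(p) for p in audio_paths])  # feature matrix
# y = np.array(labels)                                  # class indices 0..3
# model = build_dnn(input_dim=X.shape[1])
# model.fit(X, y, epochs=50, batch_size=16, validation_split=0.2)
```

Time-averaging the MFCC frames is one simple way to obtain fixed-length inputs for a dense network; the study's actual feature aggregation may differ.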