Azis, Aaz Muhammad Hafidz
Unknown Affiliation

Published: 2 Documents
Automatic Detection of Hijaiyah Letters Pronunciation using Convolutional Neural Network Algorithm
Gerhana, Yana Aditia; Azis, Aaz Muhammad Hafidz; Ramdania, Diena Rauda; Dzulfikar, Wildan Budiawan; Atmadja, Aldy Rialdy; Suparman, Deden; Rahayu, Ayu Puji
JOIN (Jurnal Online Informatika) Vol 7 No 1 (2022)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v7i1.882

Abstract

Speech recognition technology is used in learning to read the letters of the Qur'an. This study implements the CNN algorithm to recognize the pronunciation of the hijaiyah letters. The pronunciation audio is converted to features using Mel-frequency cepstral coefficients (MFCC) and then classified with a deep learning model built on the CNN algorithm. The system was developed following the CRISP-DM process model. Testing on 616 voice recordings covering the 28 hijaiyah letters yielded a best accuracy of 62.45%, precision of 75%, recall of 50%, and F1-score of 58%.
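The MFCC extraction step mentioned in the abstract can be sketched in plain numpy. The paper does not publish code, so the frame size, hop length, filter count, and coefficient count below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Simplified MFCC: frame -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate; keep the first n_ceps coefficients
    mel_energy = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return mel_energy @ dct.T  # shape: (n_frames, n_ceps)
```

In the study, a matrix of this shape (frames x coefficients) would then be fed to the CNN classifier as a 2-D input; a library such as librosa computes the same representation in one call.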
XGBoost and Convolutional Neural Network Classification Models on Pronunciation of Hijaiyah Letters According to Sanad
Azis, Aaz Muhammad Hafidz; Lestari, Dessi Puji
JOIN (Jurnal Online Informatika) Vol 8 No 2 (2023)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v8i2.1081

Abstract

According to Sanad, the pronunciation of hijaiyah letters serves as the benchmark for correct or valid recitation, based on the makhraj (articulation point) and the properties of each letter. However, the limited number of Qur'anic Sanad teachers remains an obstacle to learning the Qur'an. This study aims to identify the best-performing combination of classification models for a voice recognition system that supports learning without direct interaction with a teacher. The methods employed are the XGBoost algorithm and CNN. Of the 12 letter-trait labels, the CNN model was selected for 10 (traits S1, S2, S4, S5, T1, T2, T3, T4, T5, and T6), while the XGBoost model was applied to trait labels S3 and T7. With additional training data, the system achieved an average accuracy of 78.14% for the S traits (letters with opposing properties), 70.69% for the T traits (letters without opposing properties), and an overall average of 73.79% per letter.
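The per-label model assignment described above amounts to picking, for each trait label, whichever classifier scores best on held-out data. A minimal sketch of that selection rule, using hypothetical validation accuracies (the numbers below are invented for illustration and are not the study's results):

```python
# Hypothetical validation accuracy per trait label for each candidate model.
val_acc = {
    "S1": {"cnn": 0.81, "xgboost": 0.74},
    "S3": {"cnn": 0.69, "xgboost": 0.76},
    "T4": {"cnn": 0.72, "xgboost": 0.68},
    "T7": {"cnn": 0.65, "xgboost": 0.71},
}

# For each label, keep the model with the highest validation accuracy.
chosen = {label: max(scores, key=scores.get) for label, scores in val_acc.items()}
```

Under this rule, a label like S3 or T7 ends up with XGBoost while the rest go to the CNN, mirroring the 10/2 split reported in the abstract.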