Articles

Found 2 Documents
Journal: Journal of Applied Data Sciences

Modeling Ramadan Hilal Classification with Image Processing Technology Using YOLO Algorithm Anggraini, Nenny; Zulkifli, Zulkifli; Hakiem, Nashrul
Journal of Applied Data Sciences Vol 5, No 3: September 2024
Publisher : Bright Publisher

DOI: 10.47738/jads.v5i3.311

Abstract

This research aims to create a model for classifying the hilal using the YOLO algorithm. Determining the beginning of the month of Ramadan is an important aspect of the Islamic calendar that affects the implementation of fasting. With technological advances, especially in image processing, there is potential to overcome the limitations of the conventional methods currently used in hilal detection for determining the beginning of Ramadan. This research uses the prototyping method in its implementation. The dataset comes from videos on the BMKG YouTube channel and from images from various sources such as the NASA Planetary Data System and Google Images. The YOLOv5 and YOLOv8 algorithms are used to develop the object detection model. The novelty of this research is the use of the YOLO algorithm with video datasets to detect the hilal in order to determine the beginning of the months of Ramadan and Shawwal. The best-performing model, YOLOv5m trained for 100 epochs with a batch size of 30, achieved a precision of 0.838 and an mAP@0.5:0.95 of 0.735. The results indicate that YOLOv5m is more effective in hilal detection, providing a novel approach to determining the beginning of Ramadan and Shawwal with greater accuracy and consistency. This integration of advanced object detection technology with religious practice offers a significant improvement over traditional methods.
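The reported metrics (precision 0.838, mAP@0.5:0.95 0.735) are standard detection measures built on intersection-over-union (IoU): mAP@0.5:0.95 averages average precision over ten IoU thresholds from 0.50 to 0.95. A minimal stdlib sketch of the underlying IoU computation, assuming boxes in (x1, y1, x2, y2) form; the function name and layout are illustrative, not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The threshold sweep behind mAP@0.5:0.95: 0.50, 0.55, ..., 0.95.
THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]
```

A predicted hilal box counts as a true positive at a given threshold only when its IoU with a ground-truth box meets that threshold, so the metric rewards increasingly tight localization.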
CNN-LSTM with Multi-Acoustic Features for Automatic Tajweed Mad Rule Classification Anggraini, Nenny; Rahman, Yusuf; Hidayanto, Achmad Nizar; Sukmana, Husni Teja
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1062

Abstract

The rules of mad recitation in the Qur’an are a crucial aspect of tajwīd, governing the lengthening of vowel sounds that affect both meaning and recitational accuracy. Despite its importance, there is currently no reliable automatic system capable of classifying mad rules based on voice input. This study proposes a deep learning-based approach using a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) model to automatically classify mad rules from Qur’anic recitations. The research follows the CRISP-DM methodology, covering data understanding, preparation, modeling, and evaluation stages. Acoustic features were extracted from 3,816 annotated audio segments of Surah Al-Fātiḥah, combining Mel-Frequency Cepstral Coefficients (MFCC), Chroma, Spectral Contrast, and Root Mean Square (RMS) to represent phonetic and prosodic attributes. The CNN layers captured spatial characteristics of the spectrum, while LSTM layers modeled temporal dependencies of the audio. Experimental results show that the combination of all four features achieved an accuracy of 97.21%, precision of 95.28%, recall of 95.22%, and F1-score of 95.25%. These findings indicate that multi-feature integration enhances model robustness and interpretability. The proposed CNN-LSTM framework demonstrates potential for practical deployment in voice-based tajwīd learning tools and contributes to the broader field of Qur’anic speech recognition by offering a systematic, ethically grounded, and data-driven approach to mad classification.
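Of the four acoustic features the abstract combines, RMS energy is the simplest to illustrate: slide a window over the waveform and take the root-mean-square of each frame. A minimal stdlib sketch; the frame and hop lengths are common defaults chosen here for illustration, not values reported in the paper:

```python
import math

def frame_rms(samples, frame_length=2048, hop_length=512):
    """Frame-wise root-mean-square energy of a mono waveform (list of floats)."""
    rms_values = []
    # Step through the signal one hop at a time; max(1, ...) yields one
    # (shorter) frame when the signal is shorter than frame_length.
    for start in range(0, max(1, len(samples) - frame_length + 1), hop_length):
        frame = samples[start:start + frame_length]
        rms_values.append(math.sqrt(sum(x * x for x in frame) / len(frame)))
    return rms_values
```

In practice a library such as librosa supplies this alongside the other extractors (`librosa.feature.rms`, `librosa.feature.mfcc`, `librosa.feature.chroma_stft`, `librosa.feature.spectral_contrast`); the per-frame features are then stacked into the matrix that the CNN-LSTM consumes, with the CNN reading spectral structure and the LSTM the frame-to-frame dynamics.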