This study aims to develop an identification model for Islamic religious sounds, specifically Takbir and Sholawat pronunciations, using audio signal processing and machine learning techniques. With the increasing need for intelligent systems capable of recognizing speech patterns in religious contexts, reliable audio classification methods become essential. This research utilizes Mel-Frequency Cepstral Coefficients (MFCC) to extract relevant spectral features from audio samples, representing the unique characteristics of Takbir and Sholawat utterances. The dataset consists of 300 audio recordings, comprising 200 Takbir and 100 Sholawat recordings. Each audio file is preprocessed and converted into a fixed-length MFCC feature vector of 13 coefficients, with parameters tuned to capture discriminative spectral characteristics, and is labeled accordingly. The feature vectors are split into training and testing sets using a 70:30 ratio. A Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel is trained on the training data to distinguish Takbir from Sholawat patterns based on their acoustic signatures. Performance is evaluated using accuracy, precision, recall, and F1-score metrics.
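As a rough illustration of the pipeline described above, the sketch below assumes librosa for MFCC extraction and scikit-learn for the SVM classifier and evaluation. The directory layout, label names, mean-pooling of MFCC frames into a fixed-length vector, and the random seed are illustrative assumptions, not the study's exact configuration.

```python
# Minimal sketch of the described pipeline (assumed libraries: librosa, scikit-learn).
# File paths, labels, and mean-pooling over MFCC frames are illustrative assumptions.
import glob
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def mfcc_vector(path, n_mfcc=13):
    """Load one audio file and reduce its MFCCs to a fixed-length vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (13, n_frames)
    return mfcc.mean(axis=1)  # mean-pool over frames -> 13-dimensional vector

# Hypothetical directory layout: data/takbir/*.wav and data/sholawat/*.wav
X, y = [], []
for label, pattern in [(0, "data/takbir/*.wav"), (1, "data/sholawat/*.wav")]:
    for path in glob.glob(pattern):
        X.append(mfcc_vector(path))
        y.append(label)
X, y = np.array(X), np.array(y)

# 70:30 train/test split, then an SVM with an RBF kernel, as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42
)
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Accuracy, precision, recall, and F1-score on the held-out test set
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["takbir", "sholawat"]))
```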
                        
                        
                        
                        
                            