Implementation of Mel Frequency Cepstral Coefficient and Dynamic Time Warping For Bird Sound Classification Prapcoyo, Hari; Adhita Putra, Bertha Pratama; Perwira, Rifki Indra
SENATIK STT Adisutjipto Vol 5 (2019): Peran Teknologi untuk Revitalisasi Bandara dan Transportasi Udara [ISBN 978-602-52742-
Publisher : Institut Teknologi Dirgantara Adisutjipto

DOI: 10.28989/senatik.v5i0.326

Abstract

Lovebird (Agapornis) is a type of bird that has recently become a favourite among new pet-bird keepers. Hobbyists are drawn to this songbird because the lovebird has a unique chirp. For beginner lovebird enthusiasts, a lack of knowledge and experience often leads to fraud when choosing a quality lovebird; buyers are disappointed when an expensive lovebird does not match what was expected. Lovebird chirping can be learned and recognized through speaker recognition, a branch of voice recognition. Speaker recognition captures the frequency of the lovebird's voice and compares it with the sound frequencies of existing training data. The sound frequency and chirp duration of the lovebird are extracted using the Mel-Frequency Cepstral Coefficient (MFCC) method. The Mel-frequency cepstral coefficients of the input data and the training data are then compared using the Dynamic Time Warping (DTW) method. The methodology used in this study is the grapple method. The results of this study show a sound-validation accuracy of 80%. It is hoped that this system can help bird-chirping enthusiasts identify lovebirds with good, moderate, and poor sound quality. It can also assist juries in bird-chirping contests by serving as an accurate standard for classifying lovebird sounds.
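
A minimal sketch of the MFCC + DTW pipeline described in this abstract, assuming librosa is available; the file names, the number of coefficients (13), and the nearest-neighbour decision rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: MFCC extraction and DTW comparison of bird-call recordings.
import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    """Load a recording and return its MFCC matrix (n_mfcc x frames)."""
    y, sr = librosa.load(path)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def dtw_cost(mfcc_a, mfcc_b):
    """Accumulated Dynamic Time Warping cost between two MFCC sequences."""
    D, _ = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b, metric='euclidean')
    return D[-1, -1]

# Classify an input chirp by the label of its closest training recording.
training = {'train_good.wav': 'good',          # hypothetical file names
            'train_moderate.wav': 'moderate',
            'train_poor.wav': 'poor'}
query = mfcc_features('input_lovebird.wav')    # hypothetical input file
best_label, best_cost = None, np.inf
for path, label in training.items():
    cost = dtw_cost(query, mfcc_features(path))
    if cost < best_cost:
        best_label, best_cost = label, cost
print(f'Closest class: {best_label} (DTW cost {best_cost:.1f})')
```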
Implementasi Perancangan dan Pemeliharaan Jaringan Internet Menuju Smart School pada MA Raden Fattah (Implementation of Internet Network Design and Maintenance Toward a Smart School at MA Raden Fattah) Ahmad Taufiq Akbar; Bagus Muhammad Akbar; Shoffan Saifullah; Andiko Putro Suryotomo; Rochmat Husaini; Hari Prapcoyo
Masyarakat Berkarya : Jurnal Pengabdian dan Perubahan Sosial Vol. 2 No. 1 (2025): Februari
Publisher : Lembaga Pengembangan Kinerja Dosen

DOI: 10.62951/karya.v2i1.1079

Abstract

The internet network is one of the fields in informatics and electronics engineering that is growing rapidly because of Industry 4.0, which is increasingly tied to cloud computing and the Internet of Things. Without resources and knowledge about computer networks, the Internet of Things and cloud computing are nearly impossible to design. Computer networks provide the internet access that every institution, and indeed communities worldwide, depend on. This is especially true for educational institutions such as Madrasah Aliyah (MA) Raden Fattah in Kalasan, Yogyakarta, which during the Covid-19 pandemic faced the disruption of moving from offline to online learning. To answer the demands of the times, MA Raden Fattah is very enthusiastic about developing the institution toward a quality smart school. The network infrastructure available at MA Raden Fattah had not been optimized, so through this community service program, network design and management were carried out to meet the need for access points serving students and teachers. The program succeeded in increasing the number of access points, optimizing the management of internet network resources at MA Raden Fattah, and improving the quality of teaching and learning services at the institution.
Klasifikasi Ekspresi Wajah Menggunakan Convolutional Neural Network (Facial Expression Classification Using a Convolutional Neural Network) Akbar, Ahmad Taufiq; Saifullah, Shoffan; Prapcoyo, Hari
Jurnal Teknologi Informasi dan Ilmu Komputer Vol 11 No 6: Desember 2024
Publisher : Fakultas Ilmu Komputer, Universitas Brawijaya

DOI: 10.25126/jtiik.1168888

Abstract

Facial expression recognition is a significant challenge in image processing and human-computer interaction due to its inherent complexity and variability. This study proposes a simple Convolutional Neural Network (CNN) architecture to enhance the efficiency of emotion classification on small datasets. The JAFFE dataset used here consists of 213 images of 256x256 pixels across seven expression categories. These images were resized to 128x128 pixels to accelerate processing. The data was processed using a CNN architecture comprising 3 convolutional layers, 2 subsampling layers, and 2 dense layers. We evaluated the model with 5-fold and 10-fold cross-validation for robust performance estimation, and with hold-out splits (70:30, 80:20, 85:15, and 90:10) for clear result comparison. The results indicated a highest accuracy of 90.6% with a learning rate of 0.001 on the 85% training / 15% testing split, surpassing more complex models. Although the model does not employ transfer learning or data augmentation, it still outperforms traditional approaches such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). Thus, this simple CNN architecture proves effective for facial expression recognition on small datasets.
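
A rough sketch of a network matching the description in this abstract (3 convolutional layers, 2 pooling/subsampling layers, 2 dense layers, 128x128 inputs, 7 classes, learning rate 0.001), written in Keras; the filter counts and kernel sizes are assumptions, not the authors' exact configuration.

```python
# Illustrative sketch of a small CNN for 7-class facial expression recognition.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation='relu'),      # conv layer 1
        layers.MaxPooling2D((2, 2)),                        # subsampling layer 1
        layers.Conv2D(64, (3, 3), activation='relu'),       # conv layer 2
        layers.MaxPooling2D((2, 2)),                        # subsampling layer 2
        layers.Conv2D(128, (3, 3), activation='relu'),      # conv layer 3
        layers.Flatten(),
        layers.Dense(128, activation='relu'),               # dense layer 1
        layers.Dense(num_classes, activation='softmax'),    # dense layer 2
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model = build_model()
model.summary()
```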
EfficientNet B0 Feature Extraction with L2-SVM Classification for Robust Facial Expression Recognition Akbar, Ahmad Taufiq; Saifullah, Shoffan; Prapcoyo, Hari; Rustamadji, Heru; Cahyana, Nur Heri
Journal of Information System and Informatics Vol 7 No 2 (2025): June
Publisher : Universitas Bina Darma

DOI: 10.51519/journalisi.v7i2.1071

Abstract

Facial expression recognition (FER) remains a challenging task due to the subtle visual variations between emotional categories and the constraints of small, controlled datasets. Traditional deep learning approaches often require extensive training, large-scale datasets, and data augmentation to achieve robust generalization. To overcome these limitations, this paper proposes a hybrid FER framework that combines EfficientNet B0 as a deep feature extractor with an L2-regularized Support Vector Machine (L2-SVM) classifier. The model is designed to operate effectively on limited data without the need for end-to-end fine-tuning or augmentation, offering a lightweight and efficient solution for resource-constrained environments. Experimental results on the JAFFE and CK+ benchmark datasets demonstrate the proposed method's strong performance, achieving up to 100% accuracy across various hold-out splits (90:10, 80:20, 70:30) and 99.8% accuracy under 5-fold cross-validation. Evaluation metrics including precision, recall, and F1-score consistently exceeded 95% across all emotion classes. Confusion matrix analysis revealed perfect classification of high-intensity emotions such as Happiness and Surprise, while minor misclassifications occurred in more ambiguous expressions like Fear and Sadness. These results validate the model's generalization ability, efficiency, and suitability for real-time FER tasks. Future work will extend the framework to in-the-wild datasets and incorporate model explainability techniques to improve interpretability in practical deployment.
Keywords: Facial Expression Recognition, EfficientNet, SVM, Deep Features, Emotion Classification
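
A minimal sketch of the kind of pipeline this abstract describes: a frozen, ImageNet-pretrained EfficientNet B0 used as a feature extractor, followed by an L2-regularized linear SVM. The input size, scaling step, and C value are assumptions, and image loading / train-test splitting are left to the caller; this is not the authors' code.

```python
# Illustrative sketch: EfficientNet B0 deep features + L2-SVM classifier.
import tensorflow as tf
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Pretrained backbone without the classification head; global average pooling
# turns each image into a 1280-dimensional feature vector.
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights='imagenet', pooling='avg',
    input_shape=(224, 224, 3))
backbone.trainable = False  # no end-to-end fine-tuning

def extract_features(images):
    """images: array of shape (N, 224, 224, 3) with pixel values in 0-255."""
    x = tf.keras.applications.efficientnet.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# LinearSVC uses an L2 penalty (with squared hinge loss) by default,
# i.e. an L2-regularized SVM classifier.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))

# Hypothetical usage, assuming X_train/X_test/y_train/y_test exist:
# clf.fit(extract_features(X_train), y_train)
# print('Accuracy:', clf.score(extract_features(X_test), y_test))
```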