Helmy Dzulfikar
Program Studi Sistem Informasi, Universitas Siliwangi, Tasikmalaya, Indonesia

The Comparison of Audio Analysis Using Audio Forensic Technique and Mel Frequency Cepstral Coefficient Method (MFCC) as the Requirement of Digital Evidence
Helmy Dzulfikar; Sisdarmanto Adinandra; Erika Ramadhani
JOIN (Jurnal Online Informatika) Vol 6 No 2 (2021)
Publisher: Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v6i2.702

Abstract

Audio forensics is the application of science and scientific methods to the handling of digital evidence in the form of audio. In this regard, audio supports the disclosure of various criminal cases and reveals information needed in the trial process. So far, research on audio forensics has focused mainly on human voices recorded directly, either with a voice recorder or with voice-recording applications on smartphones available through Google Play or the iOS App Store. This study compares the analysis of live voices (human voices) with artificial voices from Google Voice and other artificial voices. It applies audio forensic analysis with pitch, formant, and spectrogram as parameters, and it also analyses the data through feature extraction with the Mel Frequency Cepstral Coefficient (MFCC) method, the Dynamic Time Warping (DTW) method, and the K-Nearest Neighbor (KNN) algorithm. The previously made live voice recording and the artificial voice are cut into words, and each chunk of the recording is then tested. Testing with audio forensic techniques in the Praat application identified similar words between the live and artificial voices with an accuracy of 40.74%, while testing with the MFCC, DTW, and KNN methods in a system built in Matlab identified similar words between the live and artificial voices with an accuracy of 33.33%.
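As an illustration of the pipeline this abstract describes (and not the authors' Matlab implementation), a minimal Python sketch of MFCC extraction, DTW alignment, and KNN matching on per-word audio chunks might look like the following; the file names and word labels are hypothetical.

```python
# Minimal sketch of an MFCC + DTW + KNN comparison pipeline (illustrative only).
# Assumes per-word WAV chunks; file names and labels below are hypothetical.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load a word-length audio chunk and return its MFCC sequence and mean MFCC vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return mfcc, mfcc.mean(axis=1)                          # frame-level and averaged features

def dtw_distance(mfcc_a, mfcc_b):
    """Cumulative DTW cost between two MFCC sequences (lower = more similar)."""
    D, _ = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b, metric="euclidean")
    return D[-1, -1]

# Hypothetical reference set: live-voice word chunks with their transcribed labels.
ref_files = ["live_word_01.wav", "live_word_02.wav"]
ref_labels = ["bukti", "suara"]
ref_seq, ref_vec = zip(*(mfcc_features(f) for f in ref_files))

# KNN over averaged MFCC vectors (k=1 gives a nearest-word lookup).
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(np.vstack(ref_vec), ref_labels)

# Compare one artificial-voice chunk against the live-voice references.
test_seq, test_vec = mfcc_features("google_voice_word_01.wav")
print("KNN match:", knn.predict(test_vec.reshape(1, -1))[0])
print("DTW costs:", [dtw_distance(test_seq, s) for s in ref_seq])
```

A word from the artificial voice would be counted as "similar" when its nearest reference label matches and its DTW cost falls below a chosen threshold; the accuracy figures in the abstract come from repeating such a comparison over all word chunks.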
IoT Application in a Vision-Based Security System at the At-Taqwa Mosque in Cijulang
Nurul Hiron; Rian Nurdiansyah; M. Aris Risnandar; Andri Ulus R; Helmy Dzulfikar; Aldy Putra Aldya
ABDIMAS: Jurnal Pengabdian Masyarakat Vol. 8 No. 1 (2025): ABDIMAS UMTAS: Jurnal Pengabdian Kepada Masyarakat
Publisher: LPPM Universitas Muhammadiyah Tasikmalaya

DOI: 10.35568/abdimas.v8i1.5564

Abstract

The use of Internet of Things (IoT) technology has become a trend in many areas of life. In the context of security and monitoring, IoT applications can provide innovative solutions that improve surveillance and protection. This community service program applied the IoT concept in a vision-based security system at the At-Taqwa Cijulang Mosque. By integrating sensors and vision devices connected to an IoT network, the program offers a more effective and efficient way to monitor mosque security. Motion and presence sensors were installed to detect suspicious activity within the mosque environment, and the vision-based security system uses cameras connected to the IoT network to monitor and identify people and activities in the mosque area. In addition, a mobile application was developed to give mosque administrators and the local community real-time monitoring access. The results of the program show improved security around the mosque as well as greater security awareness in the community around the At-Taqwa Cijulang Mosque.
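To make the described architecture concrete, a minimal sketch of how a sensor node might report a motion event to a monitoring service that the mobile application reads is shown below; this is an illustrative assumption, not code from the program, and the endpoint URL and payload fields are hypothetical.

```python
# Illustrative sketch only: a motion-sensor node reporting an event to a
# hypothetical monitoring service that the administrators' app polls.
import json
import time
import urllib.request

MONITORING_ENDPOINT = "http://192.168.1.10:8000/api/events"  # hypothetical local server

def report_motion_event(camera_id: str, confidence: float) -> None:
    """Send a JSON-encoded motion event so administrators can see it in near real time."""
    payload = {
        "device": camera_id,
        "event": "motion_detected",
        "confidence": confidence,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        MONITORING_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("server replied:", resp.status)

# Example call from a camera node when motion is detected:
# report_motion_event("mosque-entrance-cam", confidence=0.92)
```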