Anif Hanifa Setianingrum
Unknown Affiliation

Published : 3 Documents

Articles

Sinyal Elektroensefalografi Untuk Deteksi Emosi Saat Mendengar Stimulus Pembacaan Al-Quran Menggunakan Wavelet Transform Hulliyah, Khodijah; Setianingrum, Anif Hanifa; Santoso, William
Technomedia Journal Vol 8 No 2 Special Issues (2023): Special Issue: Sistem Informasi Manajemen Dalam Menunjang Teknolog
Publisher : Pandawan Incorporation, Alphabet Incubator Universitas Raharja

DOI: 10.33050/tmj.v8i2SP.2060

Abstract

Listening to recitation of the Qur'an (murottal) is widely used to create a relaxed atmosphere. In this study, we therefore investigate the extent to which murottal audio stimulation affects the appearance of alpha waves in brain activity, recorded with an Electroencephalography (EEG) signal detector and analyzed using the Wavelet Transform. The brain waves detected in the EEG signal were analyzed for each wave phase in the alpha frequency band (8-13 Hz) to identify a relaxed state. We recorded EEG wave data under four conditions: a calm (normal) condition, a tense (spike) condition, and both conditions again with a murottal audio stimulus. Each condition was recorded for 2 minutes, and the murottal recordings were selected at random to obtain varied data. Classification results using a Recurrent Neural Network (RNN) show that training on normal versus spike data achieved 52%-59% accuracy; normal versus normal-murottal yielded 55%-56%; normal versus spike-murottal obtained the lowest accuracy at 35%-46%; spike versus normal-murottal achieved 57%-67%; spike versus spike-murottal yielded 51%-60%; and normal-murottal versus spike-murottal achieved the highest accuracy at 78%. This indicates that listening to Qur'anic murottal has a significant effect.
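As a rough illustration of how a discrete wavelet transform can isolate the alpha band, the sketch below applies a multi-level Haar DWT (an assumption for illustration only; the paper does not state which wavelet family was used) to a sampled signal. At a 128 Hz sampling rate (also an assumed value), the level-3 detail coefficients cover roughly 8-16 Hz, which brackets the 8-13 Hz alpha band analyzed above.

```python
import math

def haar_dwt_step(x):
    """One Haar DWT level: split a signal into approximation
    (low-pass, lower half of the band) and detail (high-pass,
    upper half of the band) coefficients."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_bands(signal, levels=3):
    """Repeatedly decompose the approximation coefficients; with
    fs = 128 Hz the level-3 detail spans ~8-16 Hz, approximating
    the alpha band."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_dwt_step(approx)
        details.append(d)
    return approx, details

def band_energy(coeffs):
    """Energy of a sub-band: sum of squared coefficients."""
    return sum(c * c for c in coeffs)

# Toy usage: a 1-second, 128 Hz signal containing a 10 Hz (alpha-range) tone.
fs = 128
signal = [math.sin(2 * math.pi * 10 * n / fs) for n in range(fs)]
approx, details = wavelet_bands(signal, levels=3)
alpha_like_energy = band_energy(details[2])  # level-3 detail, ~8-16 Hz
```

Because the Haar transform is orthonormal, the signal's energy is partitioned across the sub-bands, so a tone inside the alpha range concentrates its energy in the level-3 detail coefficients rather than the higher-frequency details.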
Feature Extraction Using Mel-Frequency Cepstral Coefficients (MfCC) Technique For A Tajweed Guess Based on Android Application Development Hulliyah, Khodijah; Kultsum, Lilik Ummi; Wibowo, Wahyu Hendarto; Setianingrum, Anif Hanifa; Arini, Arini; Durachman, Yusuf
JURNAL TEKNIK INFORMATIKA Vol. 18 No. 1: JURNAL TEKNIK INFORMATIKA
Publisher : Department of Informatics, Universitas Islam Negeri Syarif Hidayatullah

DOI: 10.15408/jti.v18i1.44721

Abstract

The development of information and communication technology today has had a significant impact on various aspects of life, including education. One notable example is the increasing number of applications designed for learning to recite the Quran with proper tartil. The growing trend of tahfidz (Quran memorization) is undoubtedly a positive development from a religious perspective. However, many individuals focus solely on memorization without acquiring the ability to recite the Quran properly and accurately. One discipline that supports proper Quran recitation is the knowledge of tajweed. Numerous applications have been developed in this field, especially on Android platforms. However, applications that utilize artificial intelligence (AI) to recognize tajweed rules and involve users in guessing tajweed readings are still in need of further development. The aim of this research is to develop a tajweed learning application using the concept of Automatic Speech Recognition (ASR). This study employs data collection methods such as literature review, quantitative methods, and testing. The design is represented using Unified Modeling Language (UML), while the application is tested using the Black Box Testing method. For data analysis and testing of the speech recognition model, the Hidden Markov Model (HMM) algorithm is employed, with Mel-Frequency Cepstral Coefficients (MFCC) used for feature extraction. The output of this research is an Android-based tajweed learning application that integrates speech recognition and allows users to guess tajweed rules interactively.
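The MFCC front end used here rests on mapping linear frequency to the perceptual mel scale before cepstral analysis. A minimal sketch of that mapping follows, using the common 2595·log10(1 + f/700) formula; the paper does not specify which variant of the mel formula it uses, so this is an illustrative assumption.

```python
import math

def hz_to_mel(f_hz):
    """Map frequency in Hz to the mel scale (O'Shaughnessy formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping, used to place triangular mel filterbank edges."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(f_low, f_high, n_filters):
    """Center frequencies (Hz) of n_filters triangular filters spaced
    evenly on the mel scale between f_low and f_high."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + step * (i + 1)) for i in range(n_filters)]

# Toy usage: 26 filter centers across 0-8000 Hz, a typical speech setup.
centers = mel_filter_centers(0.0, 8000.0, 26)
```

Because the mel scale is roughly logarithmic above ~1 kHz, the filter centers cluster densely at low frequencies and spread out at high ones, mirroring human pitch perception; the cepstral coefficients are then taken from the log energies of these filters.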
A Better Performance of GAN Fake Face Image Detection Using Error Level Analysis-CNN Siregar, Maria Ulfah; Nurochman, Nurochman; Setianingrum, Anif Hanifa; Larasati, Dwi; Santoso, William; Stefany, Meisia Dhea
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2698

Abstract

The use of face images is well established in various fields, including security, finance, education, and social security. At the same time, modern scientific and technological advances make it easier for individuals to manipulate images, including those of faces. One such advancement, the Generative Adversarial Network (GAN), can generate fake images that closely resemble real ones. This work proposes an error level analysis (ELA) algorithm combined with a convolutional neural network (CNN) to detect manipulated images generated by GANs. Two scenarios are compared: a stand-alone CNN, and a combination of ELA and a CNN. The combined scenario further has three sub-scenarios based on the compression level used in the ELA step: 10%, 50%, and 90%. After training on data obtained from a public source, it is evident that combining a CNN with ELA compression improves the model's overall performance across accuracy, precision, recall, and other metrics. Based on the evaluation results, the highest-quality CNN training was obtained with 50% ELA compression, achieving 94% accuracy, 93.3% precision, 94.9% recall, a 94.1% F1 score, a 98.7% ROC-AUC score, and a 98.8% AP score. This research is expected to serve as a reference for detecting real versus GAN-generated fake images.
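The evaluation metrics reported above (accuracy, precision, recall, F1 score) all follow directly from the classifier's confusion matrix on the fake-vs-real test set. A small sketch of those formulas; the counts below are hypothetical, not the paper's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts:
    tp/fp = fake images (correctly / wrongly) flagged as fake,
    fn/tn = fake images missed / real images correctly passed."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical counts for a fake-face detector on 200 test images.
m = classification_metrics(tp=90, fp=6, fn=5, tn=99)
```

F1 is the harmonic mean of precision and recall, which is why the paper reports it alongside both: it penalizes a detector that flags everything as fake (high recall, low precision) as well as one that flags almost nothing (the reverse).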