Articles

Found 12 Documents

Multiscale Retinex Application to Analyze Face Recognition
Supriyanto, Supriyanto; Harika, Maisevli; Ramadiani, Maya Sri; Ramdania, Diena Rauda
JOIN (Jurnal Online Informatika) Vol. 5 No. 2 (2020)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v5i2.668

Abstract

The main challenge in facial recognition is uneven or overly dark lighting: a poorly lit image makes it difficult for the system to recognize a face. This study normalizes the lighting in the image using the Multiscale Retinex method. The method is applied within a face recognition system based on Principal Component Analysis to determine whether it effectively improves images with uneven lighting. The results showed that applying Multiscale Retinex raised face recognition accuracy from 40% to 76%. Multiscale Retinex is especially well suited to dark facial images because it produces a brighter output image.
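The illumination normalization described in the abstract can be sketched as below. This is an illustrative implementation of the standard Multiscale Retinex formula (the average, over several Gaussian scales, of the log difference between the image and its smoothed illumination estimate), not the authors' code; the scale values (15, 80, 250) are common defaults assumed here, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250)):
    """Multiscale Retinex: average of single-scale Retinex outputs,
    each the log difference between the image and a Gaussian-blurred
    estimate of the illumination."""
    img = image.astype(np.float64) + 1.0  # offset avoids log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        msr += np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
    msr /= len(sigmas)
    # Rescale to 0..255 for display
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12) * 255.0
    return msr.astype(np.uint8)

# Synthetic unevenly lit "face": dark on the left, bright on the right
uneven = np.tile(np.linspace(20, 200, 64), (64, 1))
out = multiscale_retinex(uneven)
```

Because the Gaussian blur approximates the slowly varying illumination, subtracting its log leaves mostly reflectance detail, which is why dark regions come out brighter and more uniform.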
Automatic Detection of Hijaiyah Letters Pronunciation using Convolutional Neural Network Algorithm
Gerhana, Yana Aditia; Azis, Aaz Muhammad Hafidz; Ramdania, Diena Rauda; Dzulfikar, Wildan Budiawan; Atmadja, Aldy Rialdy; Suparman, Deden; Rahayu, Ayu Puji
JOIN (Jurnal Online Informatika) Vol. 7 No. 1 (2022)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v7i1.882

Abstract

Speech recognition technology is used in learning to read the letters of the Qur'an. This study implements the CNN algorithm to recognize the pronunciation of hijaiyah letters. The pronunciation audio is converted to features using the Mel-frequency cepstral coefficients (MFCC) model and then classified with a deep learning model based on the CNN algorithm. The system was developed using the CRISP-DM model. Testing on 616 voice recordings covering the 28 hijaiyah letters yielded a best accuracy of 62.45%, precision of 75%, recall of 50%, and F1-score of 58%.
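The classification stage in the pipeline above (MFCC features fed to a CNN that scores the 28 letter classes) can be sketched as a single convolution-plus-softmax forward pass. Everything here is an illustrative assumption, not the authors' model: the 13x40 MFCC shape, the four 3x3 filters, and the random (untrained) weights; real MFCC extraction and network training are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(x, kernels):
    """Valid 2-D convolution of a single-channel input with a bank of
    kernels, followed by a ReLU activation."""
    kh, kw = kernels.shape[1:]
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], h, w))
    for k, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kern)
    return np.maximum(out, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

mfcc = rng.normal(size=(13, 40))       # stand-in for 13 MFCCs x 40 frames
kernels = rng.normal(size=(4, 3, 3))   # 4 untrained 3x3 filters
features = conv2d_relu(mfcc, kernels).reshape(-1)
weights = rng.normal(size=(28, features.size)) * 0.01
probs = softmax(weights @ features)    # one probability per hijaiyah letter
```

In a trained model the filters and weights would be learned from the 616 labelled recordings, and the predicted letter would be `probs.argmax()`.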