Articles

Found 4 Documents
Automatic Vocal Completion for Indonesian Language Based on Recurrent Neural Network Prasetiadi, Agi; Dwi Sripamuji, Asti; Riski Amalia, Risa; Saputra, Julian; Ramadhanti, Imada
IT Journal Research and Development Vol. 9 No. 1 (2024)
Publisher : UIR PRESS

DOI: 10.25299/itjrd.2024.14171

Abstract

Most Indonesian social media users under the age of 25 use a variety of words, now often referred to as slang, including abbreviations, when communicating. This variation also poses challenges for the natural language processing of Indonesian. Previous researchers improved a Recurrent Neural Network to correct errors at the character level, achieving an accuracy of 83.76%. This study aims to normalize abbreviated Indonesian words into complete words using Recurrent Neural Networks in the form of Bidirectional Long Short-Term Memory and Gated Recurrent Unit models. The dataset is built with several weight configurations from 3-Gram to 6-Gram, consisting of words without vowels paired with complete words with vowels. Our model is the first model in the world that attempts to restore incomplete Indonesian words so that they eventually become fully lettered sentences, with an accuracy of 97.44%.
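
A minimal, hedged sketch of the kind of character-level Bidirectional LSTM described above: it maps a devoweled Indonesian word back to its fully voweled form. The character set, padding lengths, layer sizes, and toy word pairs are illustrative assumptions, not the authors' actual configuration.

    # Sketch only: character-level BiLSTM for restoring vowels in Indonesian words.
    # Vocabulary, sequence lengths, and the toy training pairs are assumptions.
    import numpy as np
    from tensorflow.keras import layers, models

    CHARS = "abcdefghijklmnopqrstuvwxyz "              # assumed character set (plus space)
    PAD_IN, PAD_OUT = 12, 16                            # assumed max input/output lengths
    char2idx = {c: i + 1 for i, c in enumerate(CHARS)}  # index 0 is reserved for padding

    def encode(word, length):
        ids = [char2idx[c] for c in word][:length]
        return ids + [0] * (length - len(ids))

    # Toy pairs: (word without vowels, complete word with vowels).
    pairs = [("slmt", "selamat"), ("mkn", "makan"), ("trm ksh", "terima kasih")]
    X = np.array([encode(src, PAD_IN) for src, _ in pairs])
    Y = np.array([encode(tgt, PAD_OUT) for _, tgt in pairs])[..., None]

    model = models.Sequential([
        layers.Input(shape=(PAD_IN,)),
        layers.Embedding(len(CHARS) + 1, 64),
        layers.Bidirectional(layers.LSTM(128)),          # encode the consonant skeleton
        layers.RepeatVector(PAD_OUT),                    # expand to the output length
        layers.LSTM(128, return_sequences=True),         # decode the fully voweled word
        layers.TimeDistributed(layers.Dense(len(CHARS) + 1, activation="softmax")),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X, Y, epochs=10, verbose=0)                # a real run needs the full n-gram corpus

Swapping layers.LSTM for layers.GRU would give the Gated Recurrent Unit variant mentioned in the abstract.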
YOLOv5 and U-Net-based Character Detection for Nusantara Script Prasetiadi, Agi; Saputra, Julian; Kresna, Iqsyahiro; Ramadhanti, Imada
JOIN (Jurnal Online Informatika) Vol 8 No 2 (2023)
Publisher : Department of Informatics, UIN Sunan Gunung Djati Bandung

DOI: 10.15575/join.v8i2.1180

Abstract

Indonesia boasts a diverse range of indigenous scripts, called Nusantara scripts, which encompass the Bali, Batak, Bugis, Javanese, Kawi, Kerinci, Lampung, Pallava, Rejang, and Sundanese scripts. However, prevailing character detection techniques predominantly cater to Latin or Chinese scripts. Extending our prior work, which concentrated on classifying script types and recognizing characters within Nusantara script systems, this study integrates object detection using the YOLOv5 model and enhances performance by incorporating a U-Net model to pinpoint the locations of fundamental Nusantara script characters within input document images. We then investigate rearranging these character positions in alignment with the distinctive styles of Nusantara scripts. Experimental results show that YOLOv5 yields a loss of approximately 0.05 in character location detection, while the U-Net model achieves an accuracy ranging from 75% to 90% when predicting character regions. Although YOLOv5 does not flawlessly detect all Nusantara scripts, integrating the U-Net model enhances the detection rate by 1.2%.
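
As a rough illustration of the pipeline described above, the sketch below combines YOLOv5 box proposals with a U-Net character-region mask and then sorts the surviving boxes into an approximate reading order. The weight file names, the unet model object, the mask threshold, and the row-bucket heuristic are assumptions for illustration, not the authors' implementation.

    # Hedged sketch: filter YOLOv5 character boxes with a U-Net region mask,
    # then order the survivors roughly top-to-bottom, left-to-right.
    import numpy as np
    import torch

    yolo = torch.hub.load("ultralytics/yolov5", "custom", path="nusantara_yolov5.pt")  # hypothetical weights
    unet = torch.load("nusantara_unet.pt", map_location="cpu")                         # hypothetical U-Net
    unet.eval()

    def detect_characters(image_rgb: np.ndarray):
        boxes = yolo(image_rgb).xyxy[0].cpu().numpy()          # rows: [x1, y1, x2, y2, conf, cls]
        with torch.no_grad():
            x = torch.from_numpy(image_rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            mask = torch.sigmoid(unet(x))[0, 0].numpy() > 0.5  # predicted character regions
        kept = []
        for x1, y1, x2, y2, conf, cls in boxes:
            cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
            if mask[cy, cx]:                                   # keep boxes whose centers fall in a region
                kept.append((x1, y1, x2, y2, conf, cls))
        # Approximate reading order: bucket by row, then sort left to right.
        kept.sort(key=lambda b: (round(b[1] / 40), b[0]))      # 40 px row height is an assumption
        return kept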
CLOTHING RECOMMENDATION AND FACE SWAP MODEL BASED ON VGG16, AUTOENCODER, AND FACIAL LANDMARK POINTS Ramadhanti, Imada; Prasetiadi, Agi; Kresna A, Iqsyahiro
Jurnal Teknik Informatika (Jutif) Vol. 5 No. 1 (2024): JUTIF Volume 5, Number 1, February 2024
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2024.5.1.1016

Abstract

Clothing selection in e-commerce sometimes involves doubt because consumers cannot yet know whether the chosen clothes will suit their body. This research therefore provides a solution through a clothing recommendation model based on clothing size and concept, together with a face-swap model that exchanges the consumer's face with the face shown on the recommended clothing. The dataset used for the classification model consists of clothing grouped into 8 classes with variations in size, clothing concept, and with or without a headscarf, while the autoencoder models require source and target face datasets of 3,000 faces each. The clothing recommendation model uses VGG16, and the face-swap model uses autoencoder and facial landmark point methods. The classification models with 2 different architectures obtain accuracies of 97.01% and 94.49%, respectively. Among the 12 autoencoder models, the lowest loss values are 0.00012951 for autoencoder I and 8.01e-05 for autoencoder II. The facial landmark point method is used when the autoencoder method does not produce a good face swap.
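
The clothing-classification half of the pipeline lends itself to a short transfer-learning sketch. Assuming an ImageNet-pretrained VGG16 backbone, the 8 clothing classes from the abstract, and illustrative input and dense-layer sizes (none of which are confirmed by the paper), it might look like the following; the face-swap half, which trains convolutional autoencoders on the 3,000-face source and target sets, is omitted here.

    # Sketch only: VGG16 transfer learning for the 8 clothing classes.
    # Input size, dense widths, and training details are assumptions.
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16

    NUM_CLASSES = 8                                  # size / concept / headscarf combinations

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                           # freeze the convolutional backbone

    clf = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    clf.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    # clf.fit(train_ds, validation_data=val_ds, epochs=...)   # datasets are not shown here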
Understanding of Deaf Students Using Interactive Media of the BISINDO Sign Speaking Song (Merakit-Yura Yunita) in Online Learning Alika, Shintia Dwi; Arifa, Amalia Beladinna; Sripamuji, Asti Dwi; Saputra, Julian; Amalia, Risa Riski; Ramadhanti, Imada
Jurnal Paedagogy Vol. 10 No. 4: Jurnal Paedagogy (October 2023)
Publisher : Universitas Pendidikan Mandalika

DOI: 10.33394/jp.v10i4.8845

Abstract

This research aims to analyze the understanding of deaf students using the interactive media of the BISINDO sign-speaking song (Merakit-Yura Yunita) in online learning. This research used a qualitative method. The subjects were deaf students at SLB Yakut B Purwokerto. The research instruments were observation and questionnaires. Data were analyzed using interactive analysis, which includes data reduction, data presentation, and drawing conclusions. The results showed that the interactive media of the BISINDO sign language song (Merakit-Yura Yunita) could be applied to online learning for deaf students at the school. Deaf students could gain an understanding of the content of the song. In addition, this media is expected to become an innovation in the online learning of deaf students.