Indonesia rupiah currency detection for visually impaired people using transfer learning VGG-19
Alfatikarani, Raissa; Suciningtyas, Laras; Bimasakti, Genta Garuda; Mardhatillah, Faqisna Putra; Paragas, Jessie R.; Tjahyaningtijas, Hapsari Peni Agustin
SINERGI Vol 29, No 1 (2025)
Publisher: Universitas Mercu Buana

DOI: 10.22441/sinergi.2025.1.022

Abstract

People with visual impairments often find it difficult to verify the authenticity and denomination of paper money, a crucial skill for avoiding fraud. Traditional aids, such as the tactile blind codes printed on banknotes, have limitations that call for a more advanced and efficient solution. Previous currency-detection methods based on Convolutional Neural Networks (CNNs), including the VGG-19 architecture, have faced challenges, particularly the long training times required. We therefore propose a transfer learning approach that modifies the top layers of the VGG-19 model, known as the fully connected layers, within a mobile application with audio feedback built in Android Studio. These modifications replace the three fully connected layers with a flatten layer followed by dense layers. We also performed hyperparameter tuning, adjusting the batch size and the number of epochs. The dataset consisted of Indonesian Rupiah paper currency from the 2022 emission year, specifically the Rp 50,000 and Rp 100,000 denominations. The best transfer learning VGG-19 model, trained with a batch size of 32 for 50 epochs, achieved a high accuracy of 88%. Response-speed testing with performance profiling in Android Studio showed an overall average response time of 458 ms, so the mobile app can be categorized as having a fast response time. The main advantage of transfer learning with the VGG-19 model is that it significantly reduces training time while still achieving high accuracy, differentiating this work from previous studies that trained from scratch, which is more time-consuming and resource-intensive.
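The head-replacement step described in the abstract can be sketched as follows, assuming a Keras implementation: the VGG-19 convolutional base is frozen and its original fully connected head is swapped for flatten and dense layers. The dense-layer width, dropout rate, and two-class softmax output (one class per denomination) are illustrative assumptions, not the authors' exact configuration:

```python
# Minimal sketch of transfer learning with VGG-19, assuming Keras.
# Layer sizes and hyperparameters are assumptions for illustration.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

def build_model(num_classes=2, input_shape=(224, 224, 3)):
    # In practice weights="imagenet" would load pretrained filters;
    # None is used here only to avoid a download in this sketch.
    base = VGG19(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # transfer learning: freeze the convolutional base

    model = models.Sequential([
        base,
        layers.Flatten(),                       # replaces the original FC head
        layers.Dense(256, activation="relu"),   # assumed dense-layer width
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # Rp 50,000 vs Rp 100,000
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# Training with the abstract's best hyperparameters would look like:
# model.fit(train_ds, validation_data=val_ds, batch_size=32, epochs=50)
```

Freezing the base is what yields the reduced training time the abstract reports: only the new head's weights are updated during fitting.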
Markerless Facial Reconstruction Motion Capture Using Triangulation Method
Alwali, Muhammad; Pambudi, Sevito Fernanda; Suciningtyas, Laras; Yuniarno, Eko Mulyanto
JAREE (Journal on Advanced Research in Electrical Engineering) Vol 9, No 2 (2025): July
Publisher: Department of Electrical Engineering ITS and FORTEI

DOI: 10.12962/jaree.v9i2.456

Abstract

Motion capture is a popular research topic, and one of its main applications is human face reconstruction. Demand for converting 2D images into 3D reconstructions continues to grow, especially in facial reconstruction, where progress centers on improving the accuracy of facial-position prediction. However, a significant gap remains in developing facial reconstruction technologies that can consistently convert 2D data to 3D with high accuracy, especially in scenarios involving dynamic facial expressions, diverse facial angles, and complex environmental conditions. We therefore developed an approach that uses the triangulation method for 3D face reconstruction in real-world settings. In the experiments, two cameras were used to obtain two sets of facial landmark coordinates so that the triangulation method could be applied for 3D face reconstruction. This research aims to develop a motion capture approach that accurately and efficiently transforms 2D data into 3D face models without the need for complex hardware. Its main contribution is a machine learning-based markerless motion capture technique designed to improve the accuracy of face-position prediction in 3D face reconstruction from 2D data in realistic environments. The method seeks to bridge the current technology gap by providing a more flexible and reliable solution, expanding the potential applications of motion capture in various fields without dependence on specialized hardware. Face reconstruction experiments using markerless motion capture and the triangulation method yielded RMSE values of 3.560839 for the eyes, 1.644749 for the nose, and 4.054638 for the lips.
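The two-camera triangulation step the abstract describes can be illustrated with standard linear (DLT) triangulation: given the same landmark seen in two calibrated views, solve a small homogeneous system for its 3D position. This is a generic textbook formulation, not the authors' exact implementation; the camera matrices and the "nose-tip" landmark below are made-up values for the demonstration:

```python
# Generic two-view DLT triangulation sketch (assumed formulation, not the
# paper's code). Camera intrinsics and the test landmark are illustrative.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from two views.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,)   pixel coordinates of the same landmark in each view
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the solution is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point with camera matrix P, returning pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: two identical cameras separated by a 0.1 m horizontal baseline.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

Xw = np.array([0.05, -0.02, 1.5])  # hypothetical nose-tip landmark, metres
X_hat = triangulate_point(P1, P2, project(P1, Xw), project(P2, Xw))
```

In this noiseless toy case the recovered point matches the ground truth exactly; with real landmark detections, the residual of this least-squares solve is what drives the per-region RMSE values the abstract reports.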