Journal : TEKNIK INFORMATIKA

Human Fall Motion Prediction: Fall Motion Forecasting and Detection with GRU
Andi Prademon Yunus; Amalia Beladinna Arifa; Yit Hong Choo
JURNAL TEKNIK INFORMATIKA, Vol 17, No 2
Publisher : Department of Informatics, Universitas Islam Negeri Syarif Hidayatullah

DOI: 10.15408/jti.v17i2.41027

Abstract

The human fall motion prediction system is a preventive tool aimed at reducing the risk of falls. In our research, we developed a deep learning model that uses pose estimation to track human body posture and integrated it with a Gated Recurrent Unit (GRU) to forecast human motion and predict falls. The GRU, an enhancement of the Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) models, offers improved memorization with more efficient memory usage and performance. Our study presents a human fall motion prediction system that combines the forecasting and classification of potential falls. The CAUCAFall dataset, which contains image sequences of single-person motion covering ten actions performed by ten actors, is used as the benchmark of this study. We employed the YOLOv8 Pose model to track the 2D human body pose as the input to our system. Evaluation on the CAUCAFall dataset demonstrates that the model achieved a Mean Per Joint Position Error (MPJPE) of 4.65 pixels from the ground truth, with a 70% accuracy rate in fall prediction. However, the model also exhibited a Mean Relative Error (MRE) of 0.3, indicating that 30% of the predictions were incorrect. These findings underscore the potential of the GRU-based system in fall prevention.
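The forecasting step described in the abstract can be illustrated with a minimal sketch: a GRU consumes a sequence of flattened 2D keypoints (17 keypoints from YOLOv8 Pose, so 34 values per frame), and its final hidden state is projected to a predicted next pose. The NumPy snippet below implements the standard GRU gate equations with untrained, randomly initialized weights; it is illustrative only, not the authors' model, and all dimensions and names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell implementing the standard gate equations."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        def w(rows, cols):
            return rng.normal(0.0, 0.1, (rows, cols))
        self.Wz, self.Uz = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim)
        self.Wr, self.Ur = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim)
        self.Wh, self.Uh = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_cand

def forecast_next_pose(pose_seq, cell, W_out):
    """Run the GRU over flattened 2D poses and project the final
    hidden state to a predicted next pose."""
    h = np.zeros(cell.Wz.shape[0])
    for x in pose_seq:
        h = cell.step(x, h)
    return W_out @ h

# 17 keypoints x 2 coordinates = 34 features per frame (hypothetical data).
rng = np.random.default_rng(1)
pose_seq = rng.normal(size=(10, 34))           # 10 observed frames
cell = GRUCell(input_dim=34, hidden_dim=64)
W_out = rng.normal(0.0, 0.1, (34, 64))         # hidden -> pose projection
next_pose = forecast_next_pose(pose_seq, cell, W_out)
print(next_pose.shape)  # (34,)
```

A trained version of such a model would learn the weight matrices from consecutive frame pairs; here they are random, so only the shapes of the computation are meaningful.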
Detection of Vulgarity in Anime Character: Implementation of Detection Transformer
Amalia Suciati; Dian Kartika Sari; Andi Prademon Yunus; Nuuraan Rizqy Amaliah
JURNAL TEKNIK INFORMATIKA, Vol 18, No 1
Publisher : Department of Informatics, Universitas Islam Negeri Syarif Hidayatullah

DOI: 10.15408/jti.v18i1.46064

Abstract

Vulgar and pornographic content has become a widespread issue on the internet, appearing in various fields including anime. Vulgar pornographic content in anime is not limited to the sexuality genre; anime from general genres such as action and adventure also contain vulgar visuals. The main focus of this research is the implementation of the Detection Transformer (DETR) object detection method to identify vulgar parts of anime characters, particularly female characters. DETR is a deep learning model designed for object detection tasks, adapting the attention mechanism of Transformers. The dataset consists of 800 images taken from popular anime, selected by viewership rankings, and augmented to a total of 1,689 images. The research involved training models with different backbones, specifically ResNet-50 and ResNet-101, each with dilated convolution applied at different stages. The results show that the DETR model with a ResNet-50 backbone and dilated convolution at stage 5 outperformed the other backbone and dilation configurations, achieving a mean Average Precision of 0.479 and of 0.875. Another finding is that while dilated convolution improves small object detection by enlarging the receptive field, applying it in early stages tends to reduce spatial detail and harm performance on medium and large objects. However, the primary focus of this research is not solely on achieving the highest performance but on exploring the potential of transformer-based models, such as DETR, for detecting vulgar content in anime. DETR benefits from its ability to understand spatial context through self-attention mechanisms, offering potential for further development with larger datasets, more complex architectures, or training at larger data scales.
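A distinctive part of DETR's set prediction is the one-to-one bipartite matching between predicted and ground-truth boxes during training. The toy sketch below, assuming boxes in normalized (cx, cy, w, h) format and using only the L1 box term of the matching cost, finds the minimum-cost assignment by brute force; DETR itself uses the Hungarian algorithm and adds classification and generalized-IoU terms, so this is an illustration of the idea, not the authors' implementation.

```python
import itertools
import numpy as np

def l1_cost(pred, gt):
    """Pairwise L1 distance between predicted and ground-truth boxes,
    the box-regression term of DETR's matching cost."""
    return np.abs(pred[:, None, :] - gt[None, :, :]).sum(-1)

def min_cost_matching(cost):
    """Exhaustive minimum-cost one-to-one assignment; fine for a
    handful of boxes (DETR uses the Hungarian algorithm instead)."""
    n_pred, n_gt = cost.shape
    best_total, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n_pred), n_gt):
        total = sum(cost[p, g] for g, p in enumerate(perm))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm  # best_perm[g] = prediction index matched to gt box g

# Hypothetical example: 3 predictions, 2 ground-truth boxes.
gt = np.array([[0.20, 0.20, 0.10, 0.10],
               [0.80, 0.80, 0.20, 0.20]])
pred = np.array([[0.79, 0.81, 0.20, 0.20],   # near gt[1]
                 [0.50, 0.50, 0.30, 0.30],   # background / no match
                 [0.21, 0.20, 0.10, 0.10]])  # near gt[0]
match = min_cost_matching(l1_cost(pred, gt))
print(match)  # (2, 0): gt[0] -> pred[2], gt[1] -> pred[0]
```

Predictions left unmatched (here the middle box) are trained toward the "no object" class, which is how DETR avoids the need for non-maximum suppression.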
Small Object Detection and Object Counting for Primary Roe Dataset Based on Yolo
Wahyu Andi Saputra; Nicolaus Euclides Wahyu Nugroho; Dany Candra Febrianto; Andi Prademon Yunus; Muhammad Azrino Gustalika; Yit Hong Choo
JURNAL TEKNIK INFORMATIKA, Vol 18, No 1
Publisher : Department of Informatics, Universitas Islam Negeri Syarif Hidayatullah

DOI: 10.15408/jti.v18i1.46063

Abstract

This research offers an initial exploration into the effectiveness of three variations of the YOLOv8 model (original, trimmed, and YOLOv8n.pt) in combination with two distinct datasets characterized by tight and loose distributions of roe, aimed at enhancing small object detection and counting accuracy. Using a primary roe dataset of 776 images, the research systematically compares these model-dataset configurations to identify the most effective combination for precise object detection. The experimental results reveal that the YOLOv8n.pt model combined with the loosely distributed dataset achieves the highest detection performance, with a mean Average Precision (mAP) of 53.86%. This outcome underscores the critical impact of both model selection and data distribution on detection accuracy in machine learning applications. The findings highlight the importance of tailored model and dataset synergies in optimizing detection tasks, particularly in complex scenarios involving small, densely clustered objects. This research contributes valuable insights into the strategic deployment of neural network architectures for refined object detection challenges.
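The counting step on top of a YOLO-style detector can be sketched as confidence filtering, greedy non-maximum suppression (NMS) to merge duplicate detections of the same roe, and then counting the survivors. The NumPy snippet below uses made-up boxes and thresholds and is an illustrative sketch of this common post-processing, not the study's actual pipeline.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def nms_count(boxes, scores, score_thr=0.25, iou_thr=0.5):
    """Filter by confidence, run greedy NMS, return the object count."""
    keep = scores >= score_thr
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(-scores)          # highest confidence first
    n_kept = 0
    while order.size:
        i, rest = order[0], order[1:]
        n_kept += 1
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]  # drop overlaps
    return n_kept

# Hypothetical detections: two boxes over the same roe, one separate.
boxes = np.array([[0., 0., 10., 10.],
                  [1., 1., 11., 11.],     # duplicate of the first
                  [20., 20., 30., 30.]])
scores = np.array([0.9, 0.8, 0.7])
count = nms_count(boxes, scores)
print(count)  # 2
```

For densely clustered small objects like tightly distributed roe, the IoU threshold matters: too low and adjacent distinct objects are merged, too high and duplicate detections inflate the count.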