Found 3 Documents
Journal : JOIV : International Journal on Informatics Visualization

Fermented and Unfermented Cocoa Beans for Quality Identification Using Image Features
Basri, Basri; Indrabayu, Indrabayu; Achmad, Andani; Areni, Intan Sari
JOIV : International Journal on Informatics Visualization Vol 8, No 3 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.3.2578

Abstract

Fermented cocoa beans are one of the high-quality requirements of the cocoa processing industry, and on an automated industrial scale, early identification of bean quality is essential. This study aims to identify the quality of cocoa beans based on fermented and unfermented characteristics. The analysis uses static images captured with a camera at distances of 5 cm, 10 cm, and 15 cm for both classes, with 500 images each. Features are extracted using the Histogram of Oriented Gradients (HOG) method and classified with a Support Vector Machine (SVM). Image analysis of both object classes also included a color transformation to show the dominant color pattern on the skin of the cocoa beans. The results show that fermented cocoa beans exhibit a color pattern and texture that tend to be darker and coarser than those of unfermented beans. Performance analysis using the Receiver Operating Characteristic (ROC) for both classes showed 100% accuracy at distances of 5 cm and 15 cm, but measured comprehensively in terms of Precision, Recall, and F1-Score, the best performance is at 15 cm. Based on the literature review conducted, these results compare favorably with prior work, enabling further research on conveyor models with real-time video data for automation systems.
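The HOG-plus-SVM pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: it computes a single whole-image gradient-orientation histogram in NumPy (full HOG additionally divides the image into cells and applies block normalization, and the classifier step is omitted), and the stripe images are purely illustrative.

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Simplified HOG-style descriptor: a gradient-orientation histogram
    over the whole image, weighted by gradient magnitude and L1-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())      # magnitude-weighted vote
    s = hist.sum()
    return hist / s if s > 0 else hist

# Toy images: vertical stripes have horizontal gradients (bin 0),
# horizontal stripes have vertical gradients (the 90-degree bin).
vert = np.tile([0, 255], (8, 4))   # 8x8 vertical-stripe image
horz = vert.T                      # 8x8 horizontal-stripe image
hv = orientation_histogram(vert)
hh = orientation_histogram(horz)
```

In the paper's pipeline, descriptors of this kind would then be fed to an SVM to separate the fermented and unfermented classes.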
A Thorough Review of Vehicle Detection and Distance Estimation Using Deep Learning in Autonomous Cars
Rahmat, Muhammad Abdillah; Indrabayu, Indrabayu; Achmad, Andani; Salam, Andi Ejah Umraeni
JOIV : International Journal on Informatics Visualization Vol 8, No 4 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.4.2665

Abstract

Autonomous vehicle technologies are rapidly advancing, and one key factor contributing to this progress is the enhanced precision in vehicle detection and distance calculation. Deep Learning Networks (DLNs) have emerged as powerful tools to address this challenge, offering remarkable capabilities in accurately detecting and estimating vehicle positions. This study comprehensively reviews DLN applications for vehicle detection and distance estimation. It examines prominent DLN models such as YOLO, R-CNN, and SSD, evaluating their performance on widely used datasets including KITTI, PASCAL VOC, and COCO. Analysis results indicate that the YOLOv5 model developed by Farid et al. achieves the highest accuracy, with a mean Average Precision (mAP) of 99.92%. Yang et al. showed that YOLOv5 performs exceptionally well in combined detection and distance estimation tasks, with a mAP of 96.4% and a low mean relative error (MRE) of 10.81% for distance estimation. These achievements highlight the potential of DLNs to enhance the accuracy and reliability of vehicle detection systems in autonomous vehicles. The study also emphasizes the importance of backbone architectures such as DarkNet-53 and ResNet in determining model efficiency. The choice of model depends on the specific task requirements, with some models prioritizing real-time detection and others prioritizing accuracy. In conclusion, developing DLN-based methods is crucial to advancing autonomous vehicle technology, and continued research and development remain essential for ensuring road safety and efficiency as autonomous vehicles become more common in transportation systems.
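The mean relative error (MRE) metric cited above for distance estimation is straightforward to compute. The sketch below is a generic illustration, not code from any of the reviewed papers, and the distance values are hypothetical:

```python
import numpy as np

def mean_relative_error(d_true, d_pred):
    """Mean relative error (MRE) for distance estimation:
    the average of |prediction - ground truth| / ground truth."""
    d_true = np.asarray(d_true, dtype=float)
    d_pred = np.asarray(d_pred, dtype=float)
    return float(np.mean(np.abs(d_pred - d_true) / d_true))

# Hypothetical ground-truth and predicted distances in metres.
gt   = [10.0, 20.0, 40.0]
pred = [11.0, 18.0, 44.0]
mre = mean_relative_error(gt, pred)   # each error is 10%, so MRE = 0.1
```

A reported MRE of 10.81% corresponds to a value of about 0.1081 in this formulation.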
Comparative Performance Analysis of YOLO and Faster R-CNN in Detecting Species and Estimating the Weight of Grouper and Snapper Fish Using Digital Images
Sidehabi, Sitti Wetenriajeng; Indrabayu, Indrabayu; Warni, Elly; Bake, Sabda Ansari
JOIV : International Journal on Informatics Visualization Vol 9, No 4 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.4.3121

Abstract

Grouper and snapper are widely consumed fish species with high economic value in the global market. To determine their economic value, identifying the species and estimating the weight are essential steps in the pricing and quality assessment of traded fish. The commonly used manual methods are often time-consuming and labor-intensive, so a more effective computer-based method is needed for these repetitive tasks. This research analyzes the performance of two commonly used deep learning models, YOLO and Faster R-CNN, in detecting species and estimating the weight of specific grouper and snapper fish. The data consisted of 2991 samples divided into 18 classes, augmented with rotation and flipping to create 6843 image samples. A confidence threshold of 0.8 was used in the detection process, meaning objects detected with confidence below 0.8 were ignored. Once trained, both models were evaluated using precision, recall, and accuracy to assess how accurately they predicted fish species, and Mean Absolute Percentage Error (MAPE) to evaluate their weight estimates. The quantitative results differed between the two models: YOLO achieved precision, recall, and accuracy of 0.98, 0.98, and 0.96, respectively, while Faster R-CNN achieved 0.97, 0.98, and 0.95. Additionally, the MAPE for weight estimation with YOLO was 2.42% for image data and 3.66% for video data, compared with 14.62% and 13.59% for Faster R-CNN. Thus, it can be concluded that the YOLO model provides better quantitative evaluation results than the Faster R-CNN model.
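The confidence-threshold filtering and MAPE evaluation described above can be sketched generically. This is not the study's code: the detection records, species labels, and weight values below are hypothetical, and real detectors return bounding boxes alongside the confidence scores.

```python
import numpy as np

CONF_THRESHOLD = 0.8  # detections below this confidence are discarded

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["conf"] >= threshold]

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent, for weight estimates."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((a - p) / a)) * 100.0)

# Hypothetical detections: only the two above the 0.8 threshold survive.
dets = [{"species": "grouper", "conf": 0.95},
        {"species": "snapper", "conf": 0.62},
        {"species": "snapper", "conf": 0.81}]
kept = filter_detections(dets)

# Hypothetical actual vs estimated weights in grams.
err = mape([500.0, 800.0], [512.0, 776.0])   # 2.4% and 3.0% -> MAPE 2.7%
```

Under this metric, the reported 2.42% (YOLO) versus 14.62% (Faster R-CNN) on image data is a direct measure of average percentage deviation from the true weight.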