Found 3 Documents

Multi-Stage Computer Vision Framework with Ensemble Learning for Real-Time Glass Packaging Defect Detection in Industrial Applications
Jonah Alfred Mekel; Rick Resa Wahani; Firmansyah Reskal Motulo; Alfred Noufie Mekel; Tineke Saroinsong; Tammy Tinny V. Pangow; Jerry Heisye Purnama; Jedithjah Naapia Tamedi Papia
Frontier Advances in Applied Science and Engineering Vol. 3 No. 2 (2025)
Publisher : Tinta Emas Publisher

DOI: 10.59535/faase.v3i2.572

Abstract

Transparent glass packaging inspection presents significant challenges for automated quality control systems due to optical complexities including reflections, refractions, and low-contrast defect patterns. This research develops a comprehensive multi-stage computer vision framework integrating specialized algorithmic modules with ensemble machine learning for real-time defect detection in industrial glass packaging lines. The framework implements four specialized detection stages: (1) meniscus-corrected liquid level measurement using dual-camera validation and polynomial surface fitting, (2) seal integrity assessment through Circular Hough Transform combined with geometric, texture, and color feature extraction, (3) lid positioning evaluation via calibrated geometric centroid analysis with tolerance-based classification, and (4) multi-method contamination detection integrating color aberration analysis, histogram-based particle detection, and morphological operations. The system employs an ensemble classification architecture combining modified MobileNetV2 convolutional neural network with Random Forest classifier, optimized for edge computing deployment. Industrial validation at PT AQUWAR Bintang Semesta demonstrated 91.6% overall detection accuracy with 347 milliseconds average processing time per container across 2,847 test samples spanning multiple defect categories. The modular framework architecture enables independent optimization of detection stages while maintaining real-time processing capabilities, providing a robust foundation for transparent packaging quality control in high-volume manufacturing environments.
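The ensemble stage described above combines a modified MobileNetV2 CNN with a Random Forest classifier. As a minimal sketch of how such a combination can work, the following uses weighted soft voting over the two models' per-class probability vectors; the function name, the weight of 0.6, and the two-class "pass"/"defect" example are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: weighted soft-voting ensemble over two per-class
# probability vectors, one plausible way to fuse a CNN head
# (e.g. MobileNetV2) with a Random Forest. Weights are assumed.

def ensemble_predict(cnn_probs, rf_probs, cnn_weight=0.6):
    """Blend two probability distributions; return (class index, score)."""
    if len(cnn_probs) != len(rf_probs):
        raise ValueError("classifiers must score the same classes")
    rf_weight = 1.0 - cnn_weight
    blended = [cnn_weight * c + rf_weight * r
               for c, r in zip(cnn_probs, rf_probs)]
    best = max(range(len(blended)), key=blended.__getitem__)
    return best, blended[best]

# Illustrative two-class case: index 0 = "pass", index 1 = "defect"
label, score = ensemble_predict([0.2, 0.8], [0.4, 0.6])
print(label, round(score, 2))  # → 1 0.72
```

Soft voting preserves each model's confidence rather than just its hard decision, which is why it is a common choice when a lightweight CNN and a tree ensemble disagree near the decision boundary.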
Computer Vision-Based Automated Waste Sorting System for Plastic and Organic Waste Classification Using Color and Shape Features
Rick Resa Wahani; Michael Edward G. Kimbal; Deko Trio Desembara; Leonardo Frando Pasla; Firmansyah Reskal Motulo
International Journal Science and Technology Vol. 4 No. 3 (2025): November
Publisher : Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v4i3.2384

Abstract

The increasing volume of municipal solid waste demands low-cost, real-time sorting solutions to improve recycling efficiency and reduce landfill burden. Objective: This study develops and evaluates a low-cost, real-time computer vision system to classify plastic waste and organic leaf waste for automated sorting. Methodology: The system uses a standard RGB camera (640×480, 30 fps) and OpenCV-based processing, including Gaussian blurring, HSV color-space conversion, morphological operations, contour detection, and geometric feature extraction (circularity, solidity, aspect ratio, and extent). Classification is performed using a hierarchical rule-based logic that combines HSV color masks with a proposed overlap ratio to quantify the spatial correspondence between object contours and leaf-color regions. Findings: Experimental testing under controlled illumination (500–1000 lux) achieved 89% overall accuracy with an average processing time of 45 ms/frame and an operational throughput of approximately 7 objects/min. The system correctly classified 8 plastic items and 7 leaf samples in the initial test set. Implications: The proposed approach supports practical deployment in small-scale or resource-constrained waste management facilities by enabling real-time sorting without large, labeled datasets or GPU hardware. Originality: This work introduces an interpretable hybrid decision framework that integrates a mask-based overlap ratio with multiple geometric shape descriptors, improving discrimination between plastic and leaf waste while maintaining computational efficiency.
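The decision logic above rests on two ingredients: geometric shape descriptors (circularity, solidity, aspect ratio, extent) and a mask-based overlap ratio between the object contour and the leaf-color region. A stdlib-only sketch of those two ingredients follows; the function names, the toy 0/1 masks, and the 0.5 threshold are illustrative assumptions, while the paper's actual pipeline computes areas and masks from real OpenCV contours and HSV thresholds.

```python
import math

# Hedged sketch of a shape descriptor and the mask-based overlap
# ratio from the abstract, on tiny 0/1 grids instead of real images.

def circularity(area, perimeter):
    # 4*pi*A / P^2: equals 1.0 for a perfect circle, lower for ragged shapes
    return 4.0 * math.pi * area / (perimeter ** 2)

def overlap_ratio(object_mask, leaf_color_mask):
    """Fraction of object pixels that also fall inside the leaf-color mask.

    Both arguments are same-shape 2D lists of 0/1 values."""
    obj_pixels = sum(v for row in object_mask for v in row)
    both = sum(o & c for ro, rc in zip(object_mask, leaf_color_mask)
               for o, c in zip(ro, rc))
    return both / obj_pixels if obj_pixels else 0.0

def classify(overlap, overlap_thresh=0.5):
    # Illustrative hierarchical rule: strong leaf-color coverage
    # over the contour means organic, otherwise plastic.
    return "organic" if overlap >= overlap_thresh else "plastic"

obj = [[1, 1], [1, 0]]    # 3 object pixels
leaf = [[1, 0], [1, 0]]   # 2 of them are leaf-colored
print(round(overlap_ratio(obj, leaf), 3), classify(overlap_ratio(obj, leaf)))
```

Because the overlap ratio is normalized by object size, it stays comparable across large and small items, which is what lets a single threshold separate leaf-colored organics from plastics of any size.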
Intelligent Robotic Arm Control System with Adaptive Learning Algorithm Based on Motion Pattern Recognition
Excellsdeo Ndahawali; Jonah Mekel; Jaqlin Tamaka; Gherids Dipipi; Rick Resa Wahani; Michael Edward G. Kimbal; Deko Trio Desembara; Firmansyah Reskal Motulo
International Journal Science and Technology Vol. 4 No. 3 (2025): November
Publisher : Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v4i3.2386

Abstract

Robotic-arm deployment beyond specialized facilities is often constrained by time-intensive programming and the need for expert operators, while gesture-based control can lose reliability due to sensor noise, drift, and inter-user variability. Objective: This study develops a low-cost, embedded robotic arm control system that learns from human demonstrations. Methodology: A quantitative experimental prototyping approach was used by building a 3-DOF robotic arm with an MPU6050 IMU and an Arduino Mega 2560. Multi-user gesture trials were collected, and system performance was analyzed through end-to-end evaluation of recognition accuracy, response time, learning efficiency, and motion replication error. Findings: The system achieved 85% gesture recognition accuracy, a 195 ms average response time, and a 4.2° mean absolute joint-angle error (SD = 2.1°), reaching target performance within ≤5 adaptation iterations while operating within microcontroller memory limits. Implications: The results support the feasibility of real-time, gesture-driven robotic arm control on resource-constrained embedded hardware for educational and light industrial use, enabling faster setup and user personalization without extensive pre-training. Originality: This work integrates embedded motion pattern recognition with error-based adaptive learning in a low-cost 3-DOF platform and reports consolidated end-to-end evidence (accuracy–latency–learning convergence–replication fidelity) to demonstrate practical feasibility.
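The abstract reports that error-based adaptive learning reaches target replication accuracy within five iterations. One simple scheme with that convergence behavior is proportional command correction: execute the demonstrated motion, measure the joint-angle error, and nudge the command against it until the error falls inside tolerance. The sketch below illustrates that idea only; the gain of 0.7, the 0.5-degree tolerance, and the toy "10% undershoot" plant are assumptions, not values from the paper.

```python
# Hedged sketch: proportional error-based adaptation of one joint
# command. Gain, tolerance, and the toy plant model are illustrative.

def adapt_joint(target_deg, execute, gain=0.7, tol_deg=0.5, max_iters=5):
    """Iteratively correct the commanded angle until the achieved
    angle is within tol_deg of target; return (command, iterations)."""
    command = target_deg
    for i in range(1, max_iters + 1):
        achieved = execute(command)          # run the motion, read the IMU
        error = target_deg - achieved
        if abs(error) <= tol_deg:
            return command, i
        command += gain * error              # push the command against error
    return command, max_iters

# Toy plant: the arm consistently undershoots by 10% (e.g. backlash)
final_cmd, iters = adapt_joint(90.0, lambda c: 0.9 * c)
print(round(final_cmd, 2), iters)  # converges in 4 iterations, within the 5 reported
```

A sub-unity gain trades a few extra iterations for stability against sensor noise, which matters on a drift-prone IMU like the MPU6050 mentioned above.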