Found 2 Documents

Flame analysis and combustion estimation using large language and vision assistant and reinforcement learning
Martínez, Fredy; Rendón, Angélica; Penagos, Cristian
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 3: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i3.pp1853-1862

Abstract

In this study, we present an advanced approach for flame analysis and combustion quality estimation in carbonization furnaces utilizing a large language and vision assistant (LLaVA) and reinforcement learning from human feedback (RLHF). The traditional methods of estimating combustion quality in carbonization processes rely heavily on visual inspection and manual control, which can be subjective and imprecise. Our proposed methodology leverages multimodal AI techniques to enhance the accuracy and reliability of flame similarity measures. By integrating LLaVA’s high-resolution image processing capabilities with RLHF, we create a robust system that iteratively improves its predictive accuracy through human feedback. The system analyzes real-time video frames of the flame, employing sophisticated similarity metrics and reinforcement learning algorithms to optimize combustion parameters dynamically. Experimental results demonstrate significant improvements in estimating oxygen levels and overall combustion quality compared to conventional methods. This approach not only automates and refines the combustion monitoring process but also provides a scalable solution for various industrial applications. The findings underscore the potential of AI-driven techniques in advancing the precision and efficiency of combustion systems.
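As a rough illustration of the kind of frame-similarity reward signal the abstract describes (not the paper's actual LLaVA/RLHF pipeline), the sketch below scores a current flame frame against a reference "good combustion" frame using an intensity-histogram cosine similarity. The toy frames, bin count, and the histogram-based metric are all assumptions made for illustration.

```python
# Hypothetical sketch: a frame-similarity reward for combustion monitoring.
# The paper uses LLaVA features with RLHF; here a simple intensity-histogram
# cosine similarity stands in as the similarity metric.
import math

def intensity_histogram(frame, bins=8):
    """Bin 0-255 grayscale pixel values into a normalized histogram."""
    hist = [0] * bins
    for row in frame:
        for px in row:
            hist[min(px * bins // 256, bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def flame_reward(frame, reference_frame):
    """Reward in (0, 1]: how closely the flame matches the reference."""
    return cosine_similarity(intensity_histogram(frame),
                             intensity_histogram(reference_frame))

# Toy 2x2 frames: a bright reference flame vs. a slightly dimmer observation.
ref = [[200, 210], [220, 230]]
obs = [[180, 190], [200, 210]]
print(flame_reward(ref, ref))  # identical frames score 1.0
print(flame_reward(obs, ref))  # dimmer frame scores lower
```

In an RLHF loop, a reward of this shape would be refined by human rankings of flame frames rather than fixed in advance.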
Integrating low-cost vision for autonomous tracking in assistive robots
Martínez, Fredy; Martínez, Fernando; Penagos, Cristian
Bulletin of Electrical Engineering and Informatics Vol 14, No 3: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i3.9242

Abstract

This study presents the implementation of a real-time tracking system for the ARMOS TurtleBot, a robot designed for assistive applications in domestic environments. The system integrates two OmniVision 7670 (OV7670) camera modules positioned 7 cm apart to emulate human-like stereoscopic vision, enabling depth perception and three-dimensional object tracking. An ESP32 microcontroller (a 32-bit embedded platform) captures and processes images from both cameras, calculates disparities, and transmits data to a Raspberry Pi via WebSockets. The Raspberry Pi, equipped with robot operating system (ROS), performs further analysis using open computer vision (OpenCV) and visualizes results in real time with ROS visualization (RViz), allowing the robot to autonomously track moving objects such as humans or pets. Key optimizations, including image resolution reduction and data filtering, were implemented to enhance processing efficiency within the hardware constraints. The proposed approach demonstrates the feasibility of low-cost, real-time object tracking in assistive robotics, highlighting its potential for applications that require human-robot interaction in dynamic indoor settings. This work contributes to the field by providing a practical solution for integrating stereoscopic vision and real-time decision-making capabilities into small-scale robots, promoting further research and development in affordable robotic assistance systems.
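The stereo geometry behind the two-camera setup reduces to the standard triangulation relation Z = f·B/d, where B is the baseline between the cameras and d is the horizontal pixel disparity of the same point in the two views. The abstract fixes only the 7 cm baseline; the focal length in pixels below is an assumed value, not taken from the paper.

```python
# Hypothetical sketch of the stereo depth estimate behind the two-camera setup.
# Depth Z = f * B / d, with baseline B = 7 cm (stated in the abstract) and an
# ASSUMED focal length in pixels for an OV7670-class sensor.

BASELINE_CM = 7.0   # camera separation stated in the abstract
FOCAL_PX = 550.0    # assumed focal length in pixels (illustrative only)

def depth_from_disparity(disparity_px):
    """Triangulate depth (cm) from the pixel disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_CM / disparity_px

# A nearby object shifts more between the two views than a distant one.
print(depth_from_disparity(55))  # → 70.0 (cm): large disparity, close object
print(depth_from_disparity(11))  # → 350.0 (cm): small disparity, far object
```

In practice the disparity itself would come from block matching between the rectified left and right frames (e.g. via OpenCV on the Raspberry Pi), which is the expensive step the paper's resolution-reduction optimizations target.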