This study presents an assistive control system for a four-degree-of-freedom (4-DoF) robotic manipulator that integrates image-based spatial perception with electrooculography (EOG)-based human–machine interaction for three-dimensional object retrieval. The system is motivated by the need for intuitive, non-contact assistive technologies that enable individuals with severe motor impairments, such as tetraplegia, to perform basic manipulation tasks. The proposed framework employs an orthogonal dual-camera vision configuration for explicit 3D target localization: planar object positions on the XY plane and depth along the Z axis are estimated using focal length–based geometric modeling. User commands are generated through an EOG interface in which eye movements and voluntary blinks are classified with a K-Nearest Neighbors (KNN) algorithm to control manipulator motion. Compared with conventional assistive robotic systems that rely on depth sensors or high-degree-of-freedom manipulators, the proposed approach uses asymmetric monocular viewpoints and a minimal 4-DoF architecture to reduce system complexity. Experimental results demonstrate high performance: average localization accuracies of 99.52% on the XY plane and 95.88% along the Z axis, and an EOG classification accuracy of 94.38%. Manipulation experiments confirmed reliable operation with a 100% task success rate, although task completion time and positional error increased gradually with target distance. These findings validate the feasibility of the proposed system as a low-complexity, high-accuracy assistive robotic solution for rehabilitation and human–machine interaction applications.
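The focal length–based depth estimation mentioned above can be illustrated with the standard pinhole-camera relation. The sketch below is not the authors' implementation; the function name and all numeric values (focal length, object width, pixel measurement) are hypothetical, chosen only to show how depth along the Z axis could be recovered from a single monocular view given a known object size.

```python
def estimate_depth(focal_length_px: float, real_width_cm: float,
                   pixel_width_px: float) -> float:
    """Pinhole-camera relation Z = f * W / w.

    focal_length_px : camera focal length in pixels (hypothetical calibration value)
    real_width_cm   : known physical width of the target object
    pixel_width_px  : measured width of the object in the image
    """
    return focal_length_px * real_width_cm / pixel_width_px

# Hypothetical example: a 5 cm-wide object imaged at 50 px
# with an 800 px focal length lies at Z = 800 * 5 / 50 = 80 cm.
depth_cm = estimate_depth(800.0, 5.0, 50.0)
print(depth_cm)  # 80.0
```

In practice the focal length in pixels would come from camera calibration, and the same relation applied in the orthogonal view yields the planar (XY) scale.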
Copyright © 2026