The integration of eye-gaze technology into robotic control systems has shown considerable promise in enhancing human–robot interaction, particularly for individuals with physical disabilities. This study investigates the influence of eye morphology and corrective eyewear on the spatial accuracy of gaze-based robot control under static head pose conditions. Experiments were conducted using advanced eye-tracking systems and five machine learning algorithms—decision tree, support vector machine, discriminant analysis, naïve Bayes, and k-nearest neighbors—on a participant pool with varied eye shapes and eyewear usage. The experimental design controlled for potential sources of bias, including lighting variability, participant fatigue, and calibration procedures. Statistical analyses revealed no significant differences in gaze estimation accuracy across eye shapes or eyewear status. However, a consistent pattern emerged: participants with non-monolid eye shapes achieved, on average, approximately 1% higher accuracy than those with monolid eye shapes—a difference that, while not statistically significant, warrants further exploration. The findings suggest that gaze-based robotic control systems can operate reliably across diverse user groups and hold strong potential for assistive technologies targeting individuals with limited mobility, including those with severe motor impairments such as head paralysis. To further enhance the inclusiveness and robustness of such systems, future research should explore additional anatomical variations and environmental conditions that may influence gaze estimation accuracy.
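To make the classification step concrete, the following is a minimal sketch of one of the five classifier families named above, k-nearest neighbors, mapping a 2D gaze coordinate to a robot command. The calibration points, command labels, and normalized coordinate range are hypothetical illustrations, not the study's actual data or protocol.

```python
import math
from collections import Counter

# Hypothetical calibration samples: (gaze_x, gaze_y) in normalized
# screen coordinates, each labeled with a robot command. A real system
# would collect many samples per region during user calibration.
TRAIN = [
    ((0.10, 0.10), "forward"), ((0.15, 0.12), "forward"),
    ((0.90, 0.10), "left"),    ((0.85, 0.15), "left"),
    ((0.10, 0.90), "right"),   ((0.12, 0.88), "right"),
    ((0.90, 0.90), "stop"),    ((0.88, 0.92), "stop"),
]

def knn_predict(gaze, k=3):
    """Classify a gaze point by majority vote among its k nearest
    calibration samples (Euclidean distance)."""
    nearest = sorted(TRAIN, key=lambda sample: math.dist(gaze, sample[0]))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

For example, a fixation near the top-left calibration region, `knn_predict((0.12, 0.11))`, resolves to `"forward"` by majority vote. The other four algorithm families in the study would replace only this classification function; the gaze-feature input and command output stay the same.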
Copyright © 2025