Social robots are increasingly being integrated into educational environments to support learning and engagement. However, most existing systems lack the adaptability required to respond appropriately to dynamic human behavior in real-time classroom settings. This paper presents an adaptive learning framework for social robots that uses visual and proximity sensor data to perceive human spatial context and adjust interaction strategies accordingly. A Deep Q-Network (DQN)-based reinforcement learning algorithm maps environmental states to socially appropriate actions such as maintaining distance, initiating interaction, or retreating. The robot was trained in a simulated classroom environment populated by dynamic student agents with randomized behaviors. Experimental results show that over 100 training episodes the robot achieved a cumulative reward improvement of over 500%, reduced its average distance error from 0.45 m to 0.18 m, and increased its interaction success rate from 50% to 88%. These results confirm the effectiveness of the proposed model in enabling real-time behavioral adaptation. The framework contributes to the development of context-aware, socially intelligent robotic systems capable of enhancing Human-Robot Interaction (HRI) in educational applications. Future work includes extending the model to incorporate emotional cues and validating it on physical robot platforms in real-world settings.

Keywords: social robots, adaptive learning, reinforcement learning, human-robot interaction, sensor fusion, educational robotics
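The state-to-action mapping described above can be illustrated with a minimal sketch. The paper employs a DQN; for brevity, the snippet below substitutes a tabular one-step Q-learning stand-in over hypothetical discretized proximity states and the three action classes named in the abstract. The state bins, reward shape, and hyperparameters are illustrative assumptions, not the paper's actual design.

```python
import random

# Hypothetical discretization of the robot's proximity to the nearest student.
STATES = ["too_close", "comfortable", "too_far"]
# The three action classes named in the abstract.
ACTIONS = ["retreat", "maintain", "approach"]

def reward(state, action):
    # Illustrative reward: +1 for the socially appropriate action, -1 otherwise.
    if state == "too_close":
        return 1.0 if action == "retreat" else -1.0
    if state == "too_far":
        return 1.0 if action == "approach" else -1.0
    return 1.0 if action == "maintain" else -1.0

def train(episodes=500, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < epsilon:  # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        # One-step Q update (bandit-style, no bootstrapped next-state term).
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

In the full DQN formulation, the Q-table is replaced by a neural network over continuous sensor features and the update includes a discounted next-state value; the sketch keeps only the epsilon-greedy selection and incremental Q update to show the mapping in isolation.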
Copyright © 2025