Robotic-arm deployment beyond specialized facilities is often constrained by time-intensive programming and the need for expert operators, while gesture-based control can lose reliability due to sensor noise, drift, and inter-user variability. Objective: This study develops a low-cost, embedded robotic-arm control system that learns from human demonstrations. Methodology: A quantitative experimental prototyping approach was followed: a 3-DOF robotic arm was built around an MPU6050 IMU and an Arduino Mega 2560, gesture trials were collected from multiple users, and system performance was analyzed through end-to-end evaluation of recognition accuracy, response time, learning efficiency, and motion replication error. Findings: The system achieved 85% gesture recognition accuracy, a 195 ms average response time, and a 4.2° mean absolute joint-angle error (SD = 2.1°), reaching target performance within five adaptation iterations while operating within microcontroller memory limits. Implications: The results support the feasibility of real-time, gesture-driven robotic-arm control on resource-constrained embedded hardware for educational and light-industrial use, enabling faster setup and user personalization without extensive pre-training. Originality: This work integrates embedded motion-pattern recognition with error-based adaptive learning in a low-cost 3-DOF platform and reports consolidated end-to-end evidence (accuracy, latency, learning convergence, and replication fidelity) of practical feasibility.
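To illustrate the kind of error-based adaptive loop the abstract describes, the sketch below shows a minimal per-joint correction scheme in Python (the actual firmware would be Arduino C++). This is an assumption-laden illustration, not the paper's algorithm: the proportional `gain`, the 1° tolerance, the 5-iteration cap matching the reported convergence bound, and the stand-in plant model inside the loop are all hypothetical.

```python
def adapt(target_deg, initial_cmd_deg, gain=0.6, tol_deg=1.0, max_iters=5):
    """Iteratively adjust a commanded joint angle until the replication
    error falls below tol_deg; returns (final_command, iterations_used).

    Hypothetical sketch: in hardware, the achieved angle would be read
    back from the MPU6050; here a fixed scale-and-offset plant model
    (0.9 * cmd + 3.0) stands in for the physical joint response.
    """
    cmd = initial_cmd_deg
    for i in range(1, max_iters + 1):
        achieved = 0.9 * cmd + 3.0          # stand-in for IMU feedback
        error = target_deg - achieved       # replication error, degrees
        if abs(error) <= tol_deg:           # within tolerance: stop adapting
            return cmd, i
        cmd += gain * error                 # proportional correction step
    return cmd, max_iters
```

With these assumed constants, commanding a 90° target from a naive 90° initial command converges inside the 5-iteration budget, mirroring the convergence behavior the abstract reports at a qualitative level only.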