This research aims to develop an object detection model that distinguishes, with high accuracy, between the gait of people with and without disabilities. Object detection models are typically designed to detect people in general and have been applied to gait recognition, including gender-based recognition. Gait recognition can be extended further to cover both non-disabled and disabled individuals: most people walk in a typical manner, whereas people with disabilities exhibit different gait patterns, some using walking aids and others walking without them. YOLOv8, a widely used object detection framework, is employed here to detect people. This research proposes an object detection model for non-disabled (normal) people and people with disabilities, covering both those who use assistive devices and those who do not. The dataset used is Disabled Gait, comprising 6,500 images, divided into three splits: 70% for training, 20% for validation, and 10% for testing. The model is evaluated using precision, recall, mAP50, and mAP50-95. Test results for the three classes, namely assistive, non-assistive, and normal, show the highest performance in the assistive class, with an mAP50 of 0.98 and an mAP50-95 of 0.996. This study advances gait recognition by extending object detection to accurately differentiate normal and disabled walking patterns, including both assistive and non-assistive gaits, thereby enriching inclusive human-movement analysis. Beyond computer vision, the findings benefit healthcare, rehabilitation, and smart surveillance systems by enabling more accurate mobility assessment and accessibility-aware applications.
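To make the described training and evaluation setup concrete, the following is a minimal sketch using the Ultralytics YOLOv8 Python API. The dataset configuration file name (disabled_gait.yaml), the class index ordering, and the training hyperparameters are illustrative assumptions, not values reported by the authors.

```python
from ultralytics import YOLO

# Illustrative dataset config (assumed file: disabled_gait.yaml):
#   path: datasets/disabled_gait   # 6,500 images split 70/20/10
#   train: images/train
#   val: images/val
#   test: images/test
#   names:
#     0: assistive
#     1: non-assistive
#     2: normal

# Start from a pretrained YOLOv8 checkpoint and fine-tune on the gait dataset.
model = YOLO("yolov8n.pt")
model.train(data="disabled_gait.yaml", epochs=100, imgsz=640)

# Evaluate on the held-out split; Ultralytics reports precision, recall,
# mAP50, and mAP50-95 per class and averaged.
metrics = model.val(split="test")
print("precision:", metrics.box.mp)
print("recall:   ", metrics.box.mr)
print("mAP50:    ", metrics.box.map50)
print("mAP50-95: ", metrics.box.map)
```

The same four metrics (precision, recall, mAP50, mAP50-95) reported in the study are exposed directly by the validation call, so no custom evaluation code is needed under these assumptions.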