This study proposes a hybrid approach to Braille translation that combines YOLO for object detection with a set of classification models (ResNet, DenseNet, ResNeXt, MobileNetV2, and CNNs with pooling) for accurate Braille character classification from images. Comparing these models across several performance metrics, ResNet and DenseNet outperformed the others, achieving high accuracy (0.9487 and 0.9647, respectively) and F1-scores (0.9481 and 0.9666) owing to their deep, densely connected architectures adept at capturing intricate Braille patterns. CNNs with pooling showed balanced results, while MobileNetV2's lightweight design limited its performance on complex classification. ResNeXt's multi-path learning achieved respectable performance but lagged behind ResNet and DenseNet. Future work could extend these results to contracted Braille recognition, adapt the pipeline to other Braille codes, and optimize the models for mobile devices to enable real-time Braille detection and translation on smartphones.
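To make the two-stage design concrete, the following is a minimal sketch of the detect-then-classify pipeline described above. It is not the paper's implementation: the `detect_cells` and `classify_cell` functions are hypothetical stand-ins for the YOLO detector and the CNN classifier, and the classifier is replaced here by a small lookup table over Grade-1 (uncontracted) Braille dot patterns so the example is self-contained and runnable.

```python
# Sketch of a two-stage Braille translation pipeline: a detector proposes
# cell bounding boxes (stand-in for YOLO) and a classifier maps each cell
# to a character (stand-in for the ResNet/DenseNet classifier).
# All function and variable names are illustrative, not from the paper.

from typing import FrozenSet, List, Tuple

# Grade-1 Braille dot patterns for a few letters (dots numbered 1-6,
# column-major: 1-3 left column top-to-bottom, 4-6 right column).
BRAILLE_PATTERNS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

def classify_cell(dots: FrozenSet[int]) -> str:
    """Stand-in for the CNN classifier: look up the dot pattern."""
    return BRAILLE_PATTERNS.get(dots, "?")

def detect_cells(image) -> List[Tuple[Box, FrozenSet[int]]]:
    """Stand-in for YOLO: a real detector would return bounding boxes
    from pixels; here the 'image' is already a list of (box, dots) pairs."""
    return image

def translate(image) -> str:
    # Order detections left to right by x-coordinate, mirroring how
    # detected cells would be sequenced for reading, then classify each.
    cells = sorted(detect_cells(image), key=lambda d: d[0][0])
    return "".join(classify_cell(dots) for _, dots in cells)

# Example: three detected cells spelling "bad".
fake_image = [
    ((0, 0, 10, 20), frozenset({1, 2})),      # b
    ((12, 0, 22, 20), frozenset({1})),        # a
    ((24, 0, 34, 20), frozenset({1, 4, 5})),  # d
]
print(translate(fake_image))  # prints "bad"
```

In a real system the lookup table would be replaced by the trained classifier's forward pass over the cropped cell image, and `detect_cells` would run YOLO inference; the surrounding ordering-and-join logic stays the same.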
Copyright © 2025