Automated visual recognition of Multi Braille Characters (MBC) poses significant challenges for assistive reading technologies for the visually impaired. The intricate dot configurations and compact layouts of Braille complicate MBC classification. This study introduces a deep learning approach based on Convolutional Neural Networks (CNNs) and compares four leading architectures: ResNet-50, ResNet-101, MobileNetV2, and VGG-16. A dataset comprising 105 MBC classes was developed from printed Braille materials and underwent preprocessing that included image cropping, brightness enhancement, character position labeling, and resizing to 89×89 pixels. A 70:20:10 data partitioning strategy was applied for training and evaluation, with batch sizes varied from 8 to 128 and epochs from 50 to 500. The results demonstrate that ResNet-101 achieved superior performance, attaining an accuracy of 91.46%, an F1-score of 89.48%, and a minimum error rate of 8.5%. ResNet-50 and MobileNetV2 performed competitively under specific conditions, whereas VGG-16 consistently exhibited lower accuracy and training stability. Standard deviation assessments corroborated the stability of residual architectures throughout training. These results endorse ResNet-101 as the most effective architecture for Multi Braille Character classification, highlighting its potential for integration into automated Braille reading systems that translate Braille into text or speech.
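
The abstract reports a 70:20:10 train/validation/test partition but not how it was implemented; a minimal sketch of such a split, assuming a shuffled random partition of sample indices (function name and seed are hypothetical, not from the paper), could look like this:

```python
import random

def split_indices(n, train_frac=0.7, val_frac=0.2, seed=42):
    """Partition n sample indices into train/val/test subsets (70:20:10).

    Hypothetical illustration of the partitioning strategy described in
    the abstract; the paper does not specify the implementation.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    # Remaining samples (here 10%) form the test set
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_ids, val_ids, test_ids = split_indices(1000)
print(len(train_ids), len(val_ids), len(test_ids))  # 700 200 100
```

Fixing the random seed makes the partition reproducible across training runs, which matters when comparing multiple architectures on the same data.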