This research explores the optimization of YOLO-based computer vision algorithms for real-time recognition of Indonesian Sign Language (BISINDO) letters under diverse environmental conditions. Motivated by the communication barriers between the deaf and hearing communities that arise from limited sign language literacy, the study aims to enhance inclusivity through advanced visual detection technologies. Using the YOLOv5s model, the system is trained to detect and classify BISINDO hand signs across 52 classes (26 correctly and 26 incorrectly formed letters), drawing on a dataset of 3,900 images augmented to 10,920 samples. Performance is evaluated with k-fold cross-validation (k = 10) and confusion matrix analysis across varied indoor and outdoor lighting and background scenarios. The model achieves a high average precision of 0.9901 and recall of 0.9999, with robust results indoors and slight degradation under certain outdoor conditions. These findings demonstrate the potential of YOLOv5 for real-time, accurate sign language recognition, contributing toward more accessible human-computer interaction systems for the deaf community.
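The evaluation protocol described above — k-fold cross-validation (k = 10) combined with per-class precision and recall derived from a confusion matrix — can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the labels and predictions are synthetic stand-ins for the 52 BISINDO classes (75 samples per class, matching the 3,900-image dataset), and a real run would obtain `y_pred` from the trained YOLOv5s detector rather than simulating it.

```python
# Hedged sketch of 10-fold cross-validated precision/recall from a
# confusion matrix. All data below is synthetic for illustration only.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

n_classes = 52                                 # 26 correct + 26 incorrect letter signs
labels = np.repeat(np.arange(n_classes), 75)   # 3,900 balanced samples (assumption)

kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
precisions, recalls = [], []
for train_idx, test_idx in kf.split(np.zeros(len(labels)), labels):
    y_true = labels[test_idx]
    # Placeholder predictions: a real evaluation would run the trained
    # YOLOv5s model on the held-out fold; here we copy the ground truth
    # so the metric computation itself can be demonstrated end to end.
    y_pred = y_true.copy()
    cm = confusion_matrix(y_true, y_pred, labels=np.arange(n_classes))
    tp = np.diag(cm).astype(float)
    # Precision: column-wise (per predicted class); recall: row-wise.
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    precisions.append(precision.mean())
    recalls.append(recall.mean())

print(f"mean precision over 10 folds: {np.mean(precisions):.4f}")
print(f"mean recall over 10 folds:    {np.mean(recalls):.4f}")
```

Because the predictions here are perfect copies of the ground truth, the sketch reports 1.0 for both metrics; it is meant only to show the fold structure and metric derivation, not to reproduce the paper's reported 0.9901 / 0.9999 figures.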