Effective communication is challenging for deaf individuals in Indonesia, most of whom use Indonesian Sign Language (BISINDO). Sign Language Recognition (SLR) can help bridge this communication gap. While Convolutional Neural Networks (CNNs) show high potential for SLR, their practical accessibility remains limited. This research aims to develop a CNN architecture for recognizing BISINDO alphabet signs from static images and to integrate it into an accessible web platform. Using a static, vision-based approach, a CNN model was trained on a public dataset (312 images, 26 classes) after standard pre-processing, including data augmentation. The model was then integrated into a web interface using Python and the Gradio library. The model performed strongly, reaching a validation accuracy of 97.44% and a macro-average F1-score of approximately 97.12%, although classification challenges were identified for visually similar signs such as 'M' and 'N'. The resulting integrated web application proved functional, exhibited low prediction latency, and was compatible across platforms. This study demonstrates the development of an accurate deep learning model for static BISINDO alphabet recognition and its practical implementation via a web platform, helping to reduce the accessibility gap in SLR technology. Future research should use larger, more varied datasets and explore dynamic sign recognition.
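
As a rough illustration of the kind of integration the abstract describes, the sketch below wraps a trained Keras CNN in a Gradio interface for single-image alphabet prediction. This is a minimal sketch, not the authors' actual implementation: the model file name (`bisindo_cnn.h5`), the 224x224 input size, and the rescaling step are all assumptions made for the example.

```python
# Minimal sketch: serving a static-image BISINDO alphabet classifier
# through a Gradio web interface. Model path, input size, and
# preprocessing are hypothetical, chosen only for illustration.
import gradio as gr
import numpy as np
import tensorflow as tf

# Hypothetical trained CNN with 26 output classes (A-Z).
model = tf.keras.models.load_model("bisindo_cnn.h5")
CLASSES = [chr(ord("A") + i) for i in range(26)]

def predict(image: np.ndarray) -> dict:
    # Resize to the assumed training resolution and scale pixels to [0, 1].
    x = tf.image.resize(image, (224, 224)) / 255.0
    # Add a batch dimension and run inference.
    probs = model.predict(x[tf.newaxis, ...], verbose=0)[0]
    # Gradio's Label component accepts a {class: probability} mapping.
    return {c: float(p) for c, p in zip(CLASSES, probs)}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(label="Hand sign"),
    outputs=gr.Label(num_top_classes=3),
    title="BISINDO Alphabet Recognition (sketch)",
)
demo.launch()
```

Wrapping the model this way keeps the inference code in a single Python process, which is consistent with the low prediction latency and cross-platform (browser-based) access reported in the abstract.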