This study aims to design and implement an image recognition system for Sistem Isyarat Bahasa Indonesia (SIBI, the Indonesian Sign Language System) by applying the Manhattan distance classification method. Sign language is a vital means of visual communication for individuals with hearing impairments and other disabilities. However, public understanding of this language remains limited, often leading to ineffective communication between hearing and non-hearing communities, so an assistive system capable of accurately recognizing sign language is strongly needed. The Manhattan method was selected for its simplicity and efficiency in computing distances between data points. The dataset was obtained from the Kaggle website and consists of 130 training images and 130 testing images covering the 26 letters of the SIBI alphabet. All images underwent preprocessing in Jupyter Notebook, including resizing, background removal, and conversion to grayscale to facilitate feature extraction. The grayscale images were then converted into histograms and normalized to maintain a consistent value scale. Classification was performed by computing the Manhattan distance, i.e., the sum of absolute differences between corresponding histogram bins, between each test image histogram and the training image histograms, with each test image assigned the label of the closest training image. The system was developed in MATLAB R2015a with a user interface that displays classification results directly. In testing, 104 of the 130 test images were recognized correctly, an accuracy of 80%. These results indicate that the Manhattan method is effective for image-based sign language recognition. The developed system is expected to serve as an inclusive and educational tool for improving communication between the hearing-impaired community and the general public. Future work may involve integrating additional methods and expanding the dataset.
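To make the classification pipeline described above concrete, the following Python sketch illustrates the general approach: convert an image to grayscale, build a normalized intensity histogram, and assign the label of the training histogram with the smallest Manhattan (L1) distance. This is a minimal illustration under stated assumptions, not the study's implementation (which used MATLAB R2015a for the system); the image size of 128x128, the 256-bin histogram, and all file paths are assumptions introduced here for demonstration only.

```python
import numpy as np
from PIL import Image


def grayscale_histogram(path, size=(128, 128), bins=256):
    """Load an image, convert to grayscale, resize, and return its normalized histogram.

    The resize target and bin count are illustrative assumptions, not values from the study.
    """
    img = Image.open(path).convert("L")      # grayscale conversion
    img = img.resize(size)                   # resize to a common resolution
    hist, _ = np.histogram(np.asarray(img), bins=bins, range=(0, 255))
    return hist / hist.sum()                 # normalize so all histograms share one scale


def manhattan_distance(h1, h2):
    """Manhattan (L1) distance: sum of absolute differences between histogram bins."""
    return np.sum(np.abs(h1 - h2))


def classify(test_hist, train_hists, train_labels):
    """Assign the label of the training histogram closest to the test histogram."""
    distances = [manhattan_distance(test_hist, h) for h in train_hists]
    return train_labels[int(np.argmin(distances))]


# Hypothetical usage: train_paths and train_labels would describe the 130 SIBI
# training images (one letter label per image); the file names below are placeholders.
# train_hists = [grayscale_histogram(p) for p in train_paths]
# predicted_letter = classify(grayscale_histogram("test_A.jpg"), train_hists, train_labels)
```

In this sketch each test image is compared against every training histogram and takes the label of its nearest neighbor, which matches the distance-based classification step summarized in the abstract.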