This study discusses the application of the EfficientNet architecture in developing a Convolutional Neural Network (CNN) model for sign language classification. Sign language is a vital communication method for the deaf community, yet automatic recognition remains a challenging problem in computer vision: models often lack the accuracy and efficiency needed to recognize the complex variations of sign language under real-world conditions. EfficientNet, known for its computational efficiency, is used as the backbone of a CNN that classifies sign language letter patterns with high accuracy while remaining lightweight. The study uses an American Sign Language (ASL) dataset of 14,740 images of letter patterns captured from various angles and under varying lighting conditions, with data augmentation applied to increase the variety and quality of the data. Experimental results show that the EfficientNet-based model achieves training and validation accuracies of 98.40% with a smaller model size and shorter inference time. These results demonstrate the significant potential of EfficientNet for building sign language classification systems deployable on resource-constrained devices, such as mobile applications and edge-computing platforms, and are expected to improve accessibility and social inclusion for the deaf and speech-impaired communities. Thus, this research contributes not only to pattern recognition technology but also to the development of effective and efficient assistive tools that enhance the quality of life of individuals with communication disabilities.