This study proposes a deep learning model based on the ConvNeXt Tiny architecture to detect autism spectrum disorder (ASD) from facial images, addressing the need for an early, efficient, and accessible screening tool. The pipeline combines facial image preprocessing, including Contrast Limited Adaptive Histogram Equalization (CLAHE) and data augmentation, with face detection and segmentation performed by MTCNN. The ConvNeXt Tiny model is trained using transfer learning, evaluated with accuracy, precision, recall, and F1-score, and compared against conventional CNN baselines, ResNet50 and EfficientNet-B0. The results show that the proposed model outperforms both baselines on all evaluation metrics, achieving a classification accuracy of 84%, and delivers balanced performance across the autistic and non-autistic classes, with high precision and recall for each and, consequently, a high F1-score. Furthermore, the model's computational efficiency makes it suitable for web and mobile applications, enabling scalable, real-time ASD screening in children. The study's contributions include the development of a novel, lightweight ASD classification system, a comparative analysis of ConvNeXt against other CNN models, and a prototype for early ASD detection. This approach not only offers a promising alternative to conventional diagnostic methods but also lays the groundwork for further research and practical deployment in clinical settings.
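As a minimal sketch of this pipeline, the Python fragment below illustrates the main stages, assuming OpenCV for CLAHE, the facenet-pytorch implementation of MTCNN, and torchvision's pretrained ConvNeXt Tiny; all parameter values (CLAHE clip limit and tile size, crop size, augmentation strengths) are illustrative assumptions, not the settings reported in the study.

```python
import cv2
import torch.nn as nn
from facenet_pytorch import MTCNN
from torchvision import models, transforms

# CLAHE contrast normalization on the luminance channel (clip limit / tile size are illustrative)
def apply_clahe(bgr_image):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2RGB)

# MTCNN detects and crops the face region before classification
mtcnn = MTCNN(image_size=224, post_process=False)

# Example data augmentation applied during training
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

# Transfer learning: ImageNet-pretrained ConvNeXt Tiny with a new two-class head
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, 2)  # autistic vs. non-autistic
```

In a setup of this kind, only the classification head is replaced while the pretrained backbone is fine-tuned, which keeps training cost low and supports the lightweight, deployment-oriented design described above.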