Early detection of autism spectrum disorder (ASD) is crucial for enabling timely interventions that can improve children’s cognitive and social development, yet conventional screening still relies on subjective observations and parental reports. This study presents a Flutter-based mobile application for classifying facial images of autistic and non-autistic children using the MobileNetV3-Small architecture. The dataset comprises 600 original facial images of children aged 4 to 14 years (300 autistic and 300 non-autistic), expanded to 1,860 images through augmentation techniques such as Gaussian noise addition, flipping, and contrast adjustment. The model was trained via transfer learning, using the SGD optimizer and a sigmoid output activation. During training, the model reached 95.27% training accuracy and 97.92% validation accuracy, indicating effective learning with minimal overfitting. Evaluation on the test set showed perfect performance, with accuracy, precision, recall, and F1-score all reaching 100%. The trained model was then converted to TensorFlow Lite format to allow on-device inference on mobile platforms. The app lets users upload photos from the camera or gallery and instantly receive classification results, which are also saved to Firebase for history tracking. Testing showed fast response times (1–2 seconds) and a smooth, user-friendly experience. These results highlight the system’s potential as a lightweight, efficient, and accessible facial-image-based ASD screening tool, particularly in regions with limited access to specialized healthcare. Future work should validate the model on larger and more diverse datasets across different demographics to ensure robustness, fairness, and generalizability in real-world environments.
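The augmentation techniques named above (Gaussian noise addition, flipping, and contrast adjustment) can be sketched with plain NumPy. The noise standard deviation and contrast factor below are illustrative assumptions, not values reported in the study:

```python
import numpy as np

def augment(img, rng):
    """Return three augmented variants of a uint8 image array (H, W, C):
    Gaussian-noised, horizontally flipped, and contrast-adjusted.
    sigma=10 and factor=1.3 are illustrative choices, not the study's values."""
    f = img.astype(np.float32)

    # 1. Additive Gaussian noise, clipped back to the valid pixel range
    noisy = np.clip(f + rng.normal(0.0, 10.0, img.shape), 0, 255).astype(np.uint8)

    # 2. Horizontal flip (mirror along the width axis)
    flipped = img[:, ::-1]

    # 3. Contrast adjustment: scale pixel deviations from the image mean
    mean = f.mean()
    contrast = np.clip((f - mean) * 1.3 + mean, 0, 255).astype(np.uint8)

    return [noisy, flipped, contrast]
```

Applying such transforms to each original image is a standard way to expand a small dataset (here, from 600 toward 1,860 images) and reduce overfitting.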
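A minimal sketch of the training setup named in the abstract (MobileNetV3-Small backbone with transfer learning, sigmoid output, SGD optimizer) and the TensorFlow Lite conversion, assuming a standard Keras workflow; the input size, learning rate, and classification head are assumptions rather than the authors' reported configuration:

```python
import tensorflow as tf

def build_model(input_shape=(224, 224, 3), weights="imagenet"):
    """MobileNetV3-Small backbone with a frozen base and a sigmoid head
    for binary (autistic vs. non-autistic) classification.
    Input size and learning rate are illustrative assumptions."""
    base = tf.keras.applications.MobileNetV3Small(
        input_shape=input_shape, include_top=False, weights=weights)
    base.trainable = False  # transfer learning: train only the new head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of class 1
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def to_tflite(model):
    """Convert the trained Keras model to TensorFlow Lite for on-device inference."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    return converter.convert()  # bytes suitable for bundling in the mobile app
```

The resulting `.tflite` flatbuffer can be bundled with the Flutter app and run via a TFLite interpreter plugin, which is what makes fully on-device, 1–2 second inference feasible.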