Detecting microsleep is important for preventing accidents caused by decreased alertness, especially in activities that require sustained concentration such as driving. This study aims to develop an image-based microsleep detection model using MediaPipe FaceMesh; the Eye Aspect Ratio (EAR) is used only in the labeling process that forms the basis of dataset creation. The main problem investigated is how to produce a classification model that can accurately distinguish between normal eye conditions and microsleep conditions using image data cropped from the eye region. To address this problem, the study proceeds through a series of stages: dataset construction; preprocessing in the form of image resizing, normalization, and quality improvement through data augmentation; and model training using the MobileNetV2 architecture with transfer learning and fine-tuning. The experimental results show that the data augmentation strategy has a significant effect on model performance, with the best configuration achieving a test accuracy of 87.54%, along with a precision of 88.64%, a recall (sensitivity) of 87.14%, and an F1-score of 87.34%. These findings indicate that an approach based on eye-region images combined with a convolutional neural network can deliver promising performance in detecting microsleep. This research is expected to provide a foundation for developing a more effective microsleep detection system that can be deployed in real-world environments.
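
The abstract describes labeling eye-state frames with EAR computed from MediaPipe FaceMesh landmarks. The sketch below illustrates how such labeling could be done; the landmark indices, the 0.21 closed-eye threshold, and the function names are illustrative assumptions, not values reported by the study.

```python
# Sketch of EAR-based frame labeling with MediaPipe FaceMesh (assumed setup).
import cv2
import numpy as np
import mediapipe as mp

# Commonly used FaceMesh indices for the six EAR points per eye (p1..p6);
# treated here as an assumption, not the study's exact choice.
LEFT_EYE = [33, 160, 158, 133, 153, 144]
RIGHT_EYE = [362, 385, 387, 263, 373, 380]
EAR_THRESHOLD = 0.21  # assumed closed-eye threshold, used only for labeling


def eye_aspect_ratio(pts: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for one eye."""
    vert1 = np.linalg.norm(pts[1] - pts[5])
    vert2 = np.linalg.norm(pts[2] - pts[4])
    horiz = np.linalg.norm(pts[0] - pts[3])
    return (vert1 + vert2) / (2.0 * horiz)


def label_frame(image_bgr: np.ndarray):
    """Return 'microsleep' or 'normal' for one frame, or None if no face is found."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as face_mesh:
        results = face_mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    h, w = image_bgr.shape[:2]
    lm = results.multi_face_landmarks[0].landmark
    to_xy = lambda idxs: np.array([[lm[i].x * w, lm[i].y * h] for i in idxs])
    ear = (eye_aspect_ratio(to_xy(LEFT_EYE)) +
           eye_aspect_ratio(to_xy(RIGHT_EYE))) / 2.0
    return "microsleep" if ear < EAR_THRESHOLD else "normal"
```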
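
The abstract also outlines training MobileNetV2 with transfer learning, fine-tuning, and data augmentation. The following Keras sketch shows one way such a pipeline could be assembled; the augmentation ranges, head layers, learning rates, and number of unfrozen layers are assumptions for illustration, not the paper's reported configuration.

```python
# Sketch of MobileNetV2 transfer learning + fine-tuning for binary
# normal-vs-microsleep classification (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input size matching MobileNetV2's default

# Assumed augmentation pipeline applied during training.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
])

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # stage 1: train only the classifier head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)  # scales to [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # normal vs. microsleep
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2 (fine-tuning): unfreeze the top of the backbone at a lower
# learning rate; the cutoff of 30 layers is an assumption.
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```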