Drowsiness poses significant risks in safety-critical activities such as driving, industrial operations, and online learning. While advanced deep learning models (e.g., CNN-LSTM hybrids) achieve high accuracy in driver drowsiness detection, they often demand substantial computational resources, limiting deployment on embedded or resource-constrained devices. This study addresses the gap in lightweight, real-time, non-invasive drowsiness detection by developing an embeddable library based on YOLOv12, an attention-centric single-stage detector known for balancing speed and accuracy. The model was trained on a custom dataset of 2312 video frame sequences (1011 "awake" and 1301 "drowsy", captured from varied angles under consistent lighting), augmented with standard techniques (e.g., brightness/contrast adjustments, flips, and rotations) to enhance generalization. It was evaluated in 80 real-time trials across multiple subjects, achieving an accuracy of 0.93, precision of 0.94, recall of 0.91, and F1-score of 0.93. The system first localizes the face with a bounding box and then classifies its state in real time, integrating eye and mouth aspect ratio cues. The main contribution is a proof-of-concept YOLOv12-based approach to non-invasive drowsiness monitoring that offers faster inference than heavier hybrid models, making it suitable for embedded applications (e.g., vehicle systems, meeting tools, or industrial safety). Limitations include residual sensitivity to extreme lighting and viewing angles as well as the modest dataset size; future work will expand the dataset, incorporate multimodal cues, and further evaluate robustness under diverse real-world conditions.
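The abstract refers to eye and mouth aspect ratio cues feeding the state classification; as a point of reference, the sketch below shows one common way such ratios are computed from 2D facial landmarks. The landmark layout, array shapes, and the illustrative eye-closure threshold are assumptions for exposition only, not the exact formulation used in this study.

```python
# Illustrative sketch only: eye aspect ratio (EAR) in the Soukupova & Cech formulation
# and one common mouth aspect ratio (MAR) variant. Landmark ordering and the ~0.2
# eye-closure threshold are assumptions, not this paper's pipeline.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks p1..p6 around one eye."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def mouth_aspect_ratio(mouth: np.ndarray) -> float:
    """mouth: (8, 2) array; three vertical gaps divided by twice the mouth width."""
    vertical = (np.linalg.norm(mouth[1] - mouth[7])
                + np.linalg.norm(mouth[2] - mouth[6])
                + np.linalg.norm(mouth[3] - mouth[5]))
    horizontal = np.linalg.norm(mouth[0] - mouth[4])
    return vertical / (2.0 * horizontal)

# Example: a nearly closed eye yields a low EAR, well under a typical ~0.2 threshold.
closed_eye = np.array([[0, 0], [2, 0.2], [4, 0.2], [6, 0], [4, -0.2], [2, -0.2]])
print(eye_aspect_ratio(closed_eye))  # ~0.067
```

In pipelines of this kind, the detector's face bounding box typically crops the region passed to a landmark estimator, and sustained low EAR or high MAR over consecutive frames is what signals a "drowsy" state rather than a single-frame value.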