Traffic accidents caused by driver fatigue and drowsiness remain a serious safety concern in many countries, including Indonesia. Various image-based drowsiness detection systems have been developed, yet many still rely on single-frame analysis and cannot capture the full temporal context. To address this issue, a system capable of detecting signs of drowsiness accurately and in real time is required. This study evaluates and compares the performance of Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) algorithms for a facial-feature-based drowsiness detection system. The dataset used is YawDD, which consists of videos of drivers yawning and in neutral conditions. Each video was decomposed into frames and analyzed using MediaPipe to extract facial landmarks. Two main features, Eye Aspect Ratio (EAR) and Mouth Opening Ratio (MOR), were used. Due to class imbalance, the SMOTE technique was applied to the minority class in the training data. The LSTM and BiLSTM models were compared under similar architecture configurations. The results show that BiLSTM outperformed LSTM, achieving an accuracy of 94.74% and an F1-score of 94.82%, compared to 92.98% accuracy and a 93.22% F1-score for LSTM. These findings demonstrate that the bidirectional sequence processing in BiLSTM is more effective at capturing the temporal patterns of drowsiness symptoms. This study contributes to the development of accurate and efficient computer vision-based drowsiness detection systems.
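
As a rough illustration of the model comparison described above, the sketch below builds an LSTM and a BiLSTM classifier over sequences of per-frame (EAR, MOR) features in Keras. The window length, layer sizes, and training call are assumptions chosen for illustration only, not the configuration used in this study.

```python
# Illustrative sketch (not the authors' code): LSTM vs. BiLSTM classifiers
# over sequences of two per-frame features (EAR, MOR).
# SEQ_LEN and layer sizes are assumptions, not values reported in the paper.
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES = 30, 2  # assumed window of 30 frames; features = (EAR, MOR)

def build_model(bidirectional: bool) -> models.Model:
    """Build a small sequence classifier; optionally wrap the LSTM bidirectionally."""
    recurrent = layers.LSTM(64)
    if bidirectional:
        recurrent = layers.Bidirectional(layers.LSTM(64))
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        recurrent,
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # drowsy vs. neutral
    ])

lstm_model = build_model(bidirectional=False)
bilstm_model = build_model(bidirectional=True)
for m in (lstm_model, bilstm_model):
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data with the assumed shapes, only to demonstrate the training call.
X = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
lstm_model.fit(X, y, epochs=1, verbose=0)
```

In this kind of setup, the only structural difference between the two models is the `Bidirectional` wrapper, which lets the recurrent layer read each feature window both forward and backward, which is consistent with the abstract's explanation for BiLSTM's higher accuracy.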