This study develops a facial expression recognition system based on Facial Action Unit (AU) data using a Bidirectional Long Short-Term Memory (BiLSTM) model. The dataset consists of AU data provided by the research supervisor and sourced from DAIC-WOZ (USC Institute for Creative Technologies), a multimodal corpus containing AU values extracted from human-interaction videos; a total of 188 AU files were used. Because the dataset lacks inherent emotion labels, initial labels were generated using Facial Action Coding System (FACS)-based rules and served as pseudo-labels to bootstrap training of the BiLSTM model. The BiLSTM acts as a temporal smoother that reduces the noise and label inconsistencies common in frame-by-frame rule-based approaches; the trained model then performs inference on the same data to produce final labels with improved temporal stability. Evaluation measured the model's consistency with the FACS rules and included a qualitative analysis of the temporal stability of the generated labels. Data were processed into 30-frame sequences with a 1-frame sliding window to capture expression dynamics, and the BiLSTM was trained with two hidden layers and dropout regularization. The model achieved 96.61% consistency with the FACS rules, with high agreement across all emotion classes: anger (99.11%), disgust (97.98%), fear (94.08%), happiness (99.29%), neutral (96.42%), sadness (98.31%), and surprise (99.16%). Qualitative analysis showed that the model reduced frame-by-frame label fluctuations by 73% relative to the pure rule-based approach, producing more stable and realistic emotion segmentation.
These results demonstrate that combining FACS-based labeling with a BiLSTM model can produce a temporally consistent automated labeling system capable of accelerating the creation of labeled datasets, although validation against human-annotated ground truth remains necessary future work.
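The 30-frame sequence preparation with a 1-frame sliding window described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name, the use of NumPy, and the 17-AU feature dimension in the example are assumptions.

```python
import numpy as np

def make_sequences(au_frames: np.ndarray, seq_len: int = 30, stride: int = 1) -> np.ndarray:
    """Slice a (num_frames, num_aus) array of per-frame AU values into
    overlapping windows of length seq_len, advancing by stride frames.

    Returns an array of shape (num_windows, seq_len, num_aus); for a
    stride of 1, num_windows = num_frames - seq_len + 1.
    """
    n = au_frames.shape[0]
    starts = range(0, n - seq_len + 1, stride)
    return np.stack([au_frames[s:s + seq_len] for s in starts])

# Example: 100 frames, each with 17 AU intensity values (dimension assumed)
frames = np.random.rand(100, 17)
seqs = make_sequences(frames)  # shape (71, 30, 17)
```

Each resulting window would then be paired with a FACS-rule pseudo-label and fed to the BiLSTM; the 1-frame stride yields one prediction per frame position, which is what allows the model to smooth label fluctuations at frame granularity.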