Low-light conditions continue to challenge real-time face recognition: dim illumination produces noisy, low-contrast images that degrade feature extraction. This study investigates how different preprocessing strategies affect the performance of a dlib-ResNet-based recognition system under such conditions. Two reference dataset sizes (33 and 1,000 images) were used to observe how reference variation affects embedding stability. Enhancement was applied either offline to the reference dataset or in real time to incoming video frames, and the two approaches were also tested in combination. The experiments show that offline preprocessing provides the most reliable improvement: enhancing the reference images raised the F1-score by 7.28% (small dataset) and 7.50% (large dataset) without reducing processing speed, indicating that clearer embeddings at registration lead to more stable matching. Real-time preprocessing, by contrast, produced inconsistent results: slight gains appeared in specific cases, but the added computation and occasional distortion of facial structure reduced accuracy in others. The combined method performed worst, with a 33.71% decline on the large dataset, suggesting that excessive modification disrupts structural consistency between reference and test images. Overall, the results highlight the importance of preserving coherent facial features rather than aggressively adjusting every frame. Offline enhancement is the most practical strategy for low-light deployments, whereas real-time enhancement should be used selectively. Future work may explore adaptive illumination adjustment that tunes enhancement parameters automatically to match varying lighting conditions.
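The abstract does not name the specific enhancement method, so as an illustration only, the sketch below shows one common way offline reference enhancement could look: gamma correction to lift shadows followed by histogram equalization for contrast, run once per reference image at registration rather than per video frame. The function names and parameter values are assumptions, not the study's actual pipeline.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Brighten a dim uint8 grayscale image; gamma < 1 lifts shadows."""
    norm = img.astype(np.float64) / 255.0
    return np.clip((norm ** gamma) * 255.0, 0, 255).astype(np.uint8)

def hist_equalize(img):
    """Global histogram equalization for a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map the cumulative distribution onto the full 0-255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def enhance_reference(img):
    """Hypothetical offline pipeline: applied once at registration,
    so the per-frame recognition path pays no extra cost."""
    return hist_equalize(gamma_correct(img))

# Simulate a dim, low-contrast reference crop.
rng = np.random.default_rng(0)
dim = rng.integers(10, 60, size=(64, 64)).astype(np.uint8)
enhanced = enhance_reference(dim)
```

Because this work runs offline, the enhanced images can then be fed to the embedding model (here, dlib's ResNet descriptor) exactly as the originals would be, which matches the abstract's finding that registration-time enhancement improves matching without slowing inference.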
Copyright © 2025