Babies are not yet able to communicate the pain they experience; instead, they cry when in pain. With the rapid development of computer vision technologies in recent years, many researchers have tried to recognize pain from babies' expressions using machine learning and image processing. In this paper, a Deep Convolutional Neural Network (DCNN) Autoencoder and a Long Short-Term Memory (LSTM) network are used to detect crying and pain level from baby facial expressions in video. The DCNN Autoencoder extracts latent features from a single frame of the baby's face. Sequences of extracted latent features are then fed to the LSTM so that the pain level and crying can be recognized. Face detection and facial landmark detection are also used to frontalize the baby's face image before it is processed by the DCNN Autoencoder. Testing of the DCNN Autoencoder shows that the best architecture uses three convolutional layers and three transposed convolutional layers, while the best LSTM classifier uses sequences of four frames.
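To make the described pipeline concrete, the following is a minimal sketch in PyTorch of an autoencoder with three convolutional and three transposed convolutional layers feeding an LSTM classifier over four-frame sequences. The filter counts, the 64x64 input size, the latent dimension, and the number of output classes are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Three convolutional layers encode a frontalized face frame into a
    latent vector; three transposed convolutional layers reconstruct it."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class PainLSTM(nn.Module):
    """Classifies a sequence of latent vectors (four frames in the best
    model) into cry / pain-level classes."""
    def __init__(self, latent_dim=128, hidden=64, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, z_seq):              # z_seq: (batch, 4, latent_dim)
        _, (h_n, _) = self.lstm(z_seq)
        return self.head(h_n[-1])          # logits: (batch, num_classes)

# Example: a batch of two 4-frame sequences of 64x64 frontalized face crops.
frames = torch.rand(2, 4, 3, 64, 64)
ae, clf = ConvAutoencoder(), PainLSTM()
_, z = ae(frames.flatten(0, 1))            # encode each frame independently
logits = clf(z.view(2, 4, -1))             # classify each 4-frame sequence
```

In this arrangement, the autoencoder would be trained first on frame reconstruction, after which its encoder supplies the latent sequences consumed by the LSTM classifier.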