In human social interaction, the voice is one of the main channels through which emotional and mental states are expressed. Speech is vocally produced language, organized into word sequences, whose patterns can convey the speaker's psychological condition. These patterns carry distinctive characteristics that can also be exploited in biometric identification. Spectrum-image visualization techniques are employed to represent the speech signal. This study aims to identify emotion types in the human voice using a combination of multi-spectrum features: the Hilbert spectrum and the cochleagram. The Hilbert spectrum is produced by the Hilbert-Huang Transform (HHT), which handles the non-linear, non-stationary emotional speech signal by decomposing it into intrinsic mode functions and extracting instantaneous frequency information. The cochleagram, by imitating the functioning of the outer and middle ear and the cochlea, decomposes the emotional speech signal into frequency bands along a cochlea-like continuum. The two spectrum images are processed with a Convolutional Neural Network (CNN), which is well suited to image data because it loosely mimics the mechanism of the human retina, and with a Long Short-Term Memory (LSTM) network. In experiments on three public speech-emotion datasets, each containing the same eight emotion classes, the approach achieved an accuracy of 90.97% with the CNN and 80.62% with the LSTM.
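The abstract does not give implementation details, but a minimal sketch may help make the Hilbert-spectrum step concrete. The function below assumes the third-party PyEMD package (PyPI: EMD-signal) for empirical mode decomposition; the function name and parameters are illustrative, not the authors' code.

```python
# Sketch: Hilbert spectrum ingredients via HHT (EMD + Hilbert transform).
# Assumes the PyEMD package; names and structure are illustrative only.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed dependency (pip install EMD-signal)

def hilbert_spectrum(signal, fs):
    """Decompose a speech signal into intrinsic mode functions (IMFs)
    and derive per-IMF instantaneous amplitude and frequency."""
    imfs = EMD().emd(signal)                  # non-linear, non-stationary decomposition
    amps, freqs = [], []
    for imf in imfs:
        analytic = hilbert(imf)               # analytic signal via Hilbert transform
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency
        amps.append(np.abs(analytic)[1:])     # trim to align with inst_freq length
        freqs.append(inst_freq)
    return np.array(amps), np.array(freqs)
```

Similarly hedged, the following sketch shows one plausible way to feed the two spectrum images to a CNN with eight output classes, as the abstract describes; the input shape, layer sizes, and fusion method are assumptions, since the actual architecture is not specified here.

```python
# Hypothetical two-branch CNN combining Hilbert-spectrum and cochleagram
# images; all shapes and layer sizes are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (128, 128, 3)   # assumed spectrum-image size
NUM_CLASSES = 8             # eight emotion classes, per the abstract

def conv_branch(name):
    """Small convolutional feature extractor for one spectrum image."""
    inp = layers.Input(shape=IMG_SHAPE, name=name)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    return inp, x

hilbert_in, hilbert_feat = conv_branch("hilbert_spectrum")
cochlea_in, cochlea_feat = conv_branch("cochleagram")

# Combine the two feature streams and classify into emotion classes.
merged = layers.concatenate([hilbert_feat, cochlea_feat])
merged = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = Model(inputs=[hilbert_in, cochlea_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Concatenating the two flattened feature streams is only one possible fusion strategy; the paper's actual combination of the Hilbert spectrum and cochleagram features may differ.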