Kingsley, Akputu Oryina
Unknown Affiliation

Published: 1 Document

Recognizing facial emotions for educational learning settings
Kingsley, Akputu Oryina; Inyang, Udoinyang G.; Msugh, Ortil; Mughal, Fiza T.; Usoro, Abel
IAES International Journal of Robotics and Automation (IJRA) Vol 11, No 1: March 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijra.v11i1.pp21-32

Abstract

Educational learning settings exploit cognitive factors as ultimate feedback to enhance personalization in teaching and learning. Besides cognition, however, the emotions of the learner, which reflect the affective learning dimension, also play an important role in the learning process. These emotions can be recognized by tracking explicit behaviors of the learner, such as facial or vocal expressions. Despite reasonable efforts to recognize emotions, the research community is currently constrained by two issues: i) the lack of efficient feature descriptors to accurately represent and subsequently recognize (detect) the emotions of the learner; and ii) the lack of contextual datasets for benchmarking the performance of emotion recognizers in learning-specific scenarios, resulting in poor generalization. This paper presents a facial emotion recognition technique (FERT). The FERT is realized through the results of a preliminary analysis across various facial feature descriptors. Emotions are classified using the multiple kernel learning (MKL) method, which reportedly possesses good merits. A contextually relevant simulated learning emotion (SLE) dataset is introduced to validate the FERT scheme. The recognition performance of the FERT scheme generalizes to 90.3% on the SLE dataset. On the more popular but non-contextual extended Cohn-Kanade (CK+) and acted facial expressions in the wild (AFEW) datasets, the scheme achieved 90.0% and 82.8%, respectively. A test of the null hypothesis that there is no significant difference in the performance accuracies of the descriptors proved otherwise (χ² = 14.619, df = 5, p = 0.01212) for a model considered at a 95% confidence level.
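As a quick sanity check on the reported statistic, the upper-tail p-value of a chi-square statistic with 5 degrees of freedom can be computed in closed form from the standard recurrence Q(x; k+2) = Q(x; k) + (x/2)^(k/2) e^(-x/2) / Γ(k/2 + 1), starting at Q(x; 1) = erfc(√(x/2)). The sketch below is illustrative and not part of the paper; the function name is hypothetical.

```python
import math

def chi2_sf_df5(x):
    """Survival function (upper-tail p-value) of a chi-square
    distribution with 5 degrees of freedom, in closed form.

    Builds up from df = 1 via the recurrence
    Q(x; k+2) = Q(x; k) + (x/2)^(k/2) * exp(-x/2) / Gamma(k/2 + 1).
    """
    h = x / 2.0
    q1 = h and math.erfc(math.sqrt(h))                    # df = 1
    q3 = q1 + math.exp(-h) * math.sqrt(2.0 * x / math.pi) # df = 3, Gamma(3/2) term
    # df = 5 step: Gamma(5/2) = 3 * sqrt(pi) / 4
    q5 = q3 + h ** 1.5 * math.exp(-h) * 4.0 / (3.0 * math.sqrt(math.pi))
    return q5

p = chi2_sf_df5(14.619)
print(round(p, 5))  # ≈ 0.01212, consistent with the reported p-value
```

The computed tail probability agrees with the abstract's reported p = 0.01212 for χ² = 14.619 at df = 5, which falls between the 5% (11.07) and 1% (15.09) critical values of the chi-square distribution with 5 degrees of freedom.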