Journal: International Journal of Electrical and Computer Engineering
Found 3 documents

Time series activity classification using gated recurrent units
Yi-Fei Tan; Xiaoning Guo; Soon-Chang Poh
International Journal of Electrical and Computer Engineering (IJECE), Vol. 11, No. 4, August 2021
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v11i4.pp3551-3558

Abstract

The elderly population is growing and is projected to outnumber the youth in the future, and much research on assisted living technology for the elderly has been carried out. One of the focus areas is activity monitoring of the elderly. The AReM dataset is a time series activity recognition dataset covering seven types of activities: bending 1, bending 2, cycling, lying, sitting, standing, and walking. In the original paper, the authors used a many-to-many recurrent neural network for activity recognition. Here, we introduce a time series classification method in which gated recurrent units with a many-to-one architecture are used for activity classification. The experimental results show an excellent accuracy of 97.14%.
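As a concrete illustration of the many-to-one architecture described above, below is a minimal sketch (not the authors' code) of a GRU classifier that consumes a whole activity window and emits a single label. The window length of 480 and the 6 RSS-derived features follow the AReM dataset's format; the GRU width and training settings are illustrative assumptions.

```python
# Minimal many-to-one GRU sketch for 7-class activity classification.
# Shapes follow AReM (480 timesteps, 6 RSS features); the GRU width,
# optimizer, and training settings are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES, NUM_CLASSES = 480, 6, 7

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, FEATURES)),
    # return_sequences=False keeps only the final hidden state,
    # which is what makes this a many-to-one architecture.
    layers.GRU(64, return_sequences=False),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data with the assumed shapes, just to show the training call.
x = np.random.rand(32, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

Setting return_sequences=False is the key difference from the many-to-many RNN of the original paper: only the final hidden state feeds the softmax, so each window yields exactly one activity label.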
Human activity recognition with self-attention
Yi-Fei Tan; Soon-Chang Poh; Chee-Pun Ooi; Wooi-Haw Tan
International Journal of Electrical and Computer Engineering (IJECE), Vol. 13, No. 2, April 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i2.pp2023-2029

Abstract

In this paper, a self-attention-based neural network architecture for human activity recognition is proposed. The dataset used was collected using smartphones. The contribution of this paper is the use of a multi-layer, multi-head self-attention neural network architecture for human activity recognition, compared against two strong baseline architectures: a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The dropout rate, positional encoding, and scaling factor were also investigated to find the best model. The results show that the proposed model achieves a test accuracy of 91.75%, which is comparable to both baseline models.
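As a rough sketch of the kind of architecture described, the code below stacks multi-head self-attention layers over a learned positional encoding. The window shape (128 timesteps, 9 channels, 6 classes, typical of smartphone HAR benchmarks), model width, head count, layer count, and dropout rate are all assumptions, not the paper's tuned values.

```python
# Sketch of a multi-layer, multi-head self-attention classifier for
# smartphone HAR. All hyper-parameters here are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, FEATURES, NUM_CLASSES = 128, 9, 6
D_MODEL, NUM_HEADS, NUM_LAYERS = 64, 4, 2

class PositionalEmbedding(layers.Layer):
    """Adds a learned positional embedding to the projected inputs."""
    def __init__(self, length, d_model, **kwargs):
        super().__init__(**kwargs)
        self.length = length
        self.pos_emb = layers.Embedding(input_dim=length, output_dim=d_model)

    def call(self, x):
        positions = tf.range(start=0, limit=self.length, delta=1)
        return x + self.pos_emb(positions)

inputs = layers.Input(shape=(TIMESTEPS, FEATURES))
x = layers.Dense(D_MODEL)(inputs)   # project sensor channels to model width
x = PositionalEmbedding(TIMESTEPS, D_MODEL)(x)

for _ in range(NUM_LAYERS):
    # Self-attention sub-layer with residual connection and layer norm.
    attn = layers.MultiHeadAttention(
        num_heads=NUM_HEADS, key_dim=D_MODEL // NUM_HEADS, dropout=0.1)(x, x)
    x = layers.LayerNormalization()(x + attn)
    # Position-wise feed-forward sub-layer, also residual.
    ff = layers.Dense(D_MODEL, activation="relu")(x)
    x = layers.LayerNormalization()(x + ff)

x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Keras's MultiHeadAttention applies the standard 1/√key_dim scaling internally; investigating an alternative scaling factor, as the abstract mentions, would require customizing the attention layer.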
Facial emotion recognition using deep learning detector and classifier
Ng Chin Kit; Chee-Pun Ooi; Wooi Haw Tan; Yi-Fei Tan; Soon-Nyean Cheong
International Journal of Electrical and Computer Engineering (IJECE), Vol. 13, No. 3, June 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i3.pp3375-3383

Abstract

Numerous research works have been put forward over the years to advance the field of facial expression recognition, which to this day is still considered a challenging task. The selection of image color space and the use of facial alignment as preprocessing steps may collectively have a significant impact on the accuracy and computational cost of facial emotion recognition, which is crucial for optimizing the speed-accuracy trade-off. This paper proposes a deep learning-based facial emotion recognition pipeline that can be used to predict the emotion of detected face regions in video sequences. Five well-known state-of-the-art convolutional neural network architectures are used to train the emotion classifier and to identify the network architecture that gives the best speed-accuracy trade-off. Two distinct facial emotion training datasets are prepared to investigate the effect of image color space and facial alignment on the performance of facial emotion recognition. Experimental results show that training a facial expression recognition model with grayscale, aligned facial images is preferable, as it offers better recognition rates with lower detection latency. The lightweight MobileNet_v1 is identified as the best-performing model, with WM = 0.75 (width multiplier) and RM = 160 (input resolution) as its hyper-parameters, achieving an overall accuracy of 86.42% on the test video dataset.
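For reference, a MobileNet_v1 backbone with the stated hyper-parameters can be instantiated directly in Keras. The sketch below shows one plausible setup; the classification head, the seven-class emotion set, and the grayscale channel handling are assumptions rather than the authors' exact pipeline.

```python
# MobileNet_v1 emotion classifier sketch with WM=0.75 (width multiplier)
# and RM=160 (input resolution), as reported in the abstract. The head,
# class count, and preprocessing are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7   # assumed; the abstract does not enumerate the classes

base = tf.keras.applications.MobileNet(
    input_shape=(160, 160, 3),   # RM = 160
    alpha=0.75,                  # WM = 0.75
    include_top=False,
    weights="imagenet",
    pooling="avg",
)

model = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

# Grayscale-aligned face crops can be replicated to three channels to
# match the ImageNet-pretrained backbone's expected input.
gray_face = np.random.rand(1, 160, 160, 1).astype("float32")
probs = model.predict(np.repeat(gray_face, 3, axis=-1), verbose=0)
```

In the full pipeline the abstract describes, a separate deep learning face detector would first localize face regions in each video frame; only the cropped, aligned faces would be passed to a classifier like this one.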