Articles

Found 3 Documents
Journal: International Journal of Electrical and Computer Engineering

Wearable sensor-based human activity recognition with ensemble learning: a comparison study
Yee Jia Luwe; Chin Poo Lee; Kian Ming Lim
International Journal of Electrical and Computer Engineering (IJECE), Vol. 13, No. 4, August 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i4.pp4029-4040

Abstract

The spectacular growth of wearable sensors has made a key contribution to the field of human activity recognition. Owing to its effectiveness and versatility in fields such as smart homes and medical care, human activity recognition has always been an appealing research topic in artificial intelligence. From this perspective, many existing works make use of accelerometer and gyroscope sensor data to recognize human activities. This paper presents a comparative study of ensemble learning methods for human activity recognition: random forest, adaptive boosting, gradient boosting, extreme gradient boosting, and light gradient boosting machine (LightGBM). Among the methods in comparison, LightGBM and random forest demonstrate the best performance. The experimental results reveal that LightGBM yields the highest accuracy of 94.50% on the UCI-HAR dataset and 100% on the single accelerometer dataset, while random forest records the highest accuracy of 93.41% on the MotionSense dataset.
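
As a rough illustration of the comparison described in this abstract (not the authors' implementation), the following Python sketch trains the five ensemble methods side by side. The synthetic features from make_classification are a stand-in for windowed accelerometer/gyroscope features extracted from a dataset such as UCI-HAR, and all hyperparameter values are illustrative assumptions, not the paper's settings.

# Hedged sketch of the ensemble comparison; synthetic data stands in for
# real accelerometer/gyroscope features, and all settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# Stand-in for windowed sensor features and activity labels (e.g., UCI-HAR).
X, y = make_classification(n_samples=2000, n_features=60, n_informative=20,
                           n_classes=6, n_clusters_per_class=1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "adaptive boosting": AdaBoostClassifier(n_estimators=300, random_state=42),
    "gradient boosting": GradientBoostingClassifier(n_estimators=300, random_state=42),
    "extreme gradient boosting": XGBClassifier(n_estimators=300, random_state=42),
    "LightGBM": LGBMClassifier(n_estimators=300, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)                          # train each ensemble
    acc = accuracy_score(y_test, model.predict(X_test))  # compare test accuracy
    print(f"{name}: {acc:.4f}")
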
Speech emotion recognition with light gradient boosting decision trees machine
Kah Liang Ong; Chin Poo Lee; Heng Siong Lim; Kian Ming Lim
International Journal of Electrical and Computer Engineering (IJECE), Vol. 13, No. 4, August 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i4.pp4020-4028

Abstract

Speech emotion recognition aims to identify the emotion expressed in speech by analyzing the audio signals. In this work, data augmentation is first performed on the audio samples to increase their number for better model learning. The audio samples are then comprehensively encoded as frequency-domain and temporal-domain features. For classification, a light gradient boosting machine (LightGBM) is leveraged, and its hyperparameters are tuned to determine the optimal settings. As speech emotion recognition datasets are imbalanced, the class weights are regulated to be inversely proportional to the sample distribution, so that minority classes are assigned higher weights. The experimental results demonstrate that the proposed method outperforms the state-of-the-art methods with 84.91% accuracy on the Emo-DB dataset, 67.72% on the Ryerson audio-visual database of emotional speech and song (RAVDESS) dataset, and 62.94% on the interactive emotional dyadic motion capture (IEMOCAP) dataset.
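
The class-weight regulation this abstract describes can be sketched as follows, under assumptions: the feature matrix here is synthetic and deliberately imbalanced, standing in for the frequency- and temporal-domain audio features, and the small search grid is an assumption rather than the paper's actual search space.

# Hedged sketch: class weights inversely proportional to the sample
# distribution, combined with grid-search hyperparameter tuning of LightGBM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from lightgbm import LGBMClassifier

# Stand-in for extracted audio features (e.g., MFCC statistics) with an
# imbalanced emotion-class distribution; replace with the real feature matrix.
X, y = make_classification(n_samples=1500, n_features=40, n_informative=15,
                           n_classes=4, n_clusters_per_class=1,
                           weights=[0.55, 0.25, 0.12, 0.08], random_state=0)

# w_c = N / (n_classes * count_c): minority classes receive higher weights.
classes, counts = np.unique(y, return_counts=True)
class_weight = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}

# Illustrative grid; the paper's actual search space is not given.
param_grid = {"num_leaves": [31, 63], "learning_rate": [0.05, 0.1],
              "n_estimators": [200, 400]}
search = GridSearchCV(LGBMClassifier(class_weight=class_weight),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
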
Three-dimensional shape generation via variational autoencoder generative adversarial network with signed distance function
Ebenezer Akinyemi Ajayi; Kian Ming Lim; Siew-Chin Chong; Chin Poo Lee
International Journal of Electrical and Computer Engineering (IJECE), Vol. 13, No. 4, August 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i4.pp4009-4019

Abstract

Mesh-based three-dimensional (3D) shape generation from a two-dimensional (2D) image using a convolutional neural network (CNN) framework is an open problem in the computer graphics and vision domains. Most existing CNN-based frameworks lack robust algorithms that scale well without combining different shape parts, and most lack 3D data representations that fit into a CNN without modification while still producing high-quality 3D shapes. This paper presents an approach that integrates a variational autoencoder (VAE) and a generative adversarial network (GAN), called the 3D variational autoencoder signed distance function GAN (3D-VAE-SDFGAN), to create a 3D shape from a 2D image with considerably improved scalability and visual quality. The proposed method feeds only a single 2D image into the network to produce a mesh-based 3D shape. The network encodes the 2D image of the 3D object into a latent representation, from which an implicit surface representation of the corresponding 3D object is generated. A signed distance function (SDF) is employed to preserve the object's inside-outside information in the implicit surface representation, and polygon mesh surfaces are then produced using the marching cubes algorithm. The ShapeNet dataset was used in the experiments to evaluate the proposed 3D-VAE-SDFGAN. The experimental results show that 3D-VAE-SDFGAN outperforms other state-of-the-art models.
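
The final meshing step this abstract describes is easy to illustrate: marching cubes extracts the zero level set of a signed distance volume as a triangle mesh. In this minimal sketch, an analytic sphere SDF stands in for the output of the 3D-VAE-SDFGAN generator, which is not reproduced here.

# Hedged sketch of SDF-to-mesh extraction via marching cubes; a toy sphere
# SDF substitutes for the network's predicted signed distance volume
# (negative inside the object, positive outside).
import numpy as np
from skimage import measure

# Sample a sphere of radius 0.5 on a 64^3 voxel grid over [-1, 1]^3.
grid = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf_volume = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero isosurface as a polygon mesh: vertices, triangular
# faces, per-vertex normals, and interpolated values.
verts, faces, normals, values = measure.marching_cubes(sdf_volume, level=0.0)
print(verts.shape, faces.shape)  # mesh vertices and triangle indices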