Juliana Johari
Universiti Teknologi MARA

Published: 2 Documents
Articles


Learning face similarities for face verification using hybrid convolutional neural networks Fadhlan Hafizhelmi Kamaru Zaman; Juliana Johari; Ahmad Ihsan Mohd Yassin
Indonesian Journal of Electrical Engineering and Computer Science Vol 16, No 3: December 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v16.i3.pp1333-1342

Abstract

Face verification is the task of determining whether two face images belong to the same identity. For unconstrained faces in the wild, this is very challenging: besides significant degradation due to large variations in pose, illumination, expression, aging, and occlusion, it must also cope with the large-scale, ever-expanding data needed to perform one-to-many recognition. In this paper, we propose a face verification method that learns face similarities using Convolutional Neural Networks (ConvNets). Instead of extracting features from each face image separately, our ConvNet model jointly extracts relational visual features from the two face images under comparison. We train four hybrid ConvNet models to learn to distinguish similarities between face pairs over four different face portions and join them at the top-layer classifier level. At that level we use a binary classifier to decide the similarity of a face pair, choosing among a conventional Multi-Layer Perceptron (MLP), Support Vector Machines (SVM), Naïve Bayes, and another ConvNet. Three face-pairing configurations are discussed in this paper. Results of experiments on the Labeled Faces in the Wild (LFW) and CelebA datasets indicate that our hybrid ConvNet increases face verification accuracy by as much as 27% compared with the individual ConvNet approach. We also found that the lateral face-pair configuration with an MLP top-layer classifier yields the best LFW test accuracy, 87.89%, on a very strict test protocol without any face alignment, which is on par with the state of the art. We also showed that our approach is more flexible in performing inference on out-of-sample data by testing the learned LFW and CelebA models on either dataset.
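The core idea of jointly extracting relational features from a face pair, rather than embedding each face separately, can be sketched in a toy NumPy example. This is an illustrative assumption of the mechanism only, with untrained random weights and hypothetical function names, not the paper's actual architecture: the two faces are stacked into one two-channel input so every filter responds to both faces at once, and a logistic top-layer classifier stands in for the MLP/SVM/Naïve Bayes/ConvNet options.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relational_features(face_a, face_b, kernels):
    """Jointly extract features from a stacked face pair (hypothetical sketch).

    Each two-slice kernel sees both faces at once, so its response describes
    the *relation* between the pair rather than either face alone."""
    pair = np.stack([face_a, face_b])              # (2, H, W) joint input
    feats = []
    for k in kernels:                              # k has shape (2, kh, kw)
        resp = conv2d(pair[0], k[0]) + conv2d(pair[1], k[1])
        feats.append(np.maximum(resp, 0.0).mean())  # ReLU + global pooling
    return np.array(feats)

def same_identity_score(face_a, face_b, kernels, w, b):
    """Top-layer binary classifier (logistic) on the relational features."""
    f = relational_features(face_a, face_b, kernels)
    return 1.0 / (1.0 + np.exp(-(f @ w + b)))

# Toy 16x16 "faces" and untrained random weights, just to show the data flow.
face1 = rng.standard_normal((16, 16))
face2 = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 2, 3, 3)) * 0.1
w, bias = rng.standard_normal(4), 0.0
score = same_identity_score(face1, face2, kernels, w, bias)
```

With trained weights, a score near 1 would indicate the same identity; here the output merely demonstrates that the pair is classified jointly rather than via two independent embeddings.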
Modelling and estimating trajectory points from RTK-GNSS based on an integrated modelling approach Ravenny Sandin Nahar; Kok Mun Ng; Fadhlan Hafizhelmi Kamaruzaman; Noorfadzli Abdul Razak; Juliana Johari
Indonesian Journal of Electrical Engineering and Computer Science Vol 34, No 1: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v34.i1.pp162-172

Abstract

Sparse Gaussian process regression (GPR) has been used to model trajectory data from the real-time kinematic global navigation satellite system (RTK-GNSS). However, upon scrutinizing the model residuals, the sparse GPR model fits the data poorly and exhibits correlated noise. This work addresses these issues by proposing an integrated modelling approach, GPR-LR-ARIMA, in which the sparse GPR is combined with linear regression with autoregressive integrated moving average errors (LR-ARIMA) to further enhance the description of the trajectory data. In this integrated approach, the trajectory points predicted by the GPR are further modelled by the LR-ARIMA. Simulation of GPR-LR-ARIMA on three sets of trajectory data indicated a better model fit, seen in normally distributed model residuals and symmetrically distributed scatter plots. The model also successfully eliminated correlated noise. GPR-LR-ARIMA outperformed both GPR and LR-ARIMA, improving the mean absolute error of two-dimensional positioning by up to 86%. GPR-LR-ARIMA thus contributes to enhanced positioning accuracy of dynamic GNSS measurements in localization and navigation systems, with a good model fit.
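The two-stage idea can be sketched in a minimal NumPy example: a plain (non-sparse) GP regression captures the smooth trajectory trend, and a simple least-squares AR(1) correction on its residuals stands in for the full LR-ARIMA stage. All names, kernel choices, and parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gpr_fit_predict(x, y, x_star, noise=1e-2):
    """Plain GP regression (the paper uses a sparse variant)."""
    K = rbf(x, x) + noise * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    return rbf(x_star, x) @ alpha

# Synthetic "trajectory": smooth path plus AR(1)-correlated noise,
# mimicking the correlated residuals reported for the GPR alone.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
noise = np.zeros_like(t)
for i in range(1, len(t)):
    noise[i] = 0.8 * noise[i - 1] + 0.05 * rng.standard_normal()
y = np.sin(t) + noise

# Stage 1: GP captures the smooth trend; residuals keep the correlated part.
trend = gpr_fit_predict(t, y, t)
resid = y - trend

# Stage 2: least-squares AR(1) on the residuals (a stand-in for LR-ARIMA),
# applied as a one-step-ahead correction to the GP prediction.
phi = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
corrected = trend[1:] + phi * resid[:-1]

mse_gp = np.mean((y[1:] - trend[1:]) ** 2)
mse_two_stage = np.mean((y[1:] - corrected) ** 2)
```

Because the AR(1) coefficient is the least-squares minimizer over the same residual sequence, the two-stage squared error can never exceed that of the GP stage alone, which is the mechanism behind the reported accuracy gains.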