Hamimah Ujir
Universiti Malaysia Sarawak

Published: 7 Documents

Teaching workload in 21st century higher education learning setting Hamimah Ujir; Shanti Faridah Salleh; Ade Syaheda Wani Marzuki; Hashimatul Fatma Hashim; Aidil Azli Alias
International Journal of Evaluation and Research in Education (IJERE) Vol 9, No 1: March 2020
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijere.v9i1.20419

Abstract

A standard equation for calculating teaching workload in the previous academic setting includes only the contact hours with students through lectures, tutorials, laboratory sessions and in-person consultation (e.g. one-to-one final-year project consultation). This paper discusses teaching workload factors in the current higher-education setting. A teaching workload equation that includes all teaching and learning strategies in the 21st-century higher-education learning setting is needed. Scrutinizing every single parameter that accounts for teaching and learning is a challenging task for academic administrators. In this work, we discuss the parameters that are significant in teaching workload calculation: the conventional in-person contact with students, the type of delivery, the type of assessment, and the preparation of materials for flipped classrooms as well as MOOCs, to name a few. Teaching workload also affects teaching quality; from the academics' perception, a higher workload means lower-quality teaching.
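A workload equation of the kind the abstract calls for can be sketched as a weighted sum over hours spent on each activity. The activity names and weights below are illustrative assumptions, not the equation proposed in the paper.

```python
# Hypothetical activity weights; the paper argues such a table must cover
# 21st-century activities (flipped classroom, MOOC prep), not only contact hours.
LOAD_WEIGHTS = {
    "lecture": 1.0,        # contact hours, counted in full
    "tutorial": 0.5,
    "laboratory": 0.5,
    "consultation": 0.25,  # one-to-one final-year project meetings
    "flipped_prep": 0.75,  # material preparation for flipped classroom
    "mooc_prep": 0.75,     # material preparation for MOOC delivery
    "assessment": 0.5,     # marking and assessment design
}

def teaching_workload(hours: dict) -> float:
    """Weighted sum of hours spent on each teaching activity."""
    return sum(LOAD_WEIGHTS[k] * h for k, h in hours.items())

load = teaching_workload({"lecture": 30, "tutorial": 10, "mooc_prep": 8})
```

With the weights above, 30 lecture hours, 10 tutorial hours and 8 hours of MOOC preparation yield a load of 41.0 equivalent hours.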
Customer’s spontaneous facial expression recognition Golam Morshed; Hamimah Ujir; Irwandi Hipiny
Indonesian Journal of Electrical Engineering and Computer Science Vol 22, No 3: June 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v22.i3.pp1436-1445

Abstract

In the field of consumer science, customer facial expression is often categorized as either negative or positive. A customer who displays negative emotion toward a specific product is most likely rejecting it, while a customer with positive emotion is more likely to purchase it. To observe customer emotion, many researchers have studied different perspectives and methodologies to obtain high-accuracy results. This paper aims to recognize customers' spontaneous expressions while they observe certain products. We have developed a customer service system using a convolutional neural network (CNN) trained to detect three types of facial expression, i.e. happy, sad, and neutral. Facial features are extracted together with their histograms of gradients using a sliding window. The results are compared with existing works, showing an average success rate of 82.9%.
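The feature-extraction step described above (histograms of gradients over sliding windows) can be sketched in simplified form. The cell size, bin count and window step below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gradient_histogram(patch: np.ndarray, bins: int = 9) -> np.ndarray:
    """Magnitude-weighted histogram of gradient orientations for one
    patch (a simplified HOG cell), normalized to sum to 1."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def sliding_windows(img: np.ndarray, size: int = 8, step: int = 4):
    """Yield square patches scanned across a greyscale image."""
    h, w = img.shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            yield img[y:y + size, x:x + size]

# A vertical step edge: all gradient energy lies at orientation 0 degrees,
# so the first histogram bin dominates.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
descriptor = np.concatenate([gradient_histogram(p)
                             for p in sliding_windows(img)])
```

In a full pipeline such per-window descriptors would feed the CNN classifier; here the 16x16 test image yields 9 windows of 9 bins each.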
The analysis of facial feature deformation using optical flow algorithm Dayang Nur Zulhijah Awang Jesemi; Hamimah Ujir; Irwandi Hipiny; Sarah Flora Samson Juan
Indonesian Journal of Electrical Engineering and Computer Science Vol 15, No 2: August 2019
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v15.i2.pp769-777

Abstract

Facial features deform according to the intended facial expression. Specific facial features are associated with specific facial expressions, e.g. happiness involves deformation of the mouth. This paper presents a study of facial feature deformation for each facial expression using an optical flow algorithm, with the face segmented into three regions of interest. The deformation of facial features shows the relation between facial features and facial expressions. Based on the experiments, the deformations of the eyes and mouth are significant in all expressions except happy; for the happy expression, the cheeks and mouth are the significant regions. This work also suggests that the intensity of different facial features varies in how it contributes to recognizing different facial expression intensities. The maximum magnitude across all expressions is shown by the mouth for the surprise expression, at 9×10⁻⁴, while the minimum magnitude is shown by the mouth for the angry expression, at 0.4×10⁻⁴.
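Given a dense optical flow field, the per-region magnitude comparison described above reduces to averaging flow magnitudes inside each region of interest. The region boxes and the synthetic flow below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def region_mean_magnitude(flow: np.ndarray, regions: dict) -> dict:
    """Mean optical-flow magnitude inside each rectangular region.

    flow    : (H, W, 2) array of per-pixel (dx, dy) displacements,
              e.g. as produced by a dense optical flow algorithm.
    regions : name -> (y0, y1, x0, x1) bounding box, standing in for
              the eye / cheek / mouth regions of interest.
    """
    mag = np.hypot(flow[..., 0], flow[..., 1])
    return {name: float(mag[y0:y1, x0:x1].mean())
            for name, (y0, y1, x0, x1) in regions.items()}

# Synthetic example: only the "mouth" box moves, by (3, 4) pixels,
# giving it a mean magnitude of 5 while the "eyes" box stays at 0.
flow = np.zeros((100, 100, 2))
flow[60:80, 30:70] = (3.0, 4.0)
regions = {"eyes": (20, 40, 20, 80), "mouth": (60, 80, 30, 70)}
mags = region_mean_magnitude(flow, regions)
```

Comparing such per-region means across expressions is what lets the study identify which features deform significantly for each expression.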
Performance evaluation of SIFT against common image deformations on iban plaited mat motif images Silvia Joseph; Irwandi Hipiny; Hamimah Ujir; Sarah Flora Samson Juan; Jacey-Lynn Minoi
Indonesian Journal of Electrical Engineering and Computer Science Vol 23, No 3: September 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v23.i3.pp1470-1477

Abstract

The decorative plaited mat is one of many examples of the rich plait work often seen on Borneo handicraft products. The plaited mats are decorated with simple and complex motif designs; each has its own special meaning and taboos. The motif designs reflect the environment and the traditional beliefs of the Iban community. In line with efforts by UNESCO and the Sarawak Government, digitization and the use of IR4.0 technologies to preserve and promote this cultural heritage are encouraged. Towards this end, we present a novel image dataset containing 10 Iban plaited mat motif classes. The plaited mat motifs are made of diagonal and symmetrical shapes, as well as geometric and non-geometric patterns. Classification accuracy using scale-invariant feature transform (SIFT) features was evaluated against six common image deformations: zoom+rotation, viewpoint, image blur, JPEG compression, scale and illumination, across multiple threshold values. Varying degrees of each deformation were applied to a digitally cleaned (and cropped) image of each mat motif class. We used random sample consensus (RANSAC) to remove outliers from the noisy SIFT matching results. The optimal threshold value is 2.0e-2, with a reported 100.0% matching accuracy for the scale change and zoom+rotation sets.
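The RANSAC outlier-removal step can be sketched as follows. For brevity this sketch fits a pure-translation model to matched keypoint coordinates, a simplification of the similarity or homography models typically fitted to SIFT matches; the point counts and thresholds are illustrative.

```python
import numpy as np

def ransac_translation(src, dst, thresh=3.0, iters=200, seed=0):
    """Remove outliers from noisy keypoint matches with RANSAC,
    assuming a pure-translation model between the two images.

    src, dst : (N, 2) arrays of matched keypoint coordinates.
    Returns the estimated (dx, dy) shift and a boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_t, best_mask = np.zeros(2), np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))          # minimal sample: one match
        t = dst[i] - src[i]                 # candidate translation
        err = np.linalg.norm(dst - (src + t), axis=1)
        mask = err < thresh                 # matches consistent with t
        if mask.sum() > best_mask.sum():
            best_t, best_mask = t, mask
    # refit on all inliers for the final estimate
    best_t = (dst[best_mask] - src[best_mask]).mean(axis=0)
    return best_t, best_mask

# Synthetic matches: 30 points shifted by (10, -5), with the first
# five corrupted to simulate gross SIFT mismatches.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, (30, 2))
dst = src + np.array([10.0, -5.0])
dst[:5] += rng.uniform(50.0, 80.0, (5, 2))
shift, inliers = ransac_translation(src, dst)
```

The surviving inlier count (relative to the match total) is the kind of score the evaluation thresholds in such a matching-accuracy study.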
Who danced better? Ranked TikTok dance video dataset and pairwise action quality assessment method Irwandi Hipiny; Hamimah Ujir; Aidil Azli Alias; Musdi Shanat; Mohamad Khairi Ishak
International Journal of Advances in Intelligent Informatics Vol 9, No 1 (2023): March 2023
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v9i1.919

Abstract

Video-based action quality assessment (AQA) is a non-trivial task due to the subtle visual differences between data produced by experts and non-experts. Current methods are extended from the action recognition domain, where most are based on temporal pattern matching. AQA has additional requirements, as order and tempo matter for rating the quality of an action. We present a novel dataset of ranked TikTok dance videos and a pairwise AQA method for predicting which video of a same-label pair was sourced from the better dancer. Exhaustive pairings of same-label videos were randomly assigned to 100 human annotators, ultimately producing a ranked list per label category. Our method relies on successful detection of the subject's 2D pose in successive query frames, where the order and tempo of actions are encoded in a produced string sequence. The detected 2D pose returns a top-matching visual word from a codebook to represent the current frame. Given a same-label pair, we generate a string of concatenated visual words for each video. By computing the edit distance between each string and the Gold Standard's (i.e., the top-ranked video(s) for that label category), we declare the video with the lower score the winner. The pairwise AQA method is implemented in two schemes, i.e., with and without text compression. Although the average precision for both schemes over 12 label categories is low, at 0.45 with text compression and 0.48 without, precision values for several label categories are comparable to past methods' (median: 0.47, max: 0.66).
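The winner-declaration step can be sketched with a plain Levenshtein edit distance over visual-word strings. Representing each visual word as a single character is an illustrative simplification of the codebook encoding described above.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two visual-word strings,
    computed with a rolling one-row dynamic programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def better_dancer(seq_a: str, seq_b: str, gold: str) -> str:
    """Declare the video whose visual-word string is closer to the
    Gold Standard's as the winner of a same-label pair."""
    da, db = edit_distance(seq_a, gold), edit_distance(seq_b, gold)
    return "A" if da < db else "B" if db < da else "tie"
```

For example, with a Gold Standard string "abcf", candidate "abcd" (distance 1) beats candidate "abxy" (distance 2), so video A wins the pair.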
Iban plaited mat motif classification with adaptive smoothing Silvia Joseph; Irwandi Hipiny; Hamimah Ujir
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 12, No 2: June 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v12.i2.pp840-850

Abstract

Decorative mats plaited by the Iban communities in Borneo contain motifs that reflect their traditional beliefs. Each motif has its own special meaning and taboos. A typical mat motif contains multiple smaller patterns surrounding the main motif, which is likely to cause misclassification. We introduce a classification framework with adaptive smoothing to remove smaller features whilst retaining larger (and discriminative) image structures. A Canny filter and the probabilistic Hough transform are applied gradually to a clean greyscale image until a threshold value pertaining to the image's structural information is reached. Morphological dilation is then applied to improve the appearance of the retained edges. The resulting image is described using binary robust invariant scalable keypoints (BRISK) features with random sample consensus (RANSAC). We report the classification accuracy against six common image deformations at incremental degrees: scale+rotation, viewpoint, image blur, joint photographic experts group (JPEG) compression, scale and illumination. From our sensitivity analysis, we found the optimal threshold for adaptive smoothing to be 75.0%. The optimal scheme obtained 100.0% accuracy for the JPEG compression, illumination, and viewpoint sets. Using adaptive smoothing, we achieved an average increase in accuracy of 11.0% compared to the baseline.
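The "smooth until a structural threshold is reached" loop can be sketched with simple stand-ins: repeated box blurring in place of the Canny + probabilistic Hough pipeline, and a strong-gradient pixel count in place of the paper's structural-information measure. All of these substitutions, and the numbers below, are illustrative.

```python
import numpy as np

def edge_ratio(img: np.ndarray, grad_thresh: float = 0.1) -> float:
    """Fraction of pixels with strong gradient magnitude, a crude
    stand-in for counting Canny edge pixels."""
    gy, gx = np.gradient(img)
    return float((np.hypot(gx, gy) > grad_thresh).mean())

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 box filter with edge padding (one smoothing step)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def adaptive_smooth(img: np.ndarray, keep: float = 0.75, max_steps: int = 20):
    """Blur until the retained edge content falls to `keep` times the
    original, removing fine texture while keeping larger structures.
    `keep=0.75` mirrors the 75.0% optimal threshold reported above."""
    target = keep * edge_ratio(img)
    out, steps = img.astype(float), 0
    while edge_ratio(out) > target and steps < max_steps:
        out, steps = box_blur(out), steps + 1
    return out, steps

# Fine uniform noise has many strong gradients; a few smoothing steps
# suffice to bring its edge content under the 75% target.
rng = np.random.default_rng(0)
noisy = rng.uniform(0.0, 1.0, (32, 32))
smoothed, steps = adaptive_smooth(noisy, keep=0.75)
```

In the paper's framework, each iteration would re-run Canny and the Hough transform rather than a blur, but the stopping logic is the same: smooth only as far as the structural threshold allows.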
Classification of dances using AlexNet, ResNet18 and SqueezeNet1_0 Khalif Amir Zakry; Irwandi Hipiny; Hamimah Ujir
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 12, No 2: June 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v12.i2.pp602-609

Abstract

Dancing is an art form of creative expression based on movement. Dancing comprises varying styles, pacing and composition to convey an artist's expression. Thus, the classification of a dance to a certain genre or type depends on how similar it is to what is generally understood to be the specific movements of that dance type. This presents a problem for new dancers assessing whether the dance movements they have just learned are faithful to the original dance type. This paper proposes that deep learning models can classify videos of amateur dancers according to the characteristic movements of several dance classes. In this study, AlexNet, ResNet and SqueezeNet models were trained on multiple frames of actions from several dance videos for label prediction, and the classification accuracy of the models during each training epoch was compared. We observed that the average classification accuracy of the deep learning models is 94.9669%, which is comparable to other approaches used for dance classification.