Articles

Found 13 Documents

Students’ emotion classification system through an ensemble approach Muhajir, Muhajir; Muchtar, Kahlil; Oktiana, Maulisa; Bintang, Akhyar
SINERGI Vol 28, No 2 (2024)
Publisher : Universitas Mercu Buana

DOI: 10.22441/sinergi.2024.2.020

Abstract

Emotion is a psychological and physiological response to an event or stimulus. Understanding students' emotions helps teachers and educators interact more effectively with students and create a better learning environment. The importance of understanding students' emotions in the learning process has motivated the exploration of facial emotion classification technology. In this research, an ensemble approach consisting of ResNet, MobileNet, and Inception is applied to identify emotional expressions on the faces of school students, using a dataset covering the emotions happiness, sadness, anger, surprise, and boredom, acquired from students of Darul Imarah State Junior High School, Aceh Besar District, Indonesia. Our dataset, called USK-FEMO, is publicly available. The performance evaluation shows that each model and approach has significant capability in classifying facial emotions. The ResNet model gives the best individual performance, with accuracy, precision, recall, and F1-score all at 86%. MobileNet and Inception also perform well, indicating potential in handling complex expression variations. Most notably, the ensemble approach achieves the highest accuracy, precision, recall, and F1-score, at 90%. By combining the predictions of the three models, the ensemble approach can address emotion variations consistently and accurately. Implementing emotion classification models, individually or as an ensemble, can improve teacher-student interactions and support learning strategies that are responsive to students' emotional needs.
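
The abstract does not state how the three backbones' predictions are fused; a minimal sketch of one common rule, soft voting over class probabilities, is shown below in Python with placeholder outputs standing in for real model predictions (the class order is illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_softmax(n_samples, n_classes):
    """Stand-in for model.predict(): random rows that sum to 1."""
    z = rng.random((n_samples, n_classes))
    return z / z.sum(axis=1, keepdims=True)

# Hypothetical class-probability outputs of the three backbones for a
# batch of 4 face images over the 5 USK-FEMO emotion classes.
p_resnet = fake_softmax(4, 5)
p_mobilenet = fake_softmax(4, 5)
p_inception = fake_softmax(4, 5)

# Soft voting: average the probabilities, then take the argmax per image.
p_ensemble = (p_resnet + p_mobilenet + p_inception) / 3.0
labels = np.array(["happiness", "sadness", "anger", "surprise", "boredom"])
print(labels[p_ensemble.argmax(axis=1)])
```
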
Performance analysis of DMF teeth detection using deep learning: A comparative study with clinical examination as quasi experimental study Novita, Rizki; Putri, Rizkika; Fitria, Maya; Oktiana, Maulisa; Elma, Yasmina; Rahayu, Handika; Janura, Subhan; Habibie, Hafidh
Padjadjaran Journal of Dentistry Vol 36, No 1 (2024): March 2024
Publisher : Faculty of Dentistry Universitas Padjadjaran

DOI: 10.24198/pjd.vol36no1.52357

Abstract

Introduction: Decayed, missing, and filled teeth (DMF-T) is an index used to assess the oral health status of an individual or a population. This examination is typically performed manually by dentists or dental therapists. In previous research, the authors developed a deep learning (DL) model, a branch of artificial intelligence, that can detect DMF-T. The aim of this research was to compare the performance of deep learning with clinical examination in DMF-T assessment. Methods: Experienced dentists conducted clinical examinations on 50 subjects who met the inclusion criteria. Oral clinical photographs of the same patients were taken from various aspects, 250 images in total, and analyzed using the deep learning model. The results of the clinical examination and of the deep learning model were then compared statistically using an unpaired t-test to determine whether there were differences between the groups. Results: The unpaired t-test indicated no significant difference between the DMF-T examination results obtained by dentists and by the DL model (P = 0.161, P > 0.05). Since t Stat < t Critical (two-tail), H0 was accepted, stating that there was no significant difference between the DMF-T examination results of the two groups. Conclusion: The DL model demonstrates good clinical performance in detecting DMF-T.

Keywords: DMF-T, clinical assessment, deep learning, caries detection
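
As a minimal sketch of the statistical comparison described above, the snippet below runs SciPy's two-tailed unpaired t-test on made-up DMF-T scores; the study's actual data are not reproduced here.

```python
import numpy as np
from scipy import stats

dmft_dentist = np.array([3, 5, 2, 6, 4, 3, 5, 4])  # hypothetical scores
dmft_model = np.array([3, 4, 2, 5, 4, 4, 5, 3])    # hypothetical scores

# Student's two-sample t-test; two-tailed by default.
t_stat, p_value = stats.ttest_ind(dmft_dentist, dmft_model)

# H0 (no difference between the groups) is retained when p > 0.05.
verdict = "retained" if p_value > 0.05 else "rejected"
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, H0 {verdict}")
```
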
Cross-Spectral Cross-Distance Face Recognition via CNN with Image Augmentation Techniques Rahmatika, Nisa Adilla; Arnia, Fitri; Oktiana, Maulisa
Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) Vol 8 No 5 (2024): October 2024
Publisher : Ikatan Ahli Informatika Indonesia (IAII)

DOI: 10.29207/resti.v8i5.5929

Abstract

Facial recognition is a critical biometric identification method in modern security systems, yet it faces significant challenges under varying lighting conditions, particularly with near-infrared (NIR) images, which exhibit reduced illumination compared to visible-light (VIS) images. This study evaluates the performance of Convolutional Neural Networks (CNNs) on the Cross-Spectral Cross-Distance (CSCD) challenge, which involves face identification across different spectra (NIR and VIS) and varying distances. Three CNN models, VGG16, ResNet50, and EfficientNetB0, were assessed on a dataset of 800 facial images from 100 individuals, captured at four distances (1 m, 60 m, 100 m, and 150 m) and in two wavelength bands (NIR and VIS). The Multi-task Cascaded Convolutional Networks (MTCNN) algorithm was employed for face detection, followed by image preprocessing steps including resizing to 224×224 pixels, normalization, and homomorphic filtering. Two distinct data augmentation strategies were applied, one using 10 augmentation techniques and the other using 4; models were trained with a batch size of 32 over 100 epochs. Among the tested models, VGG16 demonstrated superior performance, achieving 100% accuracy in both training and validation, with a training loss of 0.55 and a validation loss of 0.612. These findings underscore the robustness of VGG16 in adapting to the CSCD setting and managing variations in both lighting and distance.
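
A minimal sketch of the detection and preprocessing pipeline described above, assuming the `mtcnn` and `opencv-python` packages; the filename is hypothetical, and the paper's homomorphic filtering and augmentation steps are omitted.

```python
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def preprocess_face(path):
    """Detect, crop, resize, and normalize one face image."""
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    faces = detector.detect_faces(img)
    if not faces:
        return None
    x, y, w, h = faces[0]["box"]            # first detected face
    crop = img[max(y, 0):y + h, max(x, 0):x + w]
    crop = cv2.resize(crop, (224, 224))     # input size used in the study
    return crop.astype("float32") / 255.0   # simple [0, 1] normalization

face = preprocess_face("subject_001_nir_100m.jpg")  # hypothetical filename
```
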
Water Level Detection for Flood Disaster Management Based on Real-time Color Object Detection Saddami, Khairun; Nurdin, Yudha; Noviantika, Fina; Oktiana, Maulisa; Muchallil, Sayed
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control Vol. 8, No. 1, February 2023
Publisher : Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v8i1.1635

Abstract

Currently, river water-level monitoring relies on instruments installed on the riverbank that must be checked continuously and manually. This study proposes a real-time water level detection system based on a computer vision algorithm. In the proposed system, we use a color object tracking technique with bar indicators as level references. We set three bar indicators to determine the status of the water level, namely NORMAL, ALERT, and DANGER. A camera was installed facing the bar indicators to capture them and monitor the water level. In the simulation, the monitoring system was operated under lighting conditions of 5-100 lux. For experimental purposes, we varied the camera distance from 40 to 80 centimeters and the camera angle from 30 to 60 degrees. The experimental results showed that the system achieves an accuracy of 94% when the camera distance is in the range of 50-80 centimeters and the camera angle is 60°. Based on these results, it can be concluded that the proposed system determines the water level well under varying lighting conditions.
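
A minimal sketch of color-based bar-indicator monitoring with OpenCV; the HSV ranges and the submersion rule below are illustrative assumptions, not the paper's calibrated values.

```python
import cv2
import numpy as np

# Illustrative HSV ranges for three colored markers; a real deployment
# would calibrate these for the 5-100 lux conditions in the study.
RANGES = {
    "NORMAL": ((40, 70, 70), (80, 255, 255)),    # green-ish, lowest bar
    "ALERT": ((20, 100, 100), (35, 255, 255)),   # yellow-ish, middle bar
    "DANGER": ((0, 120, 70), (10, 255, 255)),    # red-ish, highest bar
}

def water_level_status(frame_bgr, min_pixels=500):
    """Illustrative rule: a bar whose marker is no longer visible is
    assumed submerged, and the highest submerged bar gives the status."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    status = "NORMAL"
    for zone in ("NORMAL", "ALERT", "DANGER"):   # bottom to top
        lo, hi = RANGES[zone]
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        if cv2.countNonZero(mask) < min_pixels:  # marker submerged
            status = zone
    return status
```
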
Authentication of an Indonesian ID Card with Simultaneous NFC and Face Recognition Chairullah, Chairullah; Away, Yuwaldi; Oktiana, Maulisa
Jurnal Rekayasa Elektrika Vol 21, No 2 (2025)
Publisher : Universitas Syiah Kuala

DOI: 10.17529/jre.v21i2.41142

Abstract

Identity (ID) card forgery remains a significant issue in Indonesia, often leading to crimes such as identity theft and fraud. To address this challenge, this study develops an identity authentication system that integrates near-field communication (NFC) and facial recognition based on the K-nearest neighbors (KNN) algorithm. The primary objective of the system is to enhance the security of ID card (KTP) data and to ensure efficient and accurate access to services requiring identity verification. The system stores facial data and ID card information securely in Firebase, which serves both as a user authentication platform and as a secure cloud-based storage solution. The application, developed using Flutter, incorporates facial recognition for biometric verification, while NFC is employed as an additional authentication layer to provide two-factor verification and reinforce identity security. Experimental results demonstrate that the KNN-based facial recognition achieved an accuracy of 100% with a false acceptance rate (FAR) of 0%, indicating highly reliable performance. These findings confirm that the integration of facial recognition and NFC technologies offers a robust and effective solution against ID card forgery, thereby improving the overall reliability and security of population data authentication in Indonesia.
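
A minimal sketch of the KNN verification step, assuming faces have already been reduced to fixed-length embeddings (the abstract does not say which features are fed to KNN); the names and vectors below are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Hypothetical 128-D embeddings for 3 enrolled users, 5 samples each.
X_train = rng.normal(size=(15, 128))
y_train = np.repeat(["alice", "bob", "carol"], 5)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

probe = rng.normal(size=(1, 128))   # embedding from a live camera frame
pred = knn.predict(probe)[0]
conf = knn.predict_proba(probe).max()
# The NFC factor would then confirm `pred` against the chip-read identity.
print(pred, conf)
```
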
Impact of Image Quality Enhancement Using Homomorphic Filtering on the Performance of Deep Learning-Based Facial Emotion Recognition Systems Bahri, Al; Oktiana, Maulisa; Fitria, Maya; Zulfikar
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 11 No. 2 (2025): June
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v11i2.30409

Abstract

Facial emotion recognition technology is crucial for understanding human expressions in images or videos by analyzing distinct facial features. A common challenge is accurately detecting a person's facial expression when facial lines are unclear, often due to poor lighting conditions. To address this challenge, it is essential to improve image quality. This study investigates how enhancing image quality through homomorphic filtering and sharpening techniques can boost the accuracy and performance of deep learning-based facial emotion recognition systems. Improved image quality allows the classification model to focus better on relevant expression features. This research therefore contributes to more intuitive and responsive communication by enabling systems to interpret and respond to human emotions effectively. Testing used three different architectures: MobileNet, InceptionV3, and DenseNet121. Performance was evaluated using accuracy, precision, recall, and F1-score. Experimental results indicate that image enhancement positively impacts the accuracy of the facial emotion recognition system: the average accuracy increased by 1-2% for MobileNet, by 5-7% for InceptionV3, and by 1-3% for DenseNet121.
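
A minimal sketch of the classical frequency-domain homomorphic filter referenced above; the gain and cutoff values are illustrative rather than the paper's settings, and the sharpening step is omitted.

```python
import cv2
import numpy as np

def homomorphic_filter(gray, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
    """Attenuate illumination (low frequencies), boost reflectance (high)."""
    img = np.log1p(gray.astype(np.float64))      # multiplicative -> additive
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = gray.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2       # squared distance from center
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    out = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    out = np.expm1(out)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
enhanced = homomorphic_filter(gray)
```
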
Autism Face Detection System using Single Shot Detector and ResNet50 Melinda, Melinda; Alfariz, Muhammad Fauzan; Yunidar, Yunidar; Ghimri, Agung Hilm; Oktiana, Maulisa; Miftahujjannah, Rizka; Basir, Nurlida; Acula, Donata D.
JURNAL INFOTEL Vol 17 No 2 (2025): May
Publisher : LPPM INSTITUT TEKNOLOGI TELKOM PURWOKERTO

DOI: 10.20895/infotel.v17i2.1331

Abstract

The facial features of children can provide important visual cues for the early detection of autism spectrum disorder (ASD). This research focuses on developing an image-based detection system to identify children with ASD. The main problem addressed is the lack of practical methods to assist healthcare professionals in the early identification of ASD through facial visual characteristics. This study designs a prototype facial image acquisition and detection system for children with ASD using a Raspberry Pi and a deep learning-based single shot detector (SSD) algorithm. The face detection model uses a modified ResNet50 architecture, whose output is then used to classify children as autistic or non-autistic, achieving 95% recognition accuracy on a dataset of facial images of children with and without ASD. The system is able to recognize the visual facial characteristics of children with ASD and consistently distinguish them from those of other children. Real-time testing shows a detection accuracy ranging from 86% to 90%, with an average of 90%, despite fluctuations caused by variations in movement and viewing angle. These results show that the developed system offers high accuracy and has the potential to serve as a reliable diagnostic tool for the early detection of ASD, ultimately facilitating timely intervention by healthcare professionals to support the optimal development of children with ASD.
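
As an illustration of SSD-style face detection on modest hardware, the sketch below uses OpenCV's DNN module with the publicly distributed ResNet-based SSD face model as a stand-in for the paper's modified ResNet50 detector; the local file paths are assumptions.

```python
import cv2
import numpy as np

# OpenCV's public ResNet-SSD face-detector files (local paths assumed).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

def detect_faces(frame_bgr, conf_threshold=0.5):
    h, w = frame_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame_bgr, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()           # shape (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        conf = float(detections[0, 0, i, 2])
        if conf >= conf_threshold:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            boxes.append(box.astype(int))   # x1, y1, x2, y2
    return boxes
```
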
The Role of U-Net Segmentation for Enhancing Deep Learning-based Dental Caries Classification Yassar, Muhammad Keysha Al; Fitria, Maya; Oktiana, Maulisa; Yufnanda, Muhammad Aditya; Saddami, Khairun; Muchtar, Kahlil; Isma, Teuku Reza Auliandra
Indonesian Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol. 7 No. 2 (2025): May
Publisher : Jurusan Teknik Elektromedik, Politeknik Kesehatan Kemenkes Surabaya, Indonesia

DOI: 10.35882/ijeeemi.v7i2.75

Abstract

Dental caries, one of the most prevalent oral diseases, can lead to severe complications if left untreated. Early detection is crucial for effective intervention, reducing treatment costs, and preventing further deterioration. Recent advancements in deep learning have enabled automated caries detection based on clinical images; however, most existing approaches rely on raw or minimally processed images, which may include irrelevant structures and noise, such as the tongue, lips, and gums, potentially affecting diagnostic accuracy. This research introduces a U-Net-based tooth segmentation model, which is applied to enhance the performance of dental caries classification using ResNet-50, InceptionV3, and ResNeXt-50 architectures. The methodology involves training the teeth segmentation model using transfer learning from backbone architectures ResNet-50, VGG19, and InceptionV3, and evaluating its performance using IoU and Dice Score. Subsequently, the classification model is trained separately with and without segmentation using the same hyperparameters for each model with transfer learning, and their performance is compared using a confusion matrix and confidence interval. Additionally, Grad-CAM visualization was performed to analyze the model's attention and decision-making process. Experimental results show a consistent performance improvement across all models with the application of segmentation. ResNeXt-50 achieved the highest accuracy on segmented data, reaching 79.17%, outperforming ResNet-50 and InceptionV3. Grad-CAM visualization further confirms that segmentation plays a crucial role in directing the model’s focus to relevant tooth areas, improving classification accuracy and reliability by reducing background noise. These findings highlight the significance of incorporating tooth segmentation into deep learning models for caries detection, offering a more precise and reliable diagnostic tool. However, the confidence interval analysis indicates that despite consistent improvements across all metrics, the observed differences may not be statistically significant.
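
A minimal sketch of the IoU and Dice metrics used to evaluate the segmentation model, computed here on toy binary masks.

```python
import numpy as np

def iou_and_dice(pred, target, eps=1e-7):
    """Overlap metrics for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice

pred = np.zeros((64, 64), dtype=np.uint8)
target = np.zeros((64, 64), dtype=np.uint8)
pred[10:40, 10:40] = 1      # predicted tooth region (toy example)
target[15:45, 15:45] = 1    # ground-truth tooth region (toy example)
print(iou_and_dice(pred, target))   # IoU ~ 0.53, Dice ~ 0.69
```
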
Implementation of Convolutional Recurrent Neural Network for Vehicle Number Plate Identification in Raspberry Pi Based Parking System Muzammil, Rivaul; Oktiana, Maulisa; Roslidar, Roslidar
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control Vol. 10, No. 4, November 2025
Publisher : Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v10i4.2320

Abstract

The rapid growth of vehicles in Indonesia has created significant challenges in managing parking facilities. To address this issue, this study proposes an intelligent parking system based on automatic license plate character recognition. The system employs YOLOv8 (You Only Look Once) for license plate region detection and CRNN (Convolutional Recurrent Neural Network) for alphanumeric character recognition. Its architecture integrates a Raspberry Pi, camera module, and servo motor to enable automated license plate detection and recognition during vehicle entry and exit. YOLOv8 generates bounding boxes to isolate license plate regions, which are then processed as input for CRNN. The CRNN extracts visual features through convolutional layers and captures sequential relationships among characters using recurrent layers. The entire pipeline is deployed on Raspberry Pi with TensorFlow Lite to ensure efficient computation in resource-constrained environments. Experimental results demonstrate that YOLOv8 achieved a detection accuracy of 94.69%, with a precision of 98.32%, recall of 96.25%, and F1-score of 97.27%, while CRNN reached a character recognition accuracy of 93.8% across 30 license plates. Although some recognition errors occurred, such as misclassifying 'G' as 'C', 'W' as 'H', and 'Q' as 'O', the proposed system proved effective and feasible for embedded smart parking applications.
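
A minimal sketch of the detect-then-read pipeline, assuming the `ultralytics` package; the weight filename is hypothetical, and the CRNN recognizer is left as a stub since the trained model is not described beyond the abstract.

```python
import cv2
from ultralytics import YOLO

detector = YOLO("plate_yolov8.pt")   # hypothetical trained weight file

def recognize_characters(plate_crop):
    """Stub for the CRNN step: convolutional feature extraction followed
    by recurrent decoding of the character sequence."""
    raise NotImplementedError

def read_plates(frame_bgr):
    result = detector(frame_bgr)[0]                   # YOLOv8 inference
    texts = []
    for x1, y1, x2, y2 in result.boxes.xyxy.int().tolist():
        crop = frame_bgr[y1:y2, x1:x2]                # isolated plate region
        texts.append(recognize_characters(crop))      # CRNN reads the crop
    return texts
```
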
Improved Histogram of Oriented Gradient (HOG) Feature Extraction for Facial Expressions Classification Ramiady, Luthfiar; Arnia, Fitri; Oktiana, Maulisa; Novandri, Andri
Jurnal Rekayasa Elektrika Vol 20, No 3 (2024)
Publisher : Universitas Syiah Kuala

DOI: 10.17529/jre.v20i3.34044

Abstract

A facial expression classification system is one implementation of machine learning (ML): it takes a facial expression dataset, undergoes training, and then uses the trained model to recognize facial expressions in new face images. The recognized expressions include anger, contempt, disgust, fear, happiness, sadness, and surprise. The method employed for facial feature extraction is the histogram of oriented gradients (HOG). This study proposes an enhancement of HOG feature extraction that divides the features into multiple sub-features based on gradient orientation intervals, referred to as HOG channels (HOG-C). Two classifiers are compared: support vector machines (SVM) with HOG features and SVM with HOG-C features. The testing results demonstrate that SVM with HOG achieves an accuracy of 99.9% with an average training time of 18.03 minutes, while SVM with HOG-C attains 100% accuracy with an average training time of 18.09 minutes. The outcomes reveal that SVM with HOG-C successfully enhances accuracy for facial expression classification.
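
A minimal sketch of the HOG + SVM baseline, with the orientation axis of the HOG tensor split into contiguous intervals to illustrate the HOG-C idea; the paper's exact interval grouping is not given in the abstract, and the data below are synthetic.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_channels(gray, n_channels=3, orientations=9):
    """Split HOG features into sub-features by gradient orientation bins."""
    f = hog(gray, orientations=orientations, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), feature_vector=False)
    # The last axis of `f` indexes orientation bins; group them into
    # `n_channels` contiguous intervals ("HOG channels").
    return [s.ravel() for s in np.array_split(f, n_channels, axis=-1)]

rng = np.random.default_rng(0)
faces = rng.random((20, 64, 64))       # synthetic stand-ins for face crops
y = rng.integers(0, 7, size=20)        # 7 expression classes
X = np.stack([np.concatenate(hog_channels(img)) for img in faces])
clf = SVC(kernel="linear").fit(X, y)   # SVM on the channelized features
```
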