Articles

Found 7 Documents

Students’ emotion classification system through an ensemble approach
Muhajir, Muhajir; Muchtar, Kahlil; Oktiana, Maulisa; Bintang, Akhyar
SINERGI Vol 28, No 2 (2024)
Publisher : Universitas Mercu Buana

DOI: 10.22441/sinergi.2024.2.020

Abstract

Emotion is a psychological and physiological response to an event or stimulus. Understanding students' emotions helps teachers and educators interact more effectively with students and create a better learning environment. The importance of understanding students' emotions in the learning process has motivated the exploration of facial emotion classification technology. In this research, an ensemble approach consisting of ResNet, MobileNet, and Inception is applied to identify emotional expressions on the faces of school students, using a dataset covering happiness, sadness, anger, surprise, and boredom acquired from students of Darul Imarah State Junior High School, Aceh Besar District, Indonesia. Our dataset, called USK-FEMO, is publicly available. The performance evaluation shows that each model and the ensemble approach have significant capability in classifying facial emotions. Among the individual models, ResNet performs best, with accuracy, precision, recall, and F1-score of 86%. MobileNet and Inception also perform well, indicating potential in handling complex expression variations. Most notably, the ensemble approach achieves the highest accuracy, precision, recall, and F1-score, at 90%. By combining predictions from the three models, the ensemble addresses emotion variations consistently and accurately. Implementing emotion classification models, individually and in an ensemble, can improve teacher-student interactions and optimize learning strategies that are responsive to students' emotional needs.
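As a rough illustration of the ensemble idea described above, the following Python sketch averages the class probabilities of three backbone models (soft voting) and picks the most likely of the five emotions. It is a minimal sketch under assumed inputs, not the authors' implementation; the probability vectors and class order are hypothetical.

```python
# Minimal sketch (not the authors' code): soft-voting ensemble over three
# per-model probability outputs, assuming each model already returns a
# softmax distribution over the five emotion classes named in the abstract.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "boredom"]  # assumed class order

def ensemble_predict(prob_resnet, prob_mobilenet, prob_inception):
    """Average class probabilities from the three backbones and pick the argmax."""
    probs = np.stack([prob_resnet, prob_mobilenet, prob_inception], axis=0)
    mean_probs = probs.mean(axis=0)  # soft voting
    return EMOTIONS[int(mean_probs.argmax())], mean_probs

if __name__ == "__main__":
    # Hypothetical per-model outputs for one face crop.
    p1 = np.array([0.70, 0.05, 0.05, 0.10, 0.10])  # ResNet
    p2 = np.array([0.55, 0.10, 0.05, 0.20, 0.10])  # MobileNet
    p3 = np.array([0.60, 0.05, 0.10, 0.15, 0.10])  # Inception
    label, probs = ensemble_predict(p1, p2, p3)
    print(label, probs)
```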
Performance evaluation of hyper-parameter tuning automation in YOLOV8 and YOLO-NAS for corn leaf disease detection
Saputra, Huzair; Muchtar, Kahlil; Chitraningrum, Nidya; Andria, Agus; Febriana, Alifya
SINERGI Vol 29, No 1 (2025)
Publisher : Universitas Mercu Buana

DOI: 10.22441/sinergi.2025.1.018

Abstract

Corn cultivation is crucial in Southeast Asia, contributing significantly to regional food security and economies. However, leaf diseases pose a significant threat, causing substantial losses in production and quality. This research applies artificial intelligence (AI) to address the issue by automating the hyper-parameter tuning process in YOLO (You Only Look Once) object detection models for early corn leaf disease detection. High-resolution images of corn leaves were captured and preprocessed for consistency. The preprocessing stage involved creating new dataset folders for images and labels, resizing images while preserving their aspect ratio, and rotating them when necessary. The images, containing 11,596 labeled instances, were analyzed using YOLOv8 and YOLO-NAS models. Each image's detected disease regions were converted into YOLO-format text files with x, y, width, and height coordinates describing the presence and severity of infections. Model performance was evaluated using precision, recall, mAP50, and mAP50-95. YOLOv8m achieved a mAP50 of 98.5% and a mAP50-95 of 67.8%, while YOLO-NAS-L reached a mAP50 of 70.3% and a mAP50-95 of 38.9%. The automated system facilitates early disease identification and enables prompt preventive measures, thereby enhancing crop yields and mitigating losses. The findings highlight the potential of advanced AI-driven detection systems to transform crop management and support global food security.
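The label format mentioned in the abstract (one text file per image with class index and normalised x, y, width, and height) can be illustrated with a small Python sketch. This is an assumed conversion from pixel boxes, not the paper's preprocessing code; the box values and file name are hypothetical.

```python
# Minimal sketch (assumption, not the paper's pipeline): converting a pixel
# bounding box into the normalised YOLO label line described in the abstract
# (class x_center y_center width height, all relative to image size).
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_c = ((x_min + x_max) / 2) / img_w
    y_c = ((y_min + y_max) / 2) / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

if __name__ == "__main__":
    # Hypothetical lesion box on a 640x640 corn-leaf image.
    line = to_yolo_line(0, 120, 200, 260, 330, 640, 640)
    with open("example_label.txt", "w") as f:  # one label file per image
        f.write(line + "\n")
    print(line)
```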
The Role of U-Net Segmentation for Enhancing Deep Learning-based Dental Caries Classification
Yassar, Muhammad Keysha Al; Fitria, Maya; Oktiana, Maulisa; Yufnanda, Muhammad Aditya; Saddami, Khairun; Muchtar, Kahlil; Isma, Teuku Reza Auliandra
Indonesian Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol. 7 No. 2 (2025): May
Publisher : Jurusan Teknik Elektromedik, Politeknik Kesehatan Kemenkes Surabaya, Indonesia

DOI: 10.35882/ijeeemi.v7i2.75

Abstract

Dental caries, one of the most prevalent oral diseases, can lead to severe complications if left untreated. Early detection is crucial for effective intervention, reducing treatment costs, and preventing further deterioration. Recent advancements in deep learning have enabled automated caries detection based on clinical images; however, most existing approaches rely on raw or minimally processed images, which may include irrelevant structures and noise, such as the tongue, lips, and gums, potentially affecting diagnostic accuracy. This research introduces a U-Net-based tooth segmentation model, which is applied to enhance the performance of dental caries classification using ResNet-50, InceptionV3, and ResNeXt-50 architectures. The methodology involves training the tooth segmentation model using transfer learning from the ResNet-50, VGG19, and InceptionV3 backbone architectures and evaluating its performance using IoU and Dice score. Subsequently, the classification model is trained separately with and without segmentation, using the same hyperparameters for each model with transfer learning, and the results are compared using a confusion matrix and confidence intervals. Additionally, Grad-CAM visualization is performed to analyze the model's attention and decision-making process. Experimental results show a consistent performance improvement across all models when segmentation is applied. ResNeXt-50 achieved the highest accuracy on segmented data, reaching 79.17%, outperforming ResNet-50 and InceptionV3. Grad-CAM visualization further confirms that segmentation plays a crucial role in directing the model's focus to relevant tooth areas, improving classification accuracy and reliability by reducing background noise. These findings highlight the significance of incorporating tooth segmentation into deep learning models for caries detection, offering a more precise and reliable diagnostic tool. However, the confidence interval analysis indicates that, despite consistent improvements across all metrics, the observed differences may not be statistically significant.
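To make the role of segmentation concrete, the sketch below shows one plausible way a binary tooth mask predicted by a U-Net could be applied to an image before classification, so that background structures such as lips and gums are zeroed out. It is an illustrative assumption, not the authors' pipeline; the image and mask are toy placeholders.

```python
# Minimal sketch (assumed pre-processing, not the paper's code): applying a
# binary tooth mask from a U-Net to an RGB image so that only tooth pixels
# reach the downstream caries classifier.
import numpy as np

def apply_tooth_mask(image_rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out background pixels; image is HxWx3, mask is HxW with values in {0, 1}."""
    return image_rgb * mask[..., None].astype(image_rgb.dtype)

if __name__ == "__main__":
    # Hypothetical 4x4 image and mask standing in for a U-Net prediction.
    img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
    mask = np.zeros((4, 4), dtype=np.uint8)
    mask[1:3, 1:3] = 1  # "tooth" region
    print(apply_tooth_mask(img, mask))
```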
Rancang Bangun Purwarupa Pemilah Sampah Pintar Berbasis Deep Learning (Design of a Deep Learning-based Smart Waste Sorter Prototype)
Muchtar, Kahlil; Anshari, Nyak Twoman; Chairuman, Chairuman; Alhabibie, Khalid; Munadi, Khairul
Jurnal Teknologi Informasi dan Ilmu Komputer Vol 9 No 3: Juni 2022
Publisher : Fakultas Ilmu Komputer, Universitas Brawijaya

DOI: 10.25126/jtiik.2022934976

Abstract

Waste processing in Indonesia remains a major unresolved problem. Research by Sustainable Waste Indonesia (SWI) reveals that 24% of waste in Indonesia is still not properly managed. Of the roughly 65 million tons of waste produced in Indonesia every day, the largest share is organic waste at 60%, followed by plastic waste at 14%, paper at 9%, metal at 4.3%, and glass, wood, and other materials at 12.7%. Indonesia's plastic waste alone reaches 1.3 million tons. Given the amount of waste produced, recycling clearly plays a major role in protecting the environment, above all by reducing waste and pollution. The first step in waste processing is sorting: by sorting waste correctly, people can easily identify which materials can be recycled and which cannot. Based on these problems, the researchers propose a system that can distinguish and recognize organic and inorganic waste. For this purpose, Deep Learning, a branch of Machine Learning capable of learning from and classifying collections of images, is used; the specific method applied is a Convolutional Neural Network (CNN), a supervised learning architecture inspired by the human nervous system. In addition, the prototype uses a Raspberry Pi as the main controller, a Raspberry Pi camera module to capture images, and an Intel Movidius Neural Compute Stick (NCS) to accelerate computation and simplify the detection process, since these devices are portable, fast, and accurate.
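For intuition, the following Python sketch outlines how such an inference step could look with OpenCV on a Raspberry Pi, offloading the CNN to the Neural Compute Stick through the MYRIAD target. It is a hedged sketch only: the model files, input size, and class order are hypothetical placeholders, it assumes the camera and NCS hardware are attached, and it is not the prototype's actual code.

```python
# Minimal sketch (assumptions throughout, not the prototype's firmware):
# grab a frame from the Raspberry Pi camera with OpenCV, run a two-class CNN
# (organic vs. inorganic) through cv2.dnn, and offload inference to an Intel
# NCS via the MYRIAD target. File names below are hypothetical placeholders.
import cv2

LABELS = ["organic", "inorganic"]  # assumed class order

net = cv2.dnn.readNet("waste_classifier.xml", "waste_classifier.bin")  # hypothetical OpenVINO IR files
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # run on the Neural Compute Stick

cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed as a video device
ret, frame = cap.read()
if ret:
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(224, 224))
    net.setInput(blob)
    scores = net.forward().flatten()
    print("Predicted class:", LABELS[int(scores.argmax())])
cap.release()
```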
Augmentation of Additional Arabic Dataset for Jawi Writing and Classification Using Deep Learning
Razali, Safrizal; Muchtar, Kahlil; Rinaldi, Muhammad Hafiz; Nurdin, Yudha; Rahman, Aulia
Jurnal Rekayasa Elektrika Vol 20, No 1 (2024)
Publisher : Universitas Syiah Kuala

DOI: 10.17529/jre.v20i1.33722

Abstract

This research aims to create an additional dataset containing Arabic characters for writing Jawi script and to train classification models using deep learning architectures such as InceptionV3 and ResNet34. The initial stage of the study involves digital image processing to obtain the additional Arabic character dataset from several sources, including HMBD, AHAWP, and HUCD, encompassing various connected and disconnected forms of Jawi script. Image processing includes preprocessing to enhance image quality, segmentation to separate Arabic characters from the background, and augmentation to increase dataset variability. Once the dataset is formed, we train models on the corresponding training data for the InceptionV3 and ResNet34 architectures. The classification evaluation results indicate that the ResNet34 model achieved the best performance, with an accuracy of 96%. This model recognizes Jawi script accurately and consistently, even for classes with similar shapes. The main contribution of this research is the availability of the additional Arabic character dataset, which can be used for Jawi script recognition and for assessing the performance of various deep learning models. The study also emphasizes the importance of selecting the appropriate architecture for specific character recognition tasks. The findings affirm that the ResNet34 model is highly capable of recognizing the additional Arabic characters used for writing Jawi. These results have the potential to support further development of Jawi character recognition applications and provide valuable insights for researchers working on character recognition based on Arabic script. The augmented dataset can be accessed at https://singkat.usk.ac.id/g/En0skCKGAR
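The augmentation step described above can be illustrated with a small Pillow sketch that produces a few variants (rotation, contrast change, resizing) of a single character image to increase dataset variability. It is illustrative only; the specific transforms and parameters used in the paper are not reproduced here, and the input image is a blank placeholder.

```python
# Minimal sketch (illustrative only, not the paper's augmentation pipeline):
# generate a few simple augmented variants of a handwritten character image.
from PIL import Image, ImageEnhance

def augment(img: Image.Image):
    """Return a list of augmented variants of a single character image."""
    rotated = img.rotate(10, fillcolor=255)             # small rotation, white background
    contrast = ImageEnhance.Contrast(img).enhance(1.5)  # boost contrast
    resized = img.resize((64, 64))                      # normalise size
    return [rotated, contrast, resized]

if __name__ == "__main__":
    # Hypothetical grayscale character image (blank placeholder).
    base = Image.new("L", (64, 64), color=255)
    variants = augment(base)
    print(len(variants), "augmented variants generated")
```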
Comparison Study of Corn Leaf Disease Detection based on Deep Learning YOLO-v5 and YOLO-v8
Chitraningrum, Nidya; Banowati, Lies; Herdiana, Dina; Mulyati, Budi; Sakti, Indra; Fudholi, Ahmad; Saputra, Huzair; Farishi, Salman; Muchtar, Kahlil; Andria, Agus
Journal of Engineering and Technological Sciences Vol. 56 No. 1 (2024)
Publisher : Directorate for Research and Community Services, Institut Teknologi Bandung

DOI: 10.5614/j.eng.technol.sci.2024.56.1.5

Abstract

Corn is one of the primary carbohydrate-rich food commodities in Southeast Asian countries, including Indonesia. Corn production is highly dependent on the health of the corn plant, and infected plants reduce productivity. Corn farmers usually rely on conventional methods to control diseases, but these methods are neither effective nor efficient because they are time-consuming and labor-intensive. Deep learning-based plant disease detection has recently been used for early disease detection in agriculture. In this work, we used the convolutional neural network algorithms YOLO-v5 and YOLO-v8 to detect infected corn leaves in the public ‘Corn Leaf Infection Dataset’ from the Kaggle repository. We compared the mean average precision (mAP) at mAP 50 and mAP 50-95 between YOLO-v5 and YOLO-v8. YOLO-v8 showed better accuracy, with an mAP 50 of 0.965 and an mAP 50-95 of 0.727. YOLO-v8 also produced more detections (12) than YOLO-v5 (11). Both YOLO algorithms required about 2.49 to 3.75 hours to detect the infected corn leaves. The trained models could be an effective solution for early disease detection in future corn plantations.
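Since the comparison rests on mAP 50 and mAP 50-95, the sketch below shows the intersection-over-union (IoU) computation that underlies those thresholds: a detection counts toward mAP 50 when its IoU with a ground-truth box is at least 0.5, while mAP 50-95 averages over thresholds from 0.5 to 0.95. The example boxes are hypothetical.

```python
# Minimal sketch (for intuition only): the IoU computation underlying the
# mAP 50 / mAP 50-95 metrics compared in the abstract.
def iou(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max) in pixels."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

if __name__ == "__main__":
    # Hypothetical predicted vs. ground-truth lesion boxes (IoU ~ 0.567).
    print(round(iou((100, 100, 200, 200), (120, 110, 210, 220)), 3))
```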
Development of a self-driving RC car with lane-keeping system using a pure pursuit controller
Rahman, Aulia; Alhamdi, Muhammad Jurej; Muchtar, Kahlil; Nurdin, Yudha; Roslidar, Roslidar; Razali, Safrizal; Effendi, Riki
Jurnal Polimesin Vol 23, No 4 (2025): August
Publisher : Politeknik Negeri Lhokseumawe

DOI: 10.30811/jpl.v23i4.6664

Abstract

The development of autonomous vehicles is crucial for enhancing driving safety, comfort, and efficiency. This research presents the design of a self-driving Remote Controlled (RC) car at 1:10 scale, equipped with a lane-keeping system and a pure pursuit controller. The primary objective is to evaluate the effectiveness of integrating computer vision techniques with trajectory tracking control to maintain lane stability. Lane detection was achieved using a sliding-window algorithm, while polynomial fitting estimated the lane centerline. A stereo camera provided spatial perception, capturing images that were processed to determine the steering angle needed to minimize the deviation between the lookahead point and the viewpoint of the vehicle. Experimental results show that the system maintained lane position with minimal deviation, achieving an average steering angle of 90.44° on straight paths, 65.4° on right turns, and 113.1° on left turns. These results demonstrate the feasibility of combining vision-based lane detection with a pure pursuit controller to improve path-tracking accuracy and stability in autonomous vehicles.
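For reference, the standard pure-pursuit geometry used by such a lane-keeping controller can be sketched in a few lines: the steering angle follows from the lookahead point expressed in the vehicle frame and the wheelbase. The values below are hypothetical 1:10-scale numbers, and this is not the authors' controller code.

```python
# Minimal sketch (standard pure-pursuit geometry, not the authors' controller):
# steering angle from a lookahead point in the vehicle frame (x forward, y left).
import math

def pure_pursuit_steering(lookahead_x, lookahead_y, wheelbase):
    """Return the front-wheel steering angle in degrees for a bicycle model."""
    ld_sq = lookahead_x ** 2 + lookahead_y ** 2   # squared lookahead distance
    curvature = 2.0 * lookahead_y / ld_sq         # kappa = 2 * y / L_d^2
    return math.degrees(math.atan(wheelbase * curvature))

if __name__ == "__main__":
    # Hypothetical values: 0.26 m wheelbase, lookahead point 0.8 m ahead, 0.1 m to the left.
    print(round(pure_pursuit_steering(0.8, 0.1, 0.26), 2))
```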