Journal : Sebatik

ANALISIS PERBANDINGAN ALGORITMA CLUSTERING DALAM MELAKUKAN SEGMENTASI WARNA PADA CITRA JAJAN TRADISIONAL Saidatul Arifah; Ericks Rachmat Swedia; M. Ridwan Dwi Septian
Sebatik Vol. 27 No. 1 (2023): Juni 2023
Publisher : STMIK Widya Cipta Dharma

DOI: 10.46984/sebatik.v27i1.2273

Abstract

Traditional snacks are highly varied and each has a characteristic color: kue putu is predominantly green, kue cucur predominantly brown, kue lumpur predominantly yellow, combro predominantly golden yellow, gemblong predominantly brown, and many others. Consumers today tend to prefer instant food products, even though traditional snacks taste no worse than the instant foods on the market and, from a health standpoint, are clearly healthier because they contain no preservatives. Machine learning provides a technique for segmenting digital images, known as image segmentation. Segmentation is the process of partitioning a digital image into several regions in order to simplify or change the image representation into something more meaningful and easier to analyze. A clustering approach was chosen because it performs color clustering well. The clustering algorithms used are K-Means and Fuzzy C-Means, and the Elbow method is used to determine the number of clusters based on the Sum of Squared Errors (SSE). The image segmentation application consists of a web app and a Python app: the Python application acts as the server, built on the Flask web framework. Based on ten tests, the K-Means algorithm achieved an accuracy of 76.47% and the Fuzzy C-Means algorithm an accuracy of 68.63%. From these results it can be concluded that K-Means is better, and more time-efficient, than Fuzzy C-Means at segmenting the colors of traditional snack images.
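The clustering step the abstract describes can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: pixels are RGB tuples, and the `sse` function computes the quantity the Elbow method plots against the cluster count k.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two RGB tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-Means over a list of RGB tuples; returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(sorted(set(pixels)), k)  # k distinct starting colors
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        labels = [min(range(k), key=lambda c: dist2(p, centroids[c]))
                  for p in pixels]
        # Move each centroid to the mean of its assigned pixels.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(ch) / len(members)
                                     for ch in zip(*members))
    return centroids, labels

def sse(pixels, centroids, labels):
    """Sum of Squared Errors: the quantity the Elbow method plots against k."""
    return sum(dist2(p, centroids[l]) for p, l in zip(pixels, labels))
```

In the Elbow method one would compute `sse` for increasing values of k and pick the k at which the curve flattens.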
Comparison of YOLOv7 and YOLOv8 Architectures for Detecting Shirt Collars Danyalson, Calvin; Cahyanti, Margi; Swedia, Ericks Rachmat; Sarjono, Mochammad Wisuda
Sebatik Vol. 28 No. 2 (2024): December 2024
Publisher : STMIK Widya Cipta Dharma

DOI: 10.46984/sebatik.v28i2.2492

Abstract

The shirt collar is one of the primary aspects monitored during online examinations in the postgraduate program at Gunadarma University. Examinees are required to wear formal, collared attire. Based on these regulations, a study was conducted to develop a collar detection method to facilitate the online exam monitoring process. This research involves a comparative analysis of two detection architectures: You Only Look Once (YOLO) version 7 (YOLOv7) and version 8 (YOLOv8), to determine the most effective architecture for detecting shirt collars using the dataset provided in the study. Detection models developed from both architectures were implemented in a web-based application and tested to evaluate their accuracy and efficiency. The testing results showed that YOLOv7 achieved an average accuracy of 95%, outperforming YOLOv8, which had an average accuracy of 75%. However, despite YOLOv8's lower accuracy, it excelled in detection speed, with an average processing time of 2.27 seconds, significantly faster than YOLOv7's average processing time of 22.42 seconds. Considering both accuracy and speed, YOLOv7 demonstrated the best overall performance in this study. Nonetheless, it is possible that YOLOv8 could surpass YOLOv7 in the future if significant improvements are made to its detection accuracy.
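The accuracy-versus-latency comparison the abstract reports can be expressed as a small evaluation harness of the following shape. This is a sketch, not the study's code: `predict` is a placeholder for a loaded YOLOv7 or YOLOv8 model, and the label scheme is assumed.

```python
import time

def evaluate(predict, samples):
    """Score a detector on (image, expected_label) pairs.

    Returns (accuracy, mean seconds per image), the two numbers the
    study compares between the YOLOv7 and YOLOv8 models.
    """
    correct, elapsed = 0, 0.0
    for image, expected in samples:
        start = time.perf_counter()
        prediction = predict(image)  # stand-in for running the detector
        elapsed += time.perf_counter() - start
        correct += (prediction == expected)
    n = len(samples)
    return correct / n, elapsed / n
```

Running the same harness once per architecture over the same test set yields directly comparable accuracy and average-processing-time figures.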
A Real-Time Helmet Detection System Based on YOLOv8 to Support Traffic Law Enforcement Puspita, Tiara; Swedia, Ericks Rachmat; Cahyanti, Margi; Septian, M Ridwan Dwi
Sebatik Vol. 29 No. 1 (2025): June 2025
Publisher : STMIK Widya Cipta Dharma

DOI: 10.46984/sebatik.v29i1.2585

Abstract

Helmet use is a critical safety measure for motorcycle riders, yet non-compliance remains high in Indonesia. This study introduces a real-time helmet detection system using the YOLOv8 architecture, deployed on Android devices with the Kotlin programming language. A dataset of 1,197 digital images was collected and annotated using Roboflow Annotate, containing two classes: helmet users (True) and non-users (False). To improve model generalization, data augmentation techniques such as rotation and shear were applied. The model was trained using the pretrained yolov8n.pt weights and evaluated based on mAP and Intersection over Union (IoU). During training, the model achieved a mAP50 of 98% and a mAP50–95 of 59.6%. In testing, the mAP50 reached 98.3% and mAP50–95 reached 61%, with an average IoU of 0.73. The trained model was then converted into TensorFlow Lite format and integrated into an Android application. Real-time testing showed a detection accuracy of 93.3%. These results demonstrate that YOLOv8 is effective for mobile-based real-time helmet detection and has strong potential to support traffic law enforcement systems, especially in urban environments where manual monitoring is inefficient. The system contributes to enhancing public safety through smart technology integration.
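The Intersection over Union metric cited above (average 0.73) is a standard per-detection formula; a minimal sketch, assuming boxes in corner format (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle (degenerates to zero area if the boxes do not intersect).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0
```

An IoU of 0.73 means that, on average, the predicted helmet box and the annotated ground-truth box share 73% of their combined area.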
Deep Learning Architecture of VGG16 and VGG19 for Eyeglasses Face Classification Fatah, Muhammad Faiz Abrar; Swedia, Ericks Rachmat; Cahyanti, Margi; Septian, M Ridwan Dwi
Sebatik Vol. 29 No. 2 (2025): December 2025
Publisher : STMIK Widya Cipta Dharma

DOI: 10.46984/sebatik.v29i2.2688

Abstract

This study conducts a comparative analysis of the performance of two popular Convolutional Neural Network (CNN) architectures from the Visual Geometry Group, VGG16 and VGG19, in classifying facial images with glasses using the “Glasses or No Glasses” dataset. Both models were developed through a transfer learning approach, utilizing pre-trained ImageNet weights to accelerate convergence and improve classification accuracy. The training process employed the Adam optimizer with binary crossentropy as the loss function. The dataset was divided into two subsets, 80% for training and 20% for validation, while testing was performed on 50 unseen images excluded from both subsets. Experimental results show that the VGG16 architecture achieved 87.86% training accuracy and 89.11% validation accuracy, whereas VGG19 achieved 86.86% training accuracy and 87.89% validation accuracy. On the testing dataset, VGG16 correctly classified 47 out of 50 images (94%), while VGG19 correctly classified 48 (96%). Although the performance gap is relatively small, VGG19 also demonstrated better computational efficiency, with a shorter training duration (2 hours and 41 minutes) than VGG16 (2 hours and 59 minutes). Furthermore, the trained models were successfully implemented in an Android application using TensorFlow Lite, enabling real-time eyeglasses detection. These findings indicate that the VGG19 architecture offers superior efficiency and accuracy for deep learning–based eyeglasses face classification.
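The binary crossentropy loss named above, used to train both VGG variants, averages the per-image log loss over a batch. A minimal pure-Python sketch (for illustration only; the study would have used the Keras implementation):

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean binary crossentropy over a batch; eps-clipping avoids log(0)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip predictions away from 0 and 1
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```

The loss is near zero when confident predictions match the labels and grows without bound as a confident prediction contradicts its label, which is what drives the gradient updates under the Adam optimizer.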