Contact Name
Huzain
Contact Email
huzain.azis@umi.ac.id
Phone
+628114484875
Journal Mail Official
ijodas.journal@gmail.com
Editorial Address
Jln. Paccerakkang, Kel. Berua, Kec. Biringkanaya, Kota Makassar, Propinsi Sulawesi Selatan, 90241
Location
INDONESIA
Indonesian Journal of Data and Science
Published by yocto brain
ISSN : -     EISSN : 2715-9930     DOI : -
Core Subject : Science, Education
IJODAS provides an online medium for publishing scientific articles reporting research in the fields of Data Science, Data Mining, Data Communication, Data Security, and Data Representation.
Articles: 135 Documents
Comparation Analysis of Otsu Method for Image Braille Segmentation : Python Approaches Wicaksana, Ardi Anugerah; Handayani, Anik Nur
Indonesian Journal of Data and Science Vol. 6 No. 2 (2025): Indonesian Journal of Data and Science
Publisher : yocto brain

DOI: 10.56705/ijodas.v6i2.268

Abstract

Braille plays a crucial role in supporting literacy for individuals with visual impairments. However, converting Braille documents into digital text remains a technical challenge, particularly in accurately segmenting Braille dots from scanned images. This study aims to evaluate and compare the effectiveness of several classical image segmentation techniques, namely Otsu, Otsu Inverse, Otsu Morphology, and Otsu Inverse Morphology, in enhancing Braille image pre-processing. The methods were tested on a set of Braille image datasets and evaluated with six quantitative image quality metrics: Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Mean Absolute Error (MAE), Structural Similarity Index (SSIM), Feature Similarity Index (FSIM), and Edge Similarity Index (ESSIM). The results show that the Otsu Morphology method achieved the highest PSNR (27.6798) and SSIM (0.5548), indicating superior image fidelity and structural preservation, while the standard Otsu method yielded the lowest MSE (113.3485). These findings demonstrate that combining morphological operations with thresholding significantly enhances the segmentation quality of Braille images, supporting better accuracy in subsequent recognition tasks. This approach offers a practical and efficient alternative to deep learning models, particularly for resource-constrained systems such as portable Braille readers.
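
As a rough illustration of the compared variants, the following Python/OpenCV sketch applies Otsu thresholding (optionally inverted), adds an optional morphological step, and scores the result with PSNR and SSIM. The 3x3 elliptical kernel, the choice of an opening operation, and the input filename are assumptions, not details taken from the paper.

import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def segment_braille(gray, inverse=False, use_morphology=False):
    # Otsu picks the threshold automatically; the inverse flag gives the
    # "Otsu Inverse" variants described in the abstract
    flag = cv2.THRESH_BINARY_INV if inverse else cv2.THRESH_BINARY
    _, binary = cv2.threshold(gray, 0, 255, flag + cv2.THRESH_OTSU)
    if use_morphology:
        # kernel size and the opening operation are assumptions; the abstract
        # does not specify the morphological settings
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return binary

gray = cv2.imread("braille_page.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
segmented = segment_braille(gray, use_morphology=True)        # the "Otsu Morphology" variant
print("PSNR:", peak_signal_noise_ratio(gray, segmented, data_range=255))
print("SSIM:", structural_similarity(gray, segmented, data_range=255))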
YOLOv8 Implementation on British Sign Language System with Edge Detection Extraction Romadlon, Muhammad Rizqi; Handayani, Anik Nur
Indonesian Journal of Data and Science Vol. 6 No. 2 (2025): Indonesian Journal of Data and Science
Publisher : yocto brain

DOI: 10.56705/ijodas.v6i2.276

Abstract

This study presents the development and implementation of a deep learning-based system for recognizing static hand gestures in British Sign Language (BSL). The system uses the YOLOv8 model in conjunction with edge detection extraction techniques. The objective is to enhance recognition accuracy and facilitate communication for individuals with hearing impairments. The dataset was obtained from Kaggle and comprises images of various BSL hand signs captured against a uniform green background under consistent lighting conditions. Preprocessing entailed resizing the images to 640 × 640 pixels, applying pixel normalization, filtering out low-quality images, and employing data augmentation techniques such as horizontal flipping, rotation, shear, and brightness adjustments to improve robustness. Edge detection was applied to accentuate the contours of the hand, thereby facilitating more precise gesture identification. Manual annotation was performed to generate both bounding boxes and segmentation masks, allowing the training of two model variants: YOLOv8 (non-segmentation) and YOLOv8-seg (segmentation). Both models were trained for 100 epochs using the Adam optimizer and binary cross-entropy loss. The training-to-testing data splits were 50:50, 60:40, 70:30, and 80:20. The evaluation metrics included mAP@50, precision, recall, and F1-score. The YOLOv8-seg model with an 80:20 split demonstrated the best performance, with a precision of 0.974, a recall of 0.968, and an mAP@50 of 0.979, indicating robust detection and localization. Despite requiring greater computational resources, the segmentation model offers enhanced contour recognition, making it well suited for high-precision applications. However, the generalizability of the model is constrained by the use of static gestures and controlled backgrounds. Future work should incorporate dynamic gestures, varied backgrounds, and uncontrolled lighting to improve real-world performance.
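
A minimal sketch of this kind of setup with the Ultralytics YOLOv8 API is shown below: Canny edge extraction as a preprocessing step, then training the segmentation variant for 100 epochs with the Adam optimizer. The dataset YAML path, the nano model size, and the Canny thresholds are placeholders, since the abstract does not state them.

import cv2
from ultralytics import YOLO

def extract_edges(image_path, out_path):
    # resize to 640 x 640 as in the paper, then apply Canny edge detection
    # to accentuate hand contours; thresholds (100, 200) are assumptions
    img = cv2.imread(image_path)
    img = cv2.resize(img, (640, 640))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(out_path, cv2.Canny(gray, 100, 200))

# segmentation variant (YOLOv8-seg); model size and dataset config are placeholders
model = YOLO("yolov8n-seg.pt")
model.train(data="bsl_dataset.yaml", epochs=100, imgsz=640, optimizer="Adam")
metrics = model.val()   # reports precision, recall, and mAP@50 on the validation split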
Classification of Lontara Script Using K-NN Algorithm, Decision Tree, and Random Forest Based on Hu Moments and Canny Segmentation Septiani, Berlian; Hasanuddin, Tasrif; Astuti, Wistiani
Indonesian Journal of Data and Science Vol. 6 No. 2 (2025): Indonesian Journal of Data and Science
Publisher : yocto brain

DOI: 10.56705/ijodas.v6i2.281

Abstract

Lontara script is a traditional writing system of the Bugis-Makassar people of South Sulawesi, used to write the Bugis, Makassar, and Mandar languages. It is an abugida, in which each letter represents a consonant with an inherent vowel. The script was once used to record history, customary law, and literature, but its use has declined under the influence of the Latin alphabet; today it is preserved through education and digitization as part of the cultural heritage of the Indonesian archipelago. In this article, the researchers use a dataset of handwritten Lontara Bugis-Makassar characters. The process begins with the collection of character datasets, which are then processed through Canny segmentation and Hu Moment feature extraction to obtain a shape representation that is invariant to rotation and scale. The processed data were divided into training and testing sets, then classified using the K-NN, Decision Tree, and Random Forest algorithms. The results show that the K-NN algorithm with 6 neighbors achieved the highest performance, with accuracy, precision, and recall of 98%. The Decision Tree algorithm achieved an accuracy of 96.67%, a precision of 96.22%, a recall of 95.33%, and an F1-score of 95.98%, while Random Forest achieved an accuracy of 96.67%, a precision of 96.34%, a recall of 96%, and an F1-score of 95.98%.
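
The feature-extraction and classification steps can be sketched in Python as follows; the Canny thresholds, the log-scale transform of the Hu Moments, and the 70:30 split are illustrative assumptions rather than the authors' exact configuration.

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def hu_features(image_path):
    # Canny segmentation followed by the seven Hu Moments; the log-scale
    # transform (a common convention) and the thresholds are assumptions
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)
    hu = cv2.HuMoments(cv2.moments(edges)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def compare_classifiers(X, y):
    # X: Hu Moment feature vectors, y: Lontara character labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    models = [("K-NN (k=6)", KNeighborsClassifier(n_neighbors=6)),
              ("Decision Tree", DecisionTreeClassifier(random_state=42)),
              ("Random Forest", RandomForestClassifier(random_state=42))]
    for name, clf in models:
        clf.fit(X_train, y_train)
        print(name, accuracy_score(y_test, clf.predict(X_test)))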
Deep Learning-Based Blood Cell Image Classification Using ResNet18 Architecture Edyson Tarigan, Thomas; Prasetyo, Agung Budi; Susanti, Emy
Indonesian Journal of Data and Science Vol. 6 No. 2 (2025): Indonesian Journal of Data and Science
Publisher : yocto brain

DOI: 10.56705/ijodas.v6i2.300

Abstract

The classification of white blood cells (WBC) plays a critical role in haematological diagnostics, yet manual examination remains a labour-intensive and subjective process. In response to this challenge, this study investigates the application of deep learning, specifically the ResNet18 convolutional neural network architecture, for the automated classification of blood cell images into four classes: eosinophils, lymphocytes, monocytes, and neutrophils. The dataset used comprises microscopic images annotated by cell type and is divided into training and validation sets with an 80:20 ratio. Standard pre-processing techniques such as image normalization and augmentation were applied to enhance model robustness and generalization. The model was fine-tuned using transfer learning with pre-trained weights from ImageNet and optimized using the Adam optimizer. Performance was evaluated through a comprehensive set of metrics including accuracy, precision, recall, F1-score, mean squared error (MSE), and root mean squared error (RMSE). The best model achieved a validation accuracy of 86.89%, with macro-averaged precision, recall, and F1-score of 0.8738, 0.8690, and 0.8688, respectively. Lymphocyte classification yielded the highest F1-score (0.9515), while eosinophils posed the greatest classification challenge, as evidenced by lower precision and higher misclassification rates in the confusion matrix. Error-based evaluation further supported the model’s consistency, with an MSE of 0.7125 and RMSE of 0.8441. These results confirm that ResNet18 is capable of learning discriminative features in complex haematological imagery, providing an efficient and reliable alternative to manual analysis. The findings suggest potential for practical implementation in clinical workflows and pave the way for further research involving multi-model ensembles or cell segmentation pre-processing for improved precision.
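
A minimal PyTorch sketch of this transfer-learning setup is given below, assuming a standard ImageFolder layout, batch size, and learning rate that the abstract does not specify; only one training epoch is shown.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet statistics
])
train_set = datasets.ImageFolder("blood_cells/train", transform=transform)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet18 backbone pre-trained on ImageNet, final layer replaced for the four WBC classes
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:   # one epoch shown; repeat and validate as needed
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()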
A Comparative Study of Public Opinion on Indonesian Police: Examining Cases in the Aftermath of the Kanjuruhan Football Disaster Purnawansyah, Purnawansyah; Raja, Roesman Ridwan; Darwis, Herdianti
Indonesian Journal of Data and Science Vol. 6 No. 2 (2025): Indonesian Journal of Data and Science
Publisher : yocto brain

DOI: 10.56705/ijodas.v6i2.235

Abstract

This research explores public sentiment towards the Indonesian police using sentiment analysis and machine learning techniques. The study addresses the challenge of understanding public opinion from social media comments related to significant police cases, with the aim of comparing reported satisfaction levels with actual public sentiment. Comments were preprocessed and analyzed using the Indonesian RoBERTa base IndoLEM sentiment classifier. Classification was then conducted using Random Forest (RF) and Complement Naive Bayes (CNB) models, incorporating unigram and bi-gram features, with oversampling applied to handle data imbalance. The best-performing model, Random Forest with bi-gram features, achieved high evaluation scores, including a precision of 0.91 and an accuracy of 0.91. The findings reveal significant insights into public opinion, contributing to improved law enforcement strategies and public trust.
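
The classification stage can be sketched as below: n-gram features, oversampling of the training split, then Random Forest and Complement Naive Bayes. The TF-IDF vectorizer, the 80:20 split, and RandomOverSampler are assumptions, since the abstract names only unigram/bi-gram features and oversampling without specifying the exact techniques; the function takes raw comment texts and sentiment labels.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import ComplementNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import RandomOverSampler

def evaluate(comments, sentiment_labels):
    # unigram + bigram term features, matching the paper's bi-gram configuration
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform(comments)
    X_train, X_test, y_train, y_test = train_test_split(
        X, sentiment_labels, test_size=0.2, random_state=42)
    # balance the classes on the training split only
    X_train, y_train = RandomOverSampler(random_state=42).fit_resample(X_train, y_train)
    for name, clf in [("Random Forest", RandomForestClassifier(random_state=42)),
                      ("Complement Naive Bayes", ComplementNB())]:
        clf.fit(X_train, y_train)
        print(name)
        print(classification_report(y_test, clf.predict(X_test)))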