Contact Name
Dede Kurniadi
Contact Email
dede.kurniadi@itg.ac.id
Phone
+6287880007464
Journal Mail Official
jistics@aptika.org
Editorial Address
Green Garden Residence C-87, Kabupaten Garut, Provinsi Jawa Barat, Indonesia, 44151
Location
Kab. Garut,
Jawa Barat
INDONESIA
Journal of Intelligent Systems Technology and Informatics
ISSN : -     EISSN : 3109-757X     DOI : https://doi.org/10.64878/jistics
The Journal of Intelligent Systems Technology and Informatics (JISTICS) is an international peer-reviewed open-access journal that publishes high-quality research in the fields of Artificial Intelligence, Intelligent Systems, Information Technology, Computer Science, and Informatics. JISTICS aims to foster global scientific exchange by providing a platform for researchers, practitioners, and academics to disseminate original findings, critical reviews, and innovative applications. The journal is published three times a year (March, July, November) and may also publish special issues on emerging topics.
Articles: 5 Documents
Search results for issue "Vol 1 No 1 (2025): JISTICS Vol. 1 No. 1 March 2025": 5 documents
Indonesian Sign Language Alphabet Image Classification using Vision Transformer
Agustiansyah, Yoga; Kurniadi, Dede
Journal of Intelligent Systems Technology and Informatics Vol 1 No 1 (2025): JISTICS Vol. 1 No. 1 March 2025
Publisher : Aliansi Peneliti Informatika

DOI: 10.64878/jistics.v1i1.5

Abstract

Effective communication is fundamental for social interaction, yet individuals with hearing impairments often face significant barriers. Indonesian Sign Language (BISINDO) is a vital communication tool for the deaf community in Indonesia. However, limited public understanding of BISINDO creates communication barriers, which necessitate an accurate automatic recognition system. This research aims to investigate the efficacy of the Vision Transformer (ViT) model, a state-of-the-art deep learning architecture, for classifying static BISINDO alphabet images, exploring its potential to overcome the limitations of previous approaches through robust feature extraction. The methodology involved utilizing a dataset of 26 BISINDO alphabet classes, which underwent comprehensive preprocessing, including class balancing via augmentation and image normalization. The Google/vit-base-patch16-224-in21k ViT model was adapted with a custom classification head and trained using a two-phase strategy: initial feature extraction with a frozen backbone, followed by full network fine-tuning. The fine-tuned Vision Transformer model demonstrated exceptional performance on the unseen test set, achieving an accuracy of 99.77% (95% CI: 99.55%–99.99%), precision of 99.77%, recall of 99.72%, and a weighted F1-score of 0.9977, significantly surpassing many previously reported methods. The findings compellingly confirm that the ViT model is a highly effective and robust solution for BISINDO alphabet image classification, underscoring the potential of advanced Transformer-based architectures in developing accurate assistive communication technologies to benefit the Indonesian deaf and hard-of-hearing community.
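For readers who want a concrete picture of the two-phase strategy summarized above, the following Python sketch (using the Hugging Face Transformers and PyTorch libraries) shows one way it could look. The data loader, epoch counts, and learning rates are illustrative assumptions, not the authors' reported configuration.

import torch
from transformers import ViTForImageClassification

def two_phase_finetune(train_loader, num_labels=26):
    # Load the pre-trained ViT backbone and attach a fresh 26-class head.
    model = ViTForImageClassification.from_pretrained(
        "google/vit-base-patch16-224-in21k", num_labels=num_labels)

    def run(epochs, lr):
        optimizer = torch.optim.AdamW(
            (p for p in model.parameters() if p.requires_grad), lr=lr)
        model.train()
        for _ in range(epochs):
            for pixel_values, labels in train_loader:
                out = model(pixel_values=pixel_values, labels=labels)
                out.loss.backward()
                optimizer.step()
                optimizer.zero_grad()

    # Phase 1: freeze the backbone so only the new classification head learns.
    for p in model.vit.parameters():
        p.requires_grad = False
    run(epochs=5, lr=1e-3)

    # Phase 2: unfreeze everything and fine-tune the full network at a lower rate.
    for p in model.vit.parameters():
        p.requires_grad = True
    run(epochs=5, lr=2e-5)
    return model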
Comparison of CNN Models Using EfficientNetB0, MobileNetV2, and ResNet50 for Traffic Density with Transfer Learning
Fauzi, Dhika Restu; Haqdu D, Gezant Ashabil
Journal of Intelligent Systems Technology and Informatics Vol 1 No 1 (2025): JISTICS Vol. 1 No. 1 March 2025
Publisher : Aliansi Peneliti Informatika

DOI: 10.64878/jistics.v1i1.6

Abstract

Traffic congestion in urban areas poses a significant and widespread challenge, stemming from the essential role of modern transportation in daily human activities. To address this issue, artificial intelligence (AI), particularly through the application of convolutional neural networks (CNNs), offers a promising solution for developing automated, accurate, and efficient traffic density classification systems. However, the performance of such systems is critically dependent on the selection of an optimal model architecture. This study comprehensively analyzes three leading pre-trained CNN models: EfficientNetB0, MobileNetV2, and ResNet50. Utilizing a transfer learning approach, the models were trained over 20 epochs to classify traffic density into five categories: Empty, Low, Medium, High, and Traffic Jam. The research methodology was based on the public Traffic Density Singapore dataset. To enhance model robustness and address class imbalances, the initial dataset of 4,038 images was expanded to 6,850 images through data augmentation techniques. All images were subsequently resized to a uniform 224×224 pixels. The evaluation results conclusively demonstrate that the ResNet50 architecture delivered superior performance, achieving a validation accuracy of approximately 85%. Furthermore, ResNet50 consistently yielded higher precision, recall, and F1-score values across most classes. For comparison, EfficientNetB0 and MobileNetV2 achieved validation accuracies of 81% and 79%, respectively. This study concludes that ResNet50 is the optimal architecture for this classification task, and these findings establish a foundation for developing real-world, intelligent traffic monitoring systems.
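As an illustration of the transfer-learning setup compared above, the Keras sketch below builds a frozen-backbone classifier for each of the three architectures. The data pipeline, optimizer, and per-backbone preprocessing are simplified assumptions rather than the study's exact configuration.

import tensorflow as tf

BACKBONES = {
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "ResNet50": tf.keras.applications.ResNet50,
}

def build_classifier(backbone_fn, num_classes=5):
    # Reuse frozen ImageNet features and train only a new softmax head
    # over the five traffic-density categories.
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Train each backbone on the same splits for 20 epochs and compare validation
# accuracy (train_ds and val_ds would be tf.data.Dataset objects):
# for name, fn in BACKBONES.items():
#     build_classifier(fn).fit(train_ds, validation_data=val_ds, epochs=20)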
Image Classification Using MobileNet Based on CNN Architecture for Grape Leaf Disease Detection
Nur Sahid, Ahmad; Cahyadi, Deden Ruli
Journal of Intelligent Systems Technology and Informatics Vol 1 No 1 (2025): JISTICS Vol. 1 No. 1 March 2025
Publisher : Aliansi Peneliti Informatika

DOI: 10.64878/jistics.v1i1.7

Abstract

Grape cultivation, while economically important, is often challenged by various leaf diseases that can significantly impact yield and quality, underscoring the need for rapid and accurate detection methods. Traditional diagnostic approaches can be time-consuming and require expert knowledge, whereas advanced image classification techniques offer a promising avenue for automated disease identification. This research aimed to develop and rigorously evaluate a Convolutional Neural Network (CNN) model, specifically leveraging the MobileNetV2 architecture, for the precise classification of grape leaf images into four classes: healthy, Black Rot, Esca (also known as Black Measles), and Leaf Blight. The methodology encompassed dataset acquisition and pre-processing, data augmentation to increase training data diversity, and the application of transfer learning using pre-trained MobileNetV2 weights, followed by a fine-tuning stage to adapt the model to the specific task. A comprehensive evaluation on 1,805 previously unseen test images demonstrated the model's exceptional performance, achieving an overall accuracy of 99.89%. Ultimately, the proposed approach significantly outperforms previous methods, demonstrating the feasibility of applying lightweight CNN architectures to real-world detection scenarios. The main contribution of this research is showing that high computational efficiency can be achieved without sacrificing accuracy, paving the way for implementation in digital detection systems with limited resources, particularly on mobile devices or edge systems.
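To make the transfer-learning-then-fine-tuning procedure described above more tangible, here is a minimal Keras sketch for a four-class MobileNetV2 classifier. The input size, layer cutoff, and learning rates are assumptions for illustration, not values taken from the paper.

import tensorflow as tf

# Stage 1: a frozen MobileNetV2 backbone with a new 4-class head
# (healthy, Black Rot, Esca, Leaf Blight).
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Stage 2: unfreeze the upper part of the backbone and fine-tune at a small
# learning rate so the pre-trained features adapt to grape-leaf images.
base.trainable = True
for layer in base.layers[:100]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])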
Brain Tumor Classification using Convolutional Neural Network with ResNet Architecture
Fadilah, Azki; Azkia, Azka
Journal of Intelligent Systems Technology and Informatics Vol 1 No 1 (2025): JISTICS Vol. 1 No. 1 March 2025
Publisher : Aliansi Peneliti Informatika

DOI: 10.64878/jistics.v1i1.8

Abstract

Brain tumors are dangerous, sometimes fatal illnesses that require prompt, accurate diagnosis to improve patient outcomes. Given the intricacy and diversity of tumor characteristics, manual interpretation of brain MRI data is frequently laborious and prone to human error. This research aims to create an automated system for classifying brain tumors by integrating the Convolutional Neural Network (CNN) algorithm with the ResNet architecture. The proposed approach uses 7,023 MRI images divided into four categories: non-tumor, pituitary tumor, meningioma, and glioma. Image normalization, grayscale conversion, resizing, and data augmentation methods, including rotation and flipping, were among the preprocessing steps used to enhance model performance. The ResNet architecture was chosen because its residual connections allow deeper networks to be trained effectively while preventing vanishing gradient problems. The system was trained and then assessed using metrics such as accuracy, precision, recall, and F1-score. On the test data, the model performed consistently across all classes and attained an outstanding accuracy of 94.14%. These results validate the promise of deep learning methods, especially CNNs with ResNet enhancements, for classification tasks involving medical images. The system created in this work is a promising tool for assisting clinical decision-making, cutting down diagnostic time, and improving the accuracy of brain tumor identification and classification.
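The abstract credits ResNet's residual connections with making deeper networks trainable; the short Keras sketch below shows the basic mechanism. The filter counts and projection shortcut are illustrative choices, not details from the paper.

import tensorflow as tf

def residual_block(x, filters):
    # Two convolutions plus an identity (or 1x1 projection) shortcut; the
    # addition lets gradients flow around the convolutional path, which is
    # what mitigates the vanishing-gradient problem in deep networks.
    shortcut = x
    if shortcut.shape[-1] != filters:
        shortcut = tf.keras.layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.Add()([shortcut, y])
    return tf.keras.layers.ReLU()(y)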
Gender Identification from Facial Images Using Custom Convolutional Neural Network Architecture
Amiludin, Ikbal; Putra, Andika Eka Sastya
Journal of Intelligent Systems Technology and Informatics Vol 1 No 1 (2025): JISTICS Vol. 1 No. 1 March 2025
Publisher : Aliansi Peneliti Informatika

DOI: 10.64878/jistics.v1i1.27

Abstract

Gender classification from facial images has become increasingly important in biometric applications. This study introduces a deep learning approach utilizing a custom convolutional neural network (CNN) model trained on 8,908 labeled facial images obtained from Kaggle, comprising 4,169 female and 4,739 male samples. Each image underwent preprocessing, including grayscale conversion, face alignment, cropping, resizing to 100×100 pixels, and pixel normalization. The CNN architecture consists of three convolutional layers with ReLU activation, max-pooling layers, a flatten layer, and two dense layers, ending with a sigmoid activation function for binary classification. The model was implemented using TensorFlow and trained for 70 epochs on Google Colab with GPU acceleration. Evaluation metrics include classification accuracy, confusion matrix, and area under the curve (AUC) from the ROC curve. The proposed system achieved 90.79% accuracy and 0.97 AUC, indicating robust performance. However, the confusion matrix revealed slightly higher precision for male predictions, suggesting the need for class balance refinement. The method demonstrates strong potential for integration into real-world facial analysis systems, such as identity verification, access control, and intelligent surveillance platforms.
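Because this abstract spells out the network layout, it can be mirrored almost directly in a short Keras sketch. The filter counts and hidden dense width below are assumptions, since the abstract does not list them.

import tensorflow as tf

# Three ReLU convolution + max-pooling stages, flatten, two dense layers,
# and a sigmoid output for binary classification on 100x100 grayscale faces.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 100, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])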
