Contact Name
Adyanata Lubis
Contact Email
jmnr@rokania.ac.id
Phone
+628127651902
Journal Mail Official
jmnr@rokania.ac.id
Editorial Address
Jl. Raya Pasir Pengaraian, Km 15 Langkitin, Kec. Rambah Samo, Kab. Rokan Hulu
Location
Kab. Rokan Hulu,
Riau
INDONESIA
JOURNAL OF ICT APPLICATIONS AND SYSTEM
Published by STKIP Rokania
ISSN: 2830-1404     EISSN: 2830-098X     DOI: https://doi.org/10.56313/jictas
The Journal of ICT Applications System is a scientific journal that presents original articles on computer science research. It serves as a venue for publishing and sharing research and development work in the field of computing. Articles are accepted through the journal's online submission system; complete submission information and author guidelines are available in each issue. Submitted articles undergo a selection process by peer reviewers and/or the editors. The Journal of ICT Applications System is published twice a year, in June and December, and is registered at PDII LIPI with Print ISSN 2830-1404 and Online ISSN 2830-098X. Practitioners, academics, teachers, and students in computer science who wish to publish research results and ideas in this journal may do so via the online submission system.
Articles: 38 Documents
Sentiment Classification of Public Tweets Towards CGV Cinemas on Social Media X Using Naive Bayes Algorithm Zulkifli, Akhmad
Journal of ICT Applications System Vol 4 No 1 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i1.418

Abstract

In the era of digital communication, sentiment analysis on social media platforms provides businesses with valuable insights into public perception. This research aims to classify public sentiment toward CGV cinemas in Indonesia through tweets collected from Social Media X using the Naive Bayes algorithm. A total of 4,000 tweets were preprocessed through a series of text normalization techniques, including tokenization, stop word removal, and stemming. Text features were transformed using the TF-IDF method. The Naive Bayes classifier was trained and evaluated using an 80:20 train-test split. Experimental results showed an overall classification accuracy of 38.05%, with the model performing significantly better on positive sentiments (F1-score: 0.538) than on neutral and negative ones. These findings highlight the capability and limitations of traditional probabilistic classifiers when dealing with short, noisy textual data in multilingual social contexts. This study contributes to applied sentiment analysis and offers a baseline for future comparison with more sophisticated models.
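The pipeline this abstract outlines (TF-IDF features, a Naive Bayes classifier, an 80:20 split) maps directly onto scikit-learn. Below is a minimal sketch under that reading; the example tweets, labels, and parameter choices are illustrative placeholders, not the paper's actual data or code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

# Placeholder tweets and sentiment labels; the study used 4,000 real tweets.
tweets = ["the new cgv lounge is great", "ticket queue was terrible",
          "watched a movie at cgv today", "sound system keeps cutting out",
          "best cinema experience so far", "screening started late again"]
labels = ["positive", "negative", "neutral", "negative", "positive", "negative"]

# TF-IDF feature extraction, then the 80:20 train-test split the paper reports.
X = TfidfVectorizer().fit_transform(tweets)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42)

clf = MultinomialNB().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```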
Optimized Detection of Red Devil Fish in Low-Quality Underwater Images from Lake Toba Using a Hybrid CNN and Transfer Learning Approach Enda Ribka Meganta P; Yanto, Budi
Journal of ICT Applications System Vol 4 No 1 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i1.429

Abstract

The detection of freshwater fish in turbid underwater environments presents significant challenges due to poor image quality caused by low lighting, suspended particles, and visual noise. This study proposes an optimized detection model for Amphilophus labiatus (Red Devil fish) in the murky waters of Lake Toba, Indonesia, using a hybrid Convolutional Neural Network (CNN) integrated with transfer learning and visual enhancement techniques. The proposed architecture combines MobileNetV2 and ResNet50 backbones with CLAHE (Contrast Limited Adaptive Histogram Equalization) and median filtering to improve image clarity and feature extraction. A custom dataset comprising 3,500 annotated underwater images was used to train and evaluate the model. The hybrid model achieved a detection accuracy of 96.1%, a precision of 95.6%, a recall of 94.8%, and a mean Average Precision (mAP@0.5) of 0.941, outperforming baseline models such as YOLOv5 and Faster R-CNN. Visual diagnostics and Grad-CAM attention maps confirm the model's ability to focus on key anatomical features under varying image conditions. The architecture is optimized for real-time deployment on edge-AI devices, supporting conservation efforts and biodiversity monitoring in freshwater ecosystems.
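The enhancement step named in the abstract, CLAHE followed by median filtering, is standard OpenCV territory. A minimal sketch, assuming a BGR frame on disk; the file name, clip limit, tile size, and kernel size are assumptions, not the paper's settings.

```python
import cv2

img = cv2.imread("underwater_frame.jpg")       # hypothetical Lake Toba frame
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)     # apply CLAHE on the luminance channel only
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
denoised = cv2.medianBlur(enhanced, 5)         # median filter suppresses suspended-particle noise
cv2.imwrite("underwater_frame_enhanced.jpg", denoised)
```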
Implementation of YOLOv8 for Object Detection in Urban Traffic Surveillance: A Case Study on Vehicles and Pedestrians from CCTV Imagery Saragih, Rusmin; Imeldawaty Gultom; Frans Ikorasaki; Theodora MV Nainggolan
Journal of ICT Applications System Vol 4 No 1 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i1.430

Abstract

This study implements the YOLOv8 object detection algorithm to enhance traffic surveillance through accurate identification of multiple road entities, including cars, motorcycles, trucks, and pedestrians. Using a 41-second CCTV video as the primary dataset, the research adopts a deep learning-based training approach via Google Colab to evaluate YOLOv8's performance under real-world urban conditions. The detection model was assessed using key evaluation metrics such as accuracy, precision, recall, and Mean Average Precision (mAP). The experimental results demonstrate that YOLOv8 achieves an overall detection accuracy of 80%, showing reliable performance in identifying vehicles and people despite challenges such as occlusions, varied lighting, and complex backgrounds. However, accuracy variations were observed in cases involving partial visibility and non-optimal camera angles. The findings highlight the potential of YOLOv8 as a robust and scalable solution for real-time traffic object detection, with implications for smart city development and automated traffic management systems. Further improvements are recommended in dataset diversity and model fine-tuning to enhance detection robustness across dynamic traffic scenarios.
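For orientation, this is roughly what YOLOv8 inference on a CCTV clip looks like with the Ultralytics API; the video path, the pretrained nano weights, and the COCO class filter are assumptions, since the paper's trained weights and exact configuration are not published.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic pretrained weights, not the paper's model
# COCO class ids: 0=person, 2=car, 3=motorcycle, 7=truck (the entities the study targets)
for result in model("traffic_cctv.mp4", classes=[0, 2, 3, 7], stream=True):
    for box in result.boxes:
        print(result.names[int(box.cls)], float(box.conf))
```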
Anomaly-Based Financial Fraud Detection Using Autoencoder: A Case Study on the Kaggle Credit Card Dataset Andri Nata; Dudes Manalu; Jaya Tata Hardinata; Peniel Sam Putra Sitorus
Journal of ICT Applications System Vol 4 No 1 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i1.431

Abstract

Financial fraud remains a critical challenge for banking systems and digital payment platforms worldwide. With the rapid growth of electronic transactions, effective fraud detection mechanisms are essential to ensure security and user trust. This study explores the application of an unsupervised deep learning model, the Autoencoder, for anomaly-based financial fraud detection. Utilizing the publicly available Kaggle Credit Card Fraud Detection dataset, which comprises 284,807 transactions including 492 fraudulent cases, the model is trained exclusively on legitimate transactions to learn typical behavioral patterns. Prior to training, the dataset underwent feature anonymization using Principal Component Analysis (PCA), and numerical columns such as "Amount" and "Time" were normalized using Min-Max Scaling. The Autoencoder architecture includes three encoder and decoder layers with ReLU activations, and is optimized using the Adam optimizer with Mean Squared Error (MSE) as the loss function. Experimental results show that the model achieves a classification accuracy of 94% and an AUC score of 0.931, indicating strong potential for detecting anomalies. However, the precision for identifying fraudulent transactions remains relatively low (5%), reflecting the challenges posed by imbalanced datasets. Despite this, the study demonstrates that Autoencoder offers a promising foundation for fraud detection systems, with further improvements possible through model integration and hybrid ensemble techniques.
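The abstract pins down the architecture (three encoder and three decoder layers, ReLU activations, Adam optimizer, MSE loss, trained only on legitimate transactions), which is enough for a hedged Keras sketch; the layer widths, epoch count, and threshold percentile below are illustrative assumptions, and random data stands in for the Min-Max-scaled dataset.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 30                                   # Kaggle dataset: Time, V1..V28, Amount
X_legit = np.random.rand(1000, n_features).astype("float32")  # stand-in for legitimate rows

autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(24, activation="relu"),          # encoder (widths are assumptions)
    layers.Dense(16, activation="relu"),
    layers.Dense(8, activation="relu"),           # bottleneck
    layers.Dense(16, activation="relu"),          # decoder
    layers.Dense(24, activation="relu"),
    layers.Dense(n_features, activation="linear"),  # output reconstructs the input
])
autoencoder.compile(optimizer="adam", loss="mse")   # Adam + MSE, as the abstract states
autoencoder.fit(X_legit, X_legit, epochs=5, batch_size=64, verbose=0)

# Flag transactions whose reconstruction error exceeds a chosen percentile cutoff.
errors = np.mean((X_legit - autoencoder.predict(X_legit, verbose=0)) ** 2, axis=1)
threshold = np.percentile(errors, 99)             # illustrative anomaly threshold
```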
Automatic Food Label Detection in Images Using Convolutional Neural Network with Food-101 Dataset Natasya, Ccely; Aisyah, Nur; Prasiwiningrum, Elyandri; Yulfita Aini
Journal of ICT Applications System Vol 4 No 1 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i1.432

Abstract

Automatic detection of food labels from digital images has emerged as a crucial application in dietary analysis, nutrition monitoring, and smart culinary systems. This study presents the implementation of a Convolutional Neural Network (CNN) model for food label recognition using the Food-101 dataset, which consists of over 101,000 images from 101 distinct food categories. The proposed system follows a systematic pipeline that includes image resizing, normalization, and data augmentation to enhance model robustness and performance. The CNN architecture is designed with multiple convolutional and pooling layers, followed by dense and softmax output layers for final classification. The training was conducted using the Adam optimizer with a learning rate of 0.0001, batch size of 32, and dropout regularization to prevent overfitting. Experimental results demonstrate a classification accuracy of 24.45% after one training epoch, highlighting both the capability and limitations of the baseline CNN model. Despite moderate accuracy, the model successfully identifies visually distinguishable food items and sets a foundation for future improvements through transfer learning and fine-tuning. This research confirms the potential of CNN-based models for food label detection and provides insights for the development of more accurate food recognition systems in health, dietary, and culinary applications.
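The abstract gives enough hyperparameters (Adam at a 0.0001 learning rate, batch size 32, dropout, a softmax head over 101 classes) for a hedged Keras sketch of such a baseline; the input size and layer widths are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # assumed resize target
    layers.Conv2D(32, 3, activation="relu"),    # convolutional + pooling stages
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                        # dropout regularization, as reported
    layers.Dense(101, activation="softmax"),    # one unit per Food-101 class
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```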
Explainable Transformer-Based Object Detection for Autonomous Systems under Adversarial and Low-Light Conditions Elyandri Prasiwiningrum; Aris Sudaryanto
Journal of ICT Applications System Vol 4 No 2 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i2.444

Abstract

Recent advancements in object detection have demonstrated remarkable performance in autonomous systems; however, most deep learning models still suffer significant accuracy degradation under low-light or adversarial conditions. This study proposes an Explainable Transformer-Based Object Detection (ETOD) framework that integrates Vision Transformer (ViT) architecture with Explainable Artificial Intelligence (XAI) mechanisms to achieve robust and interpretable object detection in adverse environments. The proposed ETOD model employs a dual-branch structure: (i) a low-light enhancement module that uses contrastive illumination normalization to recover critical features, and (ii) a transformer-based detection head optimized for global contextual reasoning. To ensure explainability, Grad-CAM and attention visualization maps are incorporated to highlight the model's focus regions, providing interpretive insights for human operators and safety auditors. Experimental evaluation was conducted using benchmark datasets (ExDark, BDD100K-Night, and COCO-Adversarial) with simulated adversarial perturbations and low-illumination conditions. The proposed ETOD achieved a 12.8% improvement in mAP over standard DETR and 17.5% higher robustness against adversarial attacks while maintaining real-time inference on edge GPUs. Qualitative analysis demonstrates that the explainability module provides clear visual cues that correlate strongly with detected object boundaries. The findings suggest that integrating transformer-based detection with explainable reasoning mechanisms offers a promising pathway for trustworthy and safety-critical perception systems in autonomous vehicles and drones.
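The ETOD model itself is not publicly available, so as a stand-in the sketch below shows how decoder cross-attention maps can be extracted from the standard DETR baseline the paper compares against; these maps are the raw material for the kind of attention visualization the abstract describes. The image path is hypothetical.

```python
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=Image.open("night_scene.jpg"), return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# One cross-attention tensor per decoder layer: (batch, heads, queries, feature-map pixels).
# Reshaped back onto the feature-map grid, these become per-query attention heatmaps.
print(outputs.cross_attentions[-1].shape)
```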
Retinal Disease Classification Using Deep CNN on Fundus Images Yanto, Adri; Pratama, Yogi; Ridwan
Journal of ICT Applications System Vol 4 No 2 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i2.451

Abstract

Diabetic retinopathy (DR) is one of the primary causes of preventable blindness, highlighting the necessity for accurate and automated retinal screening systems. Manual diagnosis through fundus image inspection is time-consuming and prone to subjective interpretation, particularly in regions with limited access to ophthalmic specialists. This study presents a deep convolutional neural network (CNN) approach based on ResNet50 architecture with fine-tuning for multi-class classification of retinal diseases. The proposed model was developed using the APTOS 2019 Blindness Detection dataset, consisting of 3,662 fundus images categorized into five levels of DR severity. A robust preprocessing pipeline, including illumination correction, contrast enhancement, normalization, and extensive data augmentation, was implemented to improve image quality and balance the dataset. The network was trained using the Adam optimizer with a learning rate of 1×10⁻⁴ and categorical cross-entropy loss for 30 epochs under an 80:20 train–validation split. Experimental evaluation demonstrated high performance with 92.4% accuracy, 0.91 precision, 0.92 recall, 0.91 F1-score, and an AUC of 0.95, outperforming baseline CNN and VGG16 models. Furthermore, Grad-CAM visualization confirmed that the model accurately localized critical retinal regions associated with microaneurysms, hemorrhages, and exudates, enhancing interpretability and clinical trust. The proposed ResNet50-based framework provides an explainable, efficient, and reliable solution for automated diabetic retinopathy detection, supporting large-scale tele-ophthalmology and early diagnosis applications in medical imaging.
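A minimal Keras sketch of the reported setup: a fine-tuned ResNet50 with the Adam optimizer and categorical cross-entropy over five severity classes. The input resolution, classification-head width, and the exact learning-rate value are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = True  # fine-tuning the backbone, as described

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # head width is an assumption
    layers.Dropout(0.3),
    layers.Dense(5, activation="softmax"),  # five DR severity levels
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # assumed rate
              loss="categorical_crossentropy", metrics=["accuracy"])
```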
Enhanced Classification of Brain MRI Images for Tumor Detection Using Transfer Learning and Grad-CAM-Based Explainable Convolutional Neural Network (CNN) Putra, Irwandi Rizki; Zulrahmadi; Andri Swandi; Yulia; Tasya Destria Putri
Journal of ICT Applications System Vol 4 No 2 (2025): Journal of ICT Applications and System
Publisher : Lembaga Penelitian dan Pengabdian Masyarakat

DOI: 10.56313/jictas.v4i2.454

Abstract

Accurate and explainable classification of brain Magnetic Resonance Imaging (MRI) is crucial for the early detection and treatment of brain tumors. This study introduces an enhanced deep learning framework that integrates transfer learning with Grad-CAM-based explainable Convolutional Neural Network (CNN) for tumor classification. The proposed approach utilizes a fine-tuned EfficientNet-B0 architecture with an optimized preprocessing pipeline consisting of Contrast Limited Adaptive Histogram Equalization (CLAHE), normalization, and multi-variant augmentation (rotation, flipping, and zoom). The model was trained on a publicly available brain MRI dataset comprising 3,000 images classified into four categories: glioma, meningioma, pituitary tumor, and non-tumor. Evaluation metrics include accuracy, precision, recall, F1-score, and AUC. Experimental results demonstrate that the proposed model achieves an accuracy of 94.2% and an AUC of 0.965, outperforming baseline CNN models by a significant margin. The use of Grad-CAM visualization provides interpretability by localizing tumor regions within MRI scans, thereby increasing the model’s clinical transparency. This study highlights the potential of explainable deep learning models to enhance diagnostic reliability in automated brain tumor detection systems.
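Grad-CAM over EfficientNet-B0, the combination this abstract names, follows the standard Keras recipe sketched below; ImageNet weights and a random stand-in image replace the paper's fine-tuned four-class model, and "top_conv" is EfficientNet-B0's final convolutional layer in keras.applications.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

base = keras.applications.EfficientNetB0(weights="imagenet")  # stand-in for the fine-tuned model
grad_model = keras.Model(base.input,
                         [base.get_layer("top_conv").output, base.output])

img = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0  # stand-in for an MRI slice

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    class_score = preds[:, tf.argmax(preds[0])]   # score of the top predicted class

grads = tape.gradient(class_score, conv_out)      # d(score) / d(feature map)
weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool the gradients
cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
heatmap = (cam / (tf.reduce_max(cam) + 1e-8)).numpy()[0]  # normalized map to overlay on the scan
```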
