Contact Name
Husni Teja Sukmana
Contact Email
husni@bright-journal.org
Phone
+62895422720524
Journal Mail Official
jads@bright-journal.org
Editorial Address
Gedung FST UIN Jakarta, Jl. Lkr. Kampus UIN, Cemp. Putih, Kec. Ciputat Tim., Kota Tangerang Selatan, Banten 15412
Location
Kota Adm. Jakarta Pusat,
DKI Jakarta
INDONESIA
Journal of Applied Data Sciences
Published by Bright Publisher
ISSN : -     EISSN : 2723-6471     DOI : doi.org/10.47738/jads
One of the current hot topics in science is data: how can datasets be used in scientific and scholarly research in a more reliable, citable and accountable way? Data is of paramount importance to scientific progress, yet most research data remains private. Enhancing the transparency of the processes applied to collect, treat and analyze data will help to render scientific research results reproducible and thus more accountable. The datasets themselves should also be accessible to other researchers, so that research publications, dataset descriptions, and the actual datasets can be linked. The journal Data provides a forum to publish methodical papers on processes applied to data collection, treatment and analysis, as well as for data descriptors publishing descriptions of a linked dataset.
Articles: 518 documents
Model Integration of Information Technology in Optimizing the Food Supply Chain of the Free Nutritious Meal (MBG) Program to Reduce Food Waste Hari, Yulius; Yanggah, Minny Elisa; Budiman, Arief
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1039

Abstract

The Free Nutritious Meal Program in Surabaya is a major government initiative designed to improve child nutrition and reduce hunger among schoolchildren from low-income families. Despite its importance, the program faces significant food loss and waste due to inefficiencies in transportation, storage, and demand matching. This study introduces a Smart MBG Cloud Platform and applies a linear programming model to optimize the program’s supply chain under two operational scenarios: a baseline system without Information Technology (IT) support and an IT-enhanced system integrating route optimization and digital inventory monitoring. The study used a mixed-method approach involving samples from schools as beneficiaries and the Nutrition Fulfillment Service Unit as providers of free nutritious meals. Using simulation data from five kitchens and ten schools across 50 stochastic replications, the IT-enhanced model achieved a 28% reduction in transportation cost, and the total objective value declined by 22% compared with the scenario without IT support. These results demonstrate that digital route planning and inventory monitoring not only reduce operational expenses but also mitigate organic waste, ensure fresher meal delivery, and strengthen the nutritional impact of the program, aligning with broader sustainability targets. The findings provide empirical evidence that digital transformation can significantly enhance the performance of public food programs and offer practical insights for policymakers seeking to replicate these strategies in similar urban initiatives.
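The baseline-versus-IT comparison above rests on a transport-style linear program. As a minimal, hypothetical sketch, with two kitchens and three schools and invented costs (not the paper’s five-kitchen, ten-school data), the cost-minimizing meal assignment can be computed with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 kitchens supplying 3 schools (the study uses 5 and 10).
cost = np.array([[4.0, 6.0, 9.0],    # per-meal transport cost from kitchen 0
                 [5.0, 3.0, 7.0]])   # per-meal transport cost from kitchen 1
supply = np.array([120, 150])        # meals each kitchen can prepare
demand = np.array([80, 100, 90])     # meals each school needs

n_k, n_s = cost.shape
c = cost.ravel()  # decision variables x[k, s], flattened row-major

# Supply constraints: sum_s x[k, s] <= supply[k]
A_ub = np.zeros((n_k, n_k * n_s))
for k in range(n_k):
    A_ub[k, k * n_s:(k + 1) * n_s] = 1.0

# Demand constraints: sum_k x[k, s] == demand[s]
A_eq = np.zeros((n_s, n_k * n_s))
for s in range(n_s):
    A_eq[s, s::n_s] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
print(f"optimal transport cost: {res.fun:.1f}")
```

Scaling the cost matrix, supply, and demand arrays to the real network, then re-solving with IT-adjusted route costs, would reproduce the kind of with/without-IT comparison the study reports.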
Data-Driven Evaluation of a Gamified Breath-Holding Training Application to Improve CT Scan Quality and Reduce Patient Anxiety P, Vinoth Kumar; M, Ganga; K, Vijayakumar; K, Umamaheswari; Devarajan, Gunapriya; Batumalay, M
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.804

Abstract

This study presents the development and evaluation of Breathe Well, an innovative three-tiered Graphical User Interface (GUI) application designed to address motion-induced step artifacts and patient anxiety during Computed Tomography (CT) scans. The core idea of the application is to combine relaxation techniques, guided breathing exercises, and gamified training modules within a single interactive platform that allows patients to practice breath-holding and anxiety control prior to scanning. The objective is to enhance patient cooperation, reduce involuntary movement, and improve overall image quality while minimizing the time healthcare staff spend on manual breath-hold instruction. The study involved a comparative analysis between a control group and an intervention group trained using the Breathe Well system. Quantitative results demonstrated a significant improvement in imaging outcomes, with the mean artifact score decreasing from 3.1 ± 0.8 in the control group to 2.1 ± 0.7 in the intervention group (p < 0.01). Psychological assessment using the State-Trait Anxiety Inventory (STAI) revealed a marked reduction in patient anxiety, with mean scores declining from 48.6 ± 6.4 before training to 38.2 ± 5.8 after using the application (p < 0.01). Qualitative feedback further confirmed increased patient confidence, comfort, and comprehension of CT procedures. The findings indicate that integrating gamified digital interventions into pre-scan preparation significantly improves both patient experience and diagnostic precision. The novelty of this research lies in the creation of a self-guided, multi-level digital platform that bridges behavioral training and imaging technology, offering a scalable, patient-centered solution for modern radiology workflows.
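The reported group comparisons are standard two-sample mean tests. As a hedged illustration, the snippet below draws synthetic artifact scores matching the published summary statistics (the study’s actual patient data are not public, and the group size of 40 is an assumption) and runs a Welch t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated artifact scores matching the reported mean ± SD; these are
# illustrative stand-ins, not the study's measurements. n=40 per group
# is an assumed sample size.
control = rng.normal(loc=3.1, scale=0.8, size=40)
intervention = rng.normal(loc=2.1, scale=0.7, size=40)

# Welch's t-test (no equal-variance assumption) for the group difference.
t_stat, p_value = stats.ttest_ind(control, intervention, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```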
Hybrid Transformer-XGBOOST Model Optimized with Ant Colony Algorithm for Early Heart Disease Detection: A Risk Factor-Driven and Interpretable Method Pratama, Moch Deny; El Hakim, Faris Abdi; Aditia Syahputra, Dimas Novian; Dermawan, Dodik Arwin; Asmunin, Asmunin; Nudin, Salamun Rohman; Nurhidayat, Andi Iwan
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.969

Abstract

Cardiovascular diseases (CVDs) remain the leading cause of death worldwide, with significant socioeconomic consequences due to premature death and chronic disability. Although clinical screening techniques have evolved, early and accurate prediction of heart disease remains only partially achieved, owing to the limited capacity of conventional machine learning algorithms to model the complex nonlinear interactions among contributing risk factors, e.g., hypertension, diabetes, hyperlipidemia, and genetic predisposition. To address these challenges, this research introduces a hybrid framework that combines the Transformer architecture, known for its robust self-attention mechanism and high representational capability, with Ant Colony Optimization (ACO), a nature-inspired metaheuristic modeled on the foraging behavior of ants, to enable adaptive and efficient hyperparameter optimization. The proposed model processes structured clinical data by encoding categorical variables into embeddings and normalizing numerical features, yielding a unified tabular representation suitable for transformer-based analysis. ACO improves model efficiency by optimizing key parameters, e.g., embedding configuration, learning rate, and depth, reducing manual intervention and computational overhead. The proposed Hybrid Transformer-ACO model focuses on interpretable clinical features to provide actionable risk stratification. Model evaluation used classification metrics, e.g., accuracy, precision, recall, and F1-score, together with time complexity, to measure predictive performance and computational efficiency during the training and inference phases. These criteria provide evidence of the model's diagnostic reliability, generalizability, and practical feasibility for clinical application. The model achieved 100% accuracy, sensitivity, specificity, and F1-score, outperforming several baseline models. Time complexity analysis demonstrated efficient training and testing, while the model's interpretability supports transparency and trust.
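The ACO component can be illustrated with a toy pheromone-guided search over a discrete hyperparameter grid. The `score` function below is a hypothetical stand-in for a validation metric (training the paper’s Transformer is far too heavy to reproduce here), and the grid values are invented:

```python
import random

# Toy ACO over a discrete hyperparameter grid (invented values).
grid = {
    "embed_dim": [32, 64, 128],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "depth": [2, 4, 6],
}

def score(cfg):
    # Hypothetical objective peaking at embed_dim=64, lr=1e-3, depth=4;
    # a real run would train and validate the model here instead.
    target = {"embed_dim": 64, "learning_rate": 1e-3, "depth": 4}
    return sum(cfg[k] == v for k, v in target.items())

random.seed(42)
pheromone = {k: [1.0] * len(v) for k, v in grid.items()}
evaporation, n_ants, n_iters = 0.5, 8, 20
best_cfg, best_score = None, float("-inf")

for _ in range(n_iters):
    for _ in range(n_ants):
        # Each ant samples one option per dimension, weighted by pheromone.
        idx = {k: random.choices(range(len(v)), weights=pheromone[k])[0]
               for k, v in grid.items()}
        cfg = {k: grid[k][i] for k, i in idx.items()}
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
        # Deposit pheromone proportional to the achieved score.
        for k, i in idx.items():
            pheromone[k][i] += s
    # Evaporate to keep exploration alive.
    for k in pheromone:
        pheromone[k] = [p * evaporation for p in pheromone[k]]

print(best_cfg, best_score)
```

The per-dimension pheromone trails mean that even partially correct configurations reinforce the dimensions they got right, which is what lets ACO converge faster than uniform random search on this kind of grid.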
Hybrid Deep Learning for Image Authenticity: Distinguishing Between Real and AI-Generated Images Wella, Wella; Suryasari, Suryasari; Desanti, Ririn Ikana
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.991

Abstract

The increasing use of artificially generated images raises significant concerns about the authenticity of digital content. This study introduces a hybrid deep learning model for binary classification of real and generated images by combining spatial and relational features. The central idea is to integrate a convolutional backbone adapted from ResNet18 for visual feature extraction with a graph representation based on nearest-neighbor relations to capture inter-image similarities. The objective is to evaluate whether this dual-feature approach improves classification performance compared to single-feature baselines. Using a balanced dataset of 1,256 images (744 real and 512 generated), the model was trained on 70% of the data and tested on the remaining 30%. Experimental findings demonstrate that the model achieved an overall accuracy of 88%, with precision of 0.91 and recall of 0.89 for real images, and precision of 0.85 and recall of 0.87 for generated images. The corresponding F1 scores were 0.90 and 0.86, yielding a macro average F1 of 0.88. Confusion matrix analysis shows balanced misclassification across both classes, while stable performance across epochs indicates reliable learning behavior. Results confirm that the hybrid model achieves stronger classification effectiveness than convolution-only or graph-only baselines. The novelty of this work lies in demonstrating that the integration of spatial and relational learning provides a more robust framework for detecting synthetic images than single-modality approaches. The contribution of this research is both methodological, in proposing a hybrid architecture that unifies convolutional and graph-based learning, and practical, in providing empirical evidence that such integration enhances the reliability of image authenticity verification. 
While the absence of a validation set limited hyperparameter optimization and early stopping, the findings indicate that this hybrid design offers a promising direction for improving the robustness and generalizability of synthetic image detection.
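The relational half of the hybrid model depends on a k-nearest-neighbor graph over image embeddings. A minimal sketch, with random vectors standing in for ResNet18 features (the backbone, dataset, and neighbor count here are assumptions, not the paper’s configuration):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(1)

# Stand-in for ResNet18 embeddings: 20 images, 512-dim feature vectors.
features = rng.normal(size=(20, 512))

# Sparse adjacency: each image linked to its 4 nearest neighbors by
# cosine distance -- the relational structure a graph layer would consume.
adj = kneighbors_graph(features, n_neighbors=4, metric="cosine",
                       mode="connectivity", include_self=False)

print(adj.shape, adj.nnz)  # (20, 20) with 20 * 4 stored edges
```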
Comparative Study of CNN-Based Architectures for Early Brain Tumor Diagnosis D, Lakshmi; C, Pragash; Batumalay, Malathy; R, Karthick Manoj
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.920

Abstract

This study presents a comprehensive comparative analysis of Convolutional Neural Network (CNN)-based deep learning architectures for early brain tumor detection and classification using multi-modal medical imaging. The primary objective is to evaluate and integrate advanced deep neural network models, including EfficientNet-B2, VGG16, U-Net, and a hybrid CNN-LSTM, to enhance diagnostic accuracy, precision, and robustness. The proposed framework involves five key stages: image acquisition from MRI, CT, PET, and ultrasound modalities; preprocessing through normalization, skull stripping, noise reduction, and registration; segmentation of tumor regions; feature extraction; and classification using optimized deep learning algorithms. Experimental evaluation demonstrates that the hybrid CNN-LSTM model achieved the highest overall performance, with an accuracy of 98.81%, precision of 98.90%, recall of 98.90%, and F1-score of 99%. The EfficientNet-B2 model followed closely with 98.73% accuracy, 98.73% precision, 99.13% recall, and 98.79% F1-score, confirming its strength in efficient feature utilization and computational scalability. In contrast, VGG16 and U-Net achieved accuracies of 93.27% and 88%, respectively, indicating limited adaptability to complex tumor morphologies. The findings reveal that CNN-based hybrid models outperform traditional architectures by effectively capturing both spatial and temporal dependencies in MRI data, leading to improved interpretability and clinical reliability. The novelty of this research lies in its methodological integration of convolutional and recurrent layers within a unified diagnostic framework, establishing a reproducible, high-performance model for early brain tumor detection. 
The study contributes to the advancement of intelligent medical imaging systems by demonstrating that hybrid deep learning architectures can significantly reduce diagnostic uncertainty and enable more precise, automated clinical decision support for early intervention.
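The accuracy, precision, recall, and F1 figures quoted above are standard classification metrics. On a small invented label set (0 = no tumor, 1 = tumor; the numbers are illustrative, not the study’s test set), they can be computed with `sklearn.metrics`:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Illustrative labels only; the paper's ~99% scores come from its own data.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"precision: {precision_score(y_true, y_pred):.2f}")
print(f"recall:    {recall_score(y_true, y_pred):.2f}")
print(f"f1-score:  {f1_score(y_true, y_pred):.2f}")
```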
MYCD: Integration of YOLO-CNN and DenseNet for Real-Time Road Damage Detection Based on Field Images Yenni, Helda; Muzawi, Rometdo; Karpen, Karpen; Anam, M. Khairul; Kasaf, Michel; Hadi, Tjut Rizqi Maysyarah; Wahyuni, Dewi Sari
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1040

Abstract

Road damage such as cracks, potholes, and uneven surfaces poses serious risks to transportation safety, logistics efficiency, and maintenance budgeting in Indonesia. Manual inspection is time-consuming, labor-intensive, and prone to error, motivating the use of reliable computer vision solutions. This study proposes MYCD, a hybrid and mobile-ready architecture that combines the fast detection ability of YOLO with the dense feature reuse of DenseNet, enhanced by the Convolutional Block Attention Module (CBAM) for spatial and channel focus and Spatial Pyramid Pooling (SPP) for multi-scale context understanding. The system detects and classifies the severity of road damage into minor, moderate, and severe categories using images captured by standard cameras. MYCD was trained and validated on 1,120 field images using an 80/20 split to simulate realistic deployment. Validation achieved 64% accuracy, with the highest per-class precision of 0.72 for minor damage and mAP@0.5 = 0.677. The confusion matrix showed that most errors occurred in the moderate category because of visual similarity with minor and severe damage. Unlike earlier studies that extended YOLO with heavy backbones such as ResNet or EfficientNet, MYCD focuses on feature propagation (DenseNet), attention precision (CBAM), and multi-scale fusion (SPP) optimized for real-time operation on standard hardware. Efficiency profiling confirmed its deployability. After compression, the model size is 46.8 MB and it requires 3.7 GFLOPs per inference at 640×640 resolution. On a mid-range Android device (Snapdragon 778G, 8 GB RAM), MYCD runs at 19 frames per second with 1.2 GB peak memory. Compared with YOLOv8-WD (68 MB; 5.2 GFLOPs), MYCD reduces computation by 31% while maintaining similar accuracy. Overall, MYCD achieves a practical balance of speed, accuracy, and efficiency, providing a deployable and reproducible framework for real-time road damage detection in resource-limited settings.
Data-Driven Forecasting of Special Education Enrollment: An Explainable Machine Learning Approach Castro, Raul Alberto Garcia; Paucar, Wildon Rojas; Garces, Elena Miriam Chavez; Pérez-Mamani, Rubens Houson
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1046

Abstract

The application of machine learning algorithms in the field of special education remains incipient despite advances achieved in other sectors. This field faces challenges related to inclusion, planning, and resource allocation, especially in contexts where administrative records are often underutilized for analytical purposes. This study proposes an explainable forecasting approach based on 23,905 historical data records to anticipate educational demand in the Special Basic Education (SBE) modality, aiming to develop and validate a Random Forest model applied to a multivariate database of official enrollment records from 2019 to 2024, projecting a slight global contraction from 28,000 to 26,800 enrollments by 2025. The findings reveal nonlinear growth patterns differentiated by region and educational level, mainly in Early SBE (ages 0 to 2), Preschool, and Primary, with a general trend of increasing demand in coastal and highland regions. The models achieved high levels of accuracy (R² > 0.97), with a Root Mean Squared Error (RMSE) below 190, a Mean Absolute Error (MAE) under 70, and a Mean Absolute Percentage Error (MAPE) below 10%. These results demonstrate the model’s utility as a strategic decision-support tool by optimizing resource planning in an education system characterized by territorial heterogeneity. The novelty of this study lies in integrating geospatial analysis and predictive algorithmic interpretability within an explainable artificial intelligence framework, fostering more equitable, transparent, and evidence-based educational planning.
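The evaluation pipeline (fit a Random Forest, predict, then compute RMSE/MAE/MAPE) can be sketched on synthetic data. The predictors and target below are invented stand-ins for the enrollment records, so the resulting scores are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for enrollment records: three predictors
# (e.g. year, region code, level code) and a noisy nonlinear target.
X = rng.uniform(0, 10, size=(500, 3))
y = 100 + 20 * X[:, 0] + 5 * np.sin(X[:, 1]) * X[:, 2] + rng.normal(0, 2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
model = RandomForestRegressor(n_estimators=200, random_state=7).fit(X_tr, y_tr)
pred = model.predict(X_te)

# The three error metrics reported in the abstract.
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
mae = np.mean(np.abs(y_te - pred))
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
print(f"RMSE={rmse:.1f}  MAE={mae:.1f}  MAPE={mape:.2f}%")
```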
Time Series Forecasting of Environmental Dynamics in Urban Ecotourism Forest Using Deep Learning Iskandar, Ade Rahmat; Suroso, Arif Imam; Hermadi, Irman; Prasetyo, Lilik Budi
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1029

Abstract

Time series forecasting of environmental dynamics in urban forests is challenging unless new approaches such as deep learning and remote sensing are employed. Deep learning-based time series algorithms offer robust scientific capabilities for forecasting and assessing sustainability trends using sequential data. Among these, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bidirectional LSTM (BiLSTM) have gained widespread adoption across various predictive modeling domains. In the present research, these algorithms are employed to analyze urban forest raster data derived from the Srengseng Ecotourism Forest, located in West Jakarta, Indonesia. The present study focuses on predicting the temporal patterns of key spatial indicators: Normalized Difference Vegetation Index (NDVI), Land Surface Temperature (LST), and Forest Cover Density (FCD) in the Srengseng urban ecotourism forest area, spanning the years 2014 to 2024, through the application of LSTM, GRU, and BiLSTM deep learning architectures. The methodology used in this study is a combined approach involving remote sensing and deep learning. Spatial data were acquired through the delineation of a high-precision polygon of Srengseng Urban Forest using Google Earth Pro and Google Earth Engine (GEE). GeoTIFF datasets of NDVI, LST, and FCD for the years 2014–2024 were processed using Python-based modeling scripts. Model performance was evaluated through a comparative analysis of LSTM, GRU, and BiLSTM in predicting temporal trends in these ecological indicators. The results show that the Bidirectional LSTM (BiLSTM) consistently demonstrated superior performance in predicting all of the spatial data, with scores of 0.94 for NDVI, 0.90 for FCD, and 0.85 for LST. It was followed by LSTM, which predicted NDVI (0.87), FCD (0.89), and LST (0.83), and by GRU, which estimated NDVI (0.86), FCD (0.89), and LST (0.85). BiLSTM thus outperformed the predictive accuracy of both the standard LSTM and GRU models.
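Before any of the recurrent models can be trained, the annual raster summaries must be windowed into supervised (samples, timesteps, features) arrays. A minimal sketch with hypothetical mean-NDVI values for 2014–2024 (the real inputs are GEE-derived rasters, and the lookback of 3 is an assumption):

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int):
    """Turn a 1-D yearly series into supervised (X, y) pairs: each X row
    holds `lookback` consecutive values and y is the next value -- the
    input shape an LSTM/GRU/BiLSTM layer expects."""
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., np.newaxis], y  # (samples, timesteps, 1 feature)

# Hypothetical annual mean-NDVI values for 2014-2024 (11 observations).
ndvi = np.array([0.61, 0.63, 0.60, 0.64, 0.66, 0.65,
                 0.68, 0.67, 0.70, 0.69, 0.71])
X, y = make_windows(ndvi, lookback=3)
print(X.shape, y.shape)  # (8, 3, 1) (8,)
```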
Gamified Digital Intervention to Reduce Online Game Gambling Tendency among Youth: A TAM–SDT Evaluation Yulyanto, Yulyanto; Kurniadi, Erik; Husen, Dede; Yusuf, Fahmi
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1095

Abstract

The rapid growth of online gaming has raised concerns about addictive behaviors among young people, particularly with the emergence of loot boxes that resemble gambling mechanisms. This study aims to examine the effectiveness of a gamification-based application as a preventive intervention for online gambling game addiction and to evaluate user acceptance through an extended Technology Acceptance Model (TAM). The research was conducted in two stages. In the pre-test phase, 588 respondents aged 15–25 completed a questionnaire measuring impulsivity, Internet Gaming Disorder (IGD), and loot box exposure. The results identified 169 individuals (28.7%) with addictive tendencies. In the intervention phase, 86 respondents from this group participated in a gamified stimulation program using a specially designed application. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The measurement model met reliability and validity requirements. Structural model analysis confirmed the classic TAM relationships: perceived ease of use significantly influenced perceived usefulness (β = .559, p < 0.001), perceived usefulness influenced attitude toward use (β = .385, p = 0.001), and attitude influenced behavioral intention (β = .461, p < 0.001). In addition, Self-Determination Theory (SDT) significantly affected both attitude (β = .360, p = 0.003) and behavioral intention (β = .166, p = 0.038). However, affective visual design (AVD) was not significant, and behavioral intention did not reduce addictive behavior (β = -0.109, p = 0.386). The model demonstrated predictive relevance for TAM constructs (Q² > 0) but failed to predict addictive behavior (Q² = -0.003). This study contributes theoretically by extending TAM with SDT in the context of digital health interventions and practically by demonstrating the potential of gamification as a preventive tool.
However, the short intervention period and limited sample size constrained its effectiveness in reducing addiction. Longer-term interventions and broader contextual factors are recommended for future research.
Applied Data Science for Testing the Impact of Intangible Resources on Business Performance of SMEs Hac, Le Dinh; Tam, Phan Thanh
Journal of Applied Data Sciences Vol 7, No 1: January 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i1.1137

Abstract

This study investigates how intangible resources influence business performance in small and medium-sized enterprises in Vietnam by applying structural equation modeling within an applied data science framework. The research aims to clarify the direct and indirect mechanisms through which key intangible components, such as human capital, structural capital, relationship capital, organizational culture, and brand image, shape enterprise outcomes. It also examines the mediating role of creative innovation and the moderating influence of operating time. The study employs a mixed-method design, beginning with qualitative interviews with enterprise managers to validate and refine measurement constructs, followed by a quantitative survey of managers in two major economic regions in Vietnam. Data were analyzed using an advanced structural modeling approach to assess reliability, validity, and the strength of causal relationships. The findings demonstrate that intangible resources act as a robust foundation for firm success, exerting strong positive effects on both creative innovation and overall business performance. All five resource dimensions significantly contribute to the higher-order construct, with relationship capital and organizational culture emerging as the most influential drivers. Creative innovation partially mediates the relationship between intangible resources and business performance, illustrating how firms convert knowledge-based assets into tangible outcomes through idea generation and implementation. Further, operating time strengthens this relationship, indicating that more established firms leverage their intangible foundations more effectively. The study contributes to ongoing discussions on resource-based competitiveness by extending theoretical perspectives to an emerging market context. It highlights the strategic importance of cultivating intangible resources to foster innovation capability and sustain long-term performance. 
The results offer practical implications for managers and policymakers seeking to develop knowledge-driven and innovation-oriented enterprises in dynamic economic environments.