Contact Name
Teguh Wiyono
Contact Email
indexsasi@apji.org
Phone
+6285700037105
Journal Mail Official
indexsasi@apji.org
Editorial Address
Jalan Watunganten 1 No 1-6, Batursari, Mranggen Kab. Demak Jawa Tengah 59567
Location
Kab. Demak,
Jawa Tengah
INDONESIA
Global Science: Journal of Information Technology and Computer Science
ISSN: 3108-9976     EISSN: 3108-9968     DOI: 10.70062
Core Subject: Science
Global Science: Journal of Information Technology and Computer Science is a journal intended for the publication of scientific articles, published by the International Forum of Researchers and Lecturers. The journal contains studies in the fields of Information Technology and Computer Science, both theoretical and empirical, and is published four times a year (March, June, September, and December).
Articles in issue "Vol. 2 No. 1 (2026): March": 5 Documents
Benchmarking Machine Learning Models for Large-Scale Loan Default Prediction Using Real Data
Devianto, Yudo; Saragih, Rusmin; Cahyana, Yana
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (2026): March
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.181

Abstract

This research benchmarks multiple machine learning (ML) algorithms for large-scale loan default prediction using a real-world dataset of 255,000 borrower records, in which default cases represent only ~9–12% of total observations. The study addresses the persistent gap in comparative analyses of ML models that balance predictive accuracy, interpretability, and computational efficiency for credit risk assessment. Six algorithmic families plus a Stacked Ensemble were evaluated: Logistic Regression, Random Forest, XGBoost, LightGBM, CatBoost, and Artificial Neural Networks (ANN), all under standardized preprocessing, hybrid imbalance handling (SMOTE, class weighting, under-sampling), and comprehensive evaluation metrics (AUC, F1, Recall, Precision, PR-AUC, and Brier Score). Empirical results show that Logistic Regression achieved the highest AUC of 0.732, outperforming nonlinear models under the baseline configuration, while LightGBM attained perfect recall (1.0) but low precision (0.116), indicating over-prediction of defaults. Gradient boosting models demonstrated robust calibration (Brier ≈ 0.114–0.116) and the best computational efficiency, with LightGBM showing the fastest training and lowest memory use. CatBoost exhibited strong recall but the slowest computation, and ANN underperformed on tabular data (AUC ≈ 0.56). The Stacked Ensemble delivered balanced results with AUC = 0.664 and improved overall stability. These findings confirm that boosting-based models, particularly LightGBM and CatBoost, offer superior scalability and calibration, whereas Logistic Regression remains a valuable interpretable baseline. The study concludes that effective default prediction requires integrating rebalancing, calibration, and threshold optimization to enhance recall and operational deployment reliability in large-scale credit ecosystems.
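The recall/precision trade-off the abstract reports (LightGBM's recall of 1.0 against precision of 0.116) comes down to where the decision threshold sits on an imbalanced dataset. A minimal sketch, with made-up toy probabilities and thresholds (not the paper's data, models, or code), illustrating how lowering the threshold buys recall at the cost of precision:

```python
# Hedged sketch: threshold choice trades recall against precision on an
# imbalanced toy set. Probabilities below are invented for illustration.

def confusion(y_true, y_prob, threshold):
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    fn = sum(1 for y, p in zip(y_true, y_prob) if p < threshold and y == 1)
    return tp, fp, fn

def precision_recall(y_true, y_prob, threshold):
    tp, fp, fn = confusion(y_true, y_prob, threshold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def brier(y_true, y_prob):
    # Mean squared error between predicted probability and actual outcome.
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# ~20% positives here; the paper's real data is even more skewed (~9-12%).
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0] * 3
y_prob = [0.8, 0.35, 0.3, 0.1, 0.2, 0.4, 0.1, 0.2, 0.1, 0.3] * 3

# A very low threshold flags everything: perfect recall, poor precision
# (the over-prediction pattern the abstract describes for LightGBM).
p_lo, r_lo = precision_recall(y_true, y_prob, 0.05)
# A conservative threshold misses defaults but is precise when it fires.
p_hi, r_hi = precision_recall(y_true, y_prob, 0.5)
```

This is why the abstract pairs threshold optimization with calibration: a well-calibrated probability (low Brier score) only helps if the operating threshold is tuned to the cost of missed defaults versus false alarms.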
Interpretable Feature Interaction Mining in High-Dimensional Clinical Data Using Hybrid Tree–Neural Models
Widiastuti, Tiwuk; Richard, Berlien; Maryo Indra, Manjaruni
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (2026): March
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.182

Abstract

High-dimensional clinical data exhibit complex, non-linear relationships among patient attributes, where outcomes are often influenced by feature interactions rather than isolated variables. However, many existing machine learning models prioritize predictive performance while providing limited interpretability and insufficient insight into interaction structures. This study addresses that limitation by developing an interpretable and robust framework for feature interaction mining in clinical data. We propose a hybrid tree–neural modeling framework that explicitly captures and ranks feature interactions while maintaining stable predictive performance. Tree-based ensemble models are employed to identify non-linear interaction patterns, while neural representations enhance learning flexibility and generalization. The framework integrates interaction importance analysis, cross-validation-based stability assessment, and evaluation across multiple data splits to ensure robustness and interpretability. Experiments conducted on a real-world high-dimensional clinical dataset demonstrate that the proposed approach achieves consistent predictive performance, with AUC values ranging from 0.628 to 0.641 across five cross-validation folds (mean AUC ≈ 0.633). Performance remains stable under varying train–test splits, indicating strong generalizability. Interaction analysis reveals that a small number of dominant feature interactions (such as age combined with length of hospital stay, and medication count combined with diagnostic information) consistently contribute to model predictions, appearing in over 80% of validation folds. Ablation studies further confirm that removing interaction-aware components leads to noticeable performance degradation, highlighting their importance. In conclusion, this study demonstrates that explicit feature interaction modeling enhances interpretability, stability, and generalization in clinical prediction tasks. The proposed hybrid framework provides a reliable foundation for developing trustworthy and transparent clinical decision-support systems.
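The "appearing in over 80% of validation folds" criterion is a fold-stability filter: an interaction only counts as dominant if it ranks highly in most cross-validation folds. A minimal sketch of that idea, with invented fold rankings and feature names (the paper's actual ranking method and data are not reproduced here):

```python
# Hedged sketch of a fold-stability filter: keep only feature-interaction
# pairs that appear in the top ranks of at least min_frac of CV folds.
# Fold contents below are illustrative, not the study's results.
from collections import Counter

def stable_interactions(fold_rankings, min_frac=0.8):
    """fold_rankings: per-fold lists of top-ranked (feature_a, feature_b) pairs."""
    counts = Counter(pair for fold in fold_rankings for pair in set(fold))
    n_folds = len(fold_rankings)
    return {pair for pair, c in counts.items() if c / n_folds >= min_frac}

# Five folds; ("age", "length_of_stay") appears in every fold, echoing
# the kind of dominant interaction the abstract reports.
folds = [
    [("age", "length_of_stay"), ("med_count", "diagnosis")],
    [("age", "length_of_stay"), ("med_count", "diagnosis")],
    [("age", "length_of_stay"), ("age", "med_count")],
    [("age", "length_of_stay"), ("med_count", "diagnosis")],
    [("age", "length_of_stay"), ("med_count", "diagnosis")],
]
stable = stable_interactions(folds)
```

Filtering on cross-fold frequency rather than a single global ranking is what makes the reported interactions robust: a pair that dominates one fold but vanishes in the others is treated as noise.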
Transparent AI for Welfare Programs: Explainable Fraud Detection Using Publicly Available Administrative Data
Sutrisno, Sutrisno; Winny, Purbaratri
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (2026): March
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.184

Abstract

This study examines the application of Transparent Artificial Intelligence (AI) for fraud detection in public welfare programs using publicly available administrative data. Persistent challenges in welfare governance, such as misallocation, fraud, and data inaccuracy, necessitate analytical frameworks that are both effective and explainable. The research aims to design and evaluate an interpretable anomaly detection system capable of identifying irregularities in welfare distribution while maintaining transparency and accountability. Methodologically, the study employs two unsupervised models, Isolation Forest (IF) and Local Outlier Factor (LOF), to detect anomalies in sub-district-level welfare data, incorporating features such as population size, number of beneficiaries, and coverage ratio. An Explainable AI (XAI) framework integrating surrogate Random Forests, Permutation Feature Importance (PFI), and local linear surrogates (LIME-like) is applied to ensure interpretability of both global and local model behaviors. Findings reveal that receivers per 1000 population and percentage coverage are dominant determinants of anomaly scores. Fifteen administrative units were flagged for potential inconsistencies, suggesting over- or under-reporting of beneficiaries. Cross-validation between the IF and LOF models confirmed consistency in identifying anomalous regions. The integrated XAI explanations enhance transparency, enabling policymakers and auditors to trace the rationale behind detected anomalies. In conclusion, the proposed Transparent AI framework demonstrates that combining anomaly detection with interpretability tools can strengthen accountability and fairness in welfare administration. It offers a reproducible, ethical, and data-driven approach to social program monitoring, reinforcing public trust and supporting responsible AI governance.
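The features the abstract names (receivers per 1000 population, percentage coverage) are simple ratios, and the anomaly-flagging step can be sketched with a plain z-score cutoff standing in for the Isolation Forest and LOF models the paper actually uses. Everything below, including the sub-district figures, is invented for illustration:

```python
# Hedged sketch: the coverage-ratio features named in the abstract, with a
# z-score outlier flag as a simple stand-in for Isolation Forest / LOF.
# District figures are made up; the last one deliberately over-reports.

def coverage_features(population, receivers):
    per_1000 = receivers / population * 1000  # receivers per 1000 population
    pct = receivers / population * 100        # percentage coverage
    return per_1000, pct

def zscore_flags(values, cutoff=1.5):
    """Flag values more than `cutoff` population standard deviations from the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    return [abs(v - mean) / std > cutoff for v in values]

# (population, beneficiary count) per hypothetical sub-district.
districts = [(10_000, 900), (12_000, 1_100), (9_000, 850),
             (11_000, 1_000), (10_500, 6_000)]
per_1000 = [coverage_features(p, r)[0] for p, r in districts]
flags = zscore_flags(per_1000)
```

Isolation Forest and LOF generalize this idea to multivariate, density-based scoring, but the auditing logic is the same: a unit whose coverage ratio is far from its peers is queued for human review rather than automatically labeled fraudulent.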
Toward Explainable AI for Cybersecurity: A NIST-Based Knowledge Graph for Transparent Semantic Reasoning
Pratama, Firman; Dahil, Irlon; Dien, Marion Erwin; Lase, Dewantoro
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (2026): March
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.186

Abstract

Explainable artificial intelligence (XAI) has become a critical requirement in cybersecurity due to the high-stakes nature of security decision-making and the limitations of black-box learning models. This study investigates the construction of an explainable cybersecurity knowledge representation by leveraging standardized terminology from the NIST cybersecurity glossary. The primary problem addressed is the lack of transparent and semantically grounded reasoning mechanisms in existing AI-driven cybersecurity systems, which limits trust, accountability, and analyst adoption. To address this challenge, we propose a NIST-based semantic knowledge graph that embeds explainability directly into its ontology structure and reasoning process. The proposed framework systematically extracts definitional entities and relations from NIST glossary entries to construct a domain ontology and a multi-relational knowledge graph. A rule-based semantic relation extraction method is employed to ensure faithful, interpretable, and reproducible reasoning paths. The resulting knowledge graph contains over 3,000 cybersecurity concepts and approximately 27,000 semantic relations, covering hierarchical, associative, dependency, and mitigation semantics. Experimental evaluation demonstrates that the proposed approach achieves a high level of explainability, with 92.4% of reasoning outcomes being fully traceable and only 1.4% classified as non-traceable. Most explainable reasoning paths are limited to two or three hops, indicating an effective balance between inferential depth and human interpretability. Structural analysis further confirms the presence of meaningful hub concepts that support multi-hop semantic inference. These results confirm that ontology-driven, standard-based knowledge graphs provide a robust foundation for explainable cybersecurity intelligence. 
The study concludes that explainability-by-design, grounded in authoritative standards, offers a viable and trustworthy alternative to opaque AI models for cybersecurity applications.
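The "two or three hop" traceability finding can be made concrete with a bounded-depth search over a small multi-relational graph. The toy concepts and relations below are invented for illustration and are not taken from the NIST glossary or the paper's graph:

```python
# Hedged sketch: bounded-hop breadth-first search over a toy
# multi-relational knowledge graph, illustrating the traceable
# 2-3 hop reasoning paths the abstract reports. All content invented.
from collections import deque

def reasoning_path(graph, src, dst, max_hops=3):
    """Return the shortest node chain from src to dst within max_hops, else None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        if len(path) - 1 >= max_hops:
            continue  # hop budget exhausted on this branch
        for _relation, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Edges are (relation, target) pairs, mimicking a multi-relational graph.
graph = {
    "phishing": [("is_a", "social engineering")],
    "social engineering": [("mitigated_by", "security awareness training")],
    "security awareness training": [("part_of", "risk management")],
}
path = reasoning_path(graph, "phishing", "risk management")
```

Because every inference is a short, explicit chain of glossary-grounded edges, an analyst can audit each step, which is the "explainability-by-design" property the abstract argues for; paths longer than the hop budget are simply reported as non-traceable.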
From Cryptography To Risk: Network Topology Of Cybersecurity Knowledge
Simarmata, Simon; Boru, Meiton
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (2026): March
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.189

Abstract

Inconsistent terminology across cybersecurity frameworks undermines global governance and interoperability. The National Institute of Standards and Technology Cybersecurity Framework (NIST CSF 2.0) and ISO/IEC 27001:2022 share similar objectives but diverge semantically in defining risk, control, and resilience. This semantic gap causes difficulties in compliance mapping and automated policy translation.
Research Objectives: This study aims to analyze the semantic similarity and divergence between NIST and ISO/IEC 27000 terminologies, identify conceptual structures influencing interoperability, and propose an AI-assisted foundation for harmonizing cybersecurity language globally.
Methodology: A mixed-method semantic comparative design integrates Natural Language Processing (NLP) and ontology mapping. Using the nist_glossary.csv dataset and ISO vocabularies, terms were normalized and analyzed via cosine similarity over sentence-transformer embeddings. Ontological alignment was visualized through the Semantic Threat Graph (STG) and validated by certified experts using Cohen's Kappa reliability tests.
Results: Of 672 term pairs, 40.9% show high semantic equivalence, 38.8% partial overlap, and 20.3% semantic divergence. The strongest alignment appears in the "Protect" and "Identify" domains, while divergences occur in governance- and recovery-related terms. Ontology mapping revealed three conceptual clusters: Risk Governance, Technical Safeguards, and Organizational Readiness.
Conclusions: The findings confirm a 79.7% total semantic alignment, indicating strong potential for harmonizing global cybersecurity standards. The study contributes an empirical model combining computational linguistics and AI-based ontology mapping to establish semantic interoperability, enabling unified cybersecurity governance and AI-driven compliance automation.
Keywords: Semantic Interoperability; Ontology Mapping; Cybersecurity Frameworks; Terminology Alignment; AI Harmonization
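The core measurement here is cosine similarity between term embeddings, bucketed into the three bands the study reports (high equivalence, partial overlap, divergence). A minimal sketch with hand-made 3-dimensional vectors and illustrative thresholds; the paper itself uses high-dimensional sentence-transformer embeddings, and the exact band cutoffs are an assumption:

```python
# Hedged sketch: cosine similarity over toy term vectors, bucketed into
# three alignment bands. Vectors and the 0.8 / 0.5 thresholds are
# illustrative assumptions, not the study's actual embeddings or cutoffs.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def alignment_band(sim, hi=0.8, lo=0.5):
    if sim >= hi:
        return "high equivalence"
    if sim >= lo:
        return "partial overlap"
    return "divergent"

# Toy embeddings: a NIST term against two hypothetical ISO candidates.
nist_risk = [0.9, 0.1, 0.3]
iso_risk = [0.85, 0.15, 0.35]     # near-synonym, points the same way
iso_resilience = [0.1, 0.9, 0.2]  # different concept, nearly orthogonal

band_close = alignment_band(cosine(nist_risk, iso_risk))
band_far = alignment_band(cosine(nist_risk, iso_resilience))
```

Running every NIST/ISO term pair through this scoring and counting the bands is, in outline, how the 40.9% / 38.8% / 20.3% split over 672 pairs would be produced.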

Page 1 of 1 | Total Records: 5