Contact Name
Teguh Wiyono
Contact Email
indexsasi@apji.org
Phone
+6285700037105
Journal Mail Official
indexsasi@apji.org
Editorial Address
Jalan Watunganten 1 No 1-6, Batursari, Mranggen Kab. Demak Jawa Tengah 59567
Location
Kab. Demak,
Jawa Tengah
INDONESIA
Global Science: Journal of Information Technology and Computer Science
ISSN : 3108-9976     EISSN : 3108-9968     DOI : 10.70062
Core Subject : Science
Global Science: Journal of Information Technology and Computer Science is a journal intended for the publication of scientific articles, published by the International Forum of Researchers and Lecturers. The journal contains studies in the fields of Information Technology and Computer Science, both theoretical and empirical, and is published four times a year (March, June, September, and December).
Articles 24 Documents
Digital Twin-Driven Cybersecurity Risk Assessment Model for Industrial Internet of Things (IIoT) Networks in Manufacturing 4.0 Atika Mutiarachim; Royke Lantupa Kumowal; Nigar Aliyeva
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 2 (2025): June: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i2.175

Abstract

This study explores the development and application of a digital twin-driven cybersecurity risk assessment model for Industrial Internet of Things (IIoT) networks. The increasing complexity and interconnectivity of IIoT systems have expanded the attack surface, making them vulnerable to a wide range of cyber threats. The digital twin model addresses this challenge by creating real-time virtual replicas of physical systems, which can simulate and predict network vulnerabilities and attack vectors. The model uses machine learning algorithms and real-time data to simulate cyberattacks, including Distributed Denial of Service (DDoS), malware, and data breaches. By providing continuous monitoring and dynamic risk predictions, the digital twin model enhances the resilience of IIoT networks compared to traditional cybersecurity frameworks. The findings indicate that the model's ability to predict potential cyber threats and simulate various attack scenarios provides a more proactive and accurate approach to cybersecurity in IIoT environments. Additionally, the study highlights key mitigation strategies, including adaptive security mechanisms, real-time anomaly detection, and the use of lightweight encryption for resource-constrained devices. Despite its effectiveness, challenges such as computational requirements, integration with legacy systems, and scalability were identified. This research underscores the strategic importance of digital twin models in securing IIoT systems and advancing Manufacturing 4.0 ecosystems. Future research should focus on enhancing model accuracy, expanding its application to diverse industrial sectors, and improving interoperability with legacy systems to further strengthen the security posture of IIoT networks.
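The paper's machine-learning detectors are not reproduced here; as a minimal illustrative sketch of the continuous-monitoring idea described above, the following compares live IIoT telemetry against a digital twin's simulated baseline using a simple 3-sigma deviation score. The metric values and the threshold are invented for illustration, not taken from the study.

```python
import math

def risk_scores(baseline, observed):
    """Score each live reading by its deviation (in standard deviations)
    from the digital twin's simulated baseline; higher means riskier."""
    mean = sum(baseline) / len(baseline)
    var = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    std = math.sqrt(var) or 1.0
    return [abs(x - mean) / std for x in observed]

# Twin-simulated normal packet rates vs. live readings with a DDoS-like spike
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
live = [101, 99, 480, 100]
scores = risk_scores(baseline, live)
flagged = [i for i, s in enumerate(scores) if s > 3.0]   # 3-sigma rule
```

A real deployment would swap the z-score for the learned anomaly models the abstract describes, but the loop has the same shape: twin baseline in, live readings scored, outliers flagged for mitigation.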
Enhancing Cross-Organizational Healthcare Analytics Through Blockchain-Enabled Federated Learning Mutiara S. Simanjuntak; Aji Priyambodo; Elshad Yusifov
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 2 (2025): June: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i2.176

Abstract

This study explores the integration of blockchain technology with federated learning (FL) to enhance cross-organizational healthcare analytics while ensuring privacy and data security. Federated learning allows multiple institutions to collaboratively train machine learning models without sharing sensitive patient data. Instead, local data is used to train models, and only model parameters are exchanged. However, privacy concerns and data sharing inefficiencies have hindered broader healthcare collaboration. Blockchain, a decentralized ledger technology, addresses these concerns by ensuring data integrity and transparency, providing an immutable and tamper-proof record of all transactions. This study investigates how the combination of blockchain and federated learning can overcome these challenges, facilitating secure and efficient data sharing between healthcare institutions. The study uses synthetic multi-institution healthcare datasets to simulate real-world collaboration scenarios. The blockchain-enabled federated learning system ensures that no raw patient data is shared, significantly reducing the risk of privacy breaches while still allowing healthcare institutions to collaborate on predictive model development. The results show that while there is a slight decrease in model accuracy compared to centralized methods, the trade-off is outweighed by the privacy and security benefits. Blockchain’s integration ensures that model updates are transparent, enhancing trust between institutions and reducing concerns about data integrity. Moreover, the use of blockchain’s smart contracts automates and enforces compliance, further streamlining collaboration. This research contributes to the field by demonstrating how blockchain-integrated federated learning can create a secure, scalable, and privacy-preserving framework for collaborative healthcare analytics. 
The findings underscore the potential for this approach to enhance healthcare outcomes and improve decision-making across institutions while ensuring patient data protection.
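As a rough sketch of the two mechanisms the abstract combines, the snippet below averages model parameters across institutions (FedAvg-style) and records each update in a hash-chained, tamper-evident log. The parameter vectors and the single-chain design are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import json

def fedavg(updates):
    """Average model parameter vectors from several institutions (FedAvg)."""
    n = len(updates)
    return [sum(w[i] for w in updates) / n for i in range(len(updates[0]))]

def chain_block(prev_hash, update):
    """Append a model update to a hash-chained, tamper-evident log."""
    payload = json.dumps({"prev": prev_hash, "update": update}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Three hospitals share only parameter vectors, never raw patient records
local_updates = [[0.2, 1.0], [0.4, 0.8], [0.6, 1.2]]
global_model = fedavg(local_updates)
log = ["genesis"]
for update in local_updates:
    log.append(chain_block(log[-1], update))
```

Tampering with any logged update changes every downstream hash, which is the integrity property the blockchain layer contributes on top of federated averaging.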
Augmented Reality-Assisted Explainable AI Platform for Collaborative Design of Cyber-Physical Systems in Industrial Automation Anjun Dermawan; Efan Efan; Elay Yusifli Elshad
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 3 (2025): September: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i3.177

Abstract

The integration of Augmented Reality (AR) and Explainable AI (XAI) within Cyber-Physical Systems (CPS) design is transforming the industrial automation landscape. This study explores how combining AR’s immersive visualization with XAI’s decision transparency enhances collaborative design processes in CPS. The AR-XAI platform developed in this research improves team collaboration by offering real-time visual feedback and enabling interactive decision-making. The platform provides interpretable insights into AI-driven decisions, fostering trust among engineers and decision-makers. Key features of the platform include the ability to visualize complex CPS models, facilitating faster iterations, reducing design errors, and improving design accuracy. The integration of XAI ensures transparency in decision-making by offering clear explanations of AI predictions, which is essential for ensuring accountability and building trust in automated systems. Testing with industrial engineers confirmed that the AR-XAI platform significantly improved design outcomes, with a reduction in errors and enhanced team performance compared to traditional design methods. Moreover, the platform enabled faster decision-making and improved collaboration across diverse teams, demonstrating its potential to optimize CPS design workflows. This research provides valuable insights into the role of AR and XAI in advancing Industry 4.0 and paves the way for more advanced integrations of these technologies in industrial settings. Future research should focus on developing scalable solutions for various industrial applications and exploring more sophisticated AR-XAI integrations for emerging fields like smart cities and autonomous manufacturing.
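The platform itself is not described at code level; as a hypothetical illustration of the kind of decision transparency XAI provides, the sketch below attributes a linear model's prediction to its input features and ranks the contributions. The feature names and weights are invented.

```python
def explain_prediction(weights, features, names):
    """Attribute a linear model's score to each input feature and rank
    the contributions by magnitude."""
    contribs = sorted(((n, w * x) for n, w, x in zip(names, weights, features)),
                      key=lambda t: abs(t[1]), reverse=True)
    score = sum(c for _, c in contribs)
    return score, contribs

# Invented sensor features for a CPS design check
names = ["vibration", "temperature", "load"]
score, contribs = explain_prediction([0.8, -0.3, 0.5], [1.2, 0.4, 0.9], names)
```

In the AR-XAI setting such ranked contributions would be rendered next to the 3-D model element they concern, giving engineers the "why" behind each AI suggestion.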
Quantum-Inspired Meta-Blockchain Consensus Algorithm for Green Cloud Data Centers Optimizing Energy and Latency Trade-Offs Ricky Imanuel Ndaumanu; Suprayuandi Pratama; Gulay Yusifli Elshad
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 3 (2025): September: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i3.178

Abstract

The increasing demand for cloud computing services has led to the rapid expansion of cloud data centers, which consume significant amounts of energy and contribute substantially to global CO2 emissions. As the IT industry grows, the environmental impact of these data centers becomes an urgent concern. Green Cloud Computing (GCC) has emerged as a solution to mitigate this impact by focusing on energy efficiency and reducing carbon footprints while maintaining the necessary functionality and performance of cloud infrastructures. However, traditional blockchain consensus algorithms such as Proof of Work (PoW) and Proof of Stake (PoS) face limitations regarding energy consumption and scalability, which exacerbates the environmental burden. This study proposes a quantum-inspired blockchain consensus algorithm designed to optimize energy consumption and reduce latency in cloud data centers. By integrating quantum principles such as superposition and entanglement, the algorithm enhances task scheduling and resource utilization, enabling more energy-efficient operations without sacrificing performance. Simulations in a green cloud environment showed that the quantum-inspired algorithm resulted in up to a 30% reduction in energy usage compared to traditional consensus methods, with a 40% improvement in consensus processing time. These results suggest that quantum-inspired algorithms hold significant potential for enhancing the sustainability of cloud infrastructures by improving energy efficiency and scalability. Furthermore, this study discusses the feasibility of implementing quantum-inspired algorithms on classical hardware, addressing challenges in scalability and integration into existing blockchain frameworks. The findings provide valuable insights into the potential of quantum-inspired technologies to drive energy-efficient solutions in cloud computing.
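The paper's algorithm is not published in this listing; the sketch below only illustrates the general quantum-inspired pattern the abstract names: each task keeps a probability distribution (a "superposition") over servers, sampled schedules are "measured", and amplitudes are rotated toward lower-energy assignments. The cost values, rotation step, and round count are all invented.

```python
import random

def quantum_inspired_schedule(costs, rounds=200, seed=7):
    """Quantum-inspired placement: each task holds a probability
    distribution (a 'superposition') over servers; sampled schedules
    that lower total energy reinforce their amplitudes."""
    rng = random.Random(seed)
    n_tasks, n_servers = len(costs), len(costs[0])
    probs = [[1.0 / n_servers] * n_servers for _ in range(n_tasks)]
    best, best_cost = None, float("inf")
    for _ in range(rounds):
        # 'Measure' a concrete schedule from the current amplitudes
        sample = [rng.choices(range(n_servers), weights=p)[0] for p in probs]
        cost = sum(costs[t][s] for t, s in enumerate(sample))
        if cost <= best_cost:
            best, best_cost = sample, cost
            # Rotate amplitudes toward the improving schedule
            for t, s in enumerate(sample):
                probs[t] = [x * 0.9 for x in probs[t]]
                probs[t][s] += 0.1
    return best, best_cost

# Hypothetical energy cost of running task t on server s
costs = [[5, 1, 4], [2, 6, 3], [4, 4, 1]]
schedule, energy = quantum_inspired_schedule(costs)
```

This runs on classical hardware, which is exactly the feasibility point the abstract raises: "quantum-inspired" here means probabilistic amplitude bookkeeping, not quantum devices.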
CyberBERT: A Semantic Search Framework for Security Terminologies Using Transformer Models Sinaga, Rudolf; Frangky
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 4 (2025): December: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i4.179

Abstract

The rapid expansion of cybersecurity standards and threat intelligence frameworks has led to significant semantic fragmentation among security terminologies, hindering effective information retrieval and interoperability across systems. Traditional keyword-based search approaches are inadequate for capturing the contextual meaning of security terms, particularly within formal frameworks such as NIST, MITRE ATT&CK, and CWE. This study addresses this challenge by proposing CyberBERT, a transformer-based semantic search framework designed to align cybersecurity terminologies through deep contextual representation and ontology-driven reasoning. Research Objectives: The primary objective of this research is to develop a semantic retrieval model capable of understanding conceptual relationships between security terms beyond lexical similarity. Methodology: The proposed methodology fine-tunes a BERT-based model on the NIST Glossary corpus using a combination of masked language modeling and triplet loss objectives to generate discriminative semantic embeddings. These embeddings are further aligned with cybersecurity ontologies, including MITRE ATT&CK and CWE, to enhance semantic consistency and explainability. Semantic retrieval is performed using cosine similarity within a 768-dimensional embedding space and evaluated using Mean Reciprocal Rank (MRR) and Precision@K metrics. Results: Experimental results demonstrate that CyberBERT achieves an MRR of 0.832, outperforming domain-adapted baselines such as SecureBERT and CyBERT. The integration of ontology alignment improves semantic accuracy by over 6%, while robustness evaluations confirm resilience against adversarial linguistic perturbations. Visualization using t-SNE reveals coherent semantic clustering aligned with the five core NIST Cybersecurity Framework functions.
Conclusions: In conclusion, CyberBERT effectively bridges semantic gaps across cybersecurity terminologies by combining transformer-based contextual learning with ontological reasoning. The framework offers a robust, interpretable, and scalable solution for semantic search, supporting improved interoperability and knowledge discovery in cybersecurity operations and standards harmonization.
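CyberBERT's 768-dimensional embeddings and fine-tuning are beyond a short example, but the retrieval and evaluation steps named above (cosine-similarity ranking and Mean Reciprocal Rank) can be sketched with toy 3-d vectors standing in for the real embeddings; the terms and values are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mrr(ranked_lists, relevant):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant hit."""
    total = 0.0
    for ranking, rel in zip(ranked_lists, relevant):
        for rank, term in enumerate(ranking, start=1):
            if term == rel:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Toy 3-d vectors standing in for 768-d CyberBERT embeddings
terms = {"phishing": [0.9, 0.1, 0.0],
         "spearphishing": [0.85, 0.2, 0.05],
         "encryption": [0.0, 0.1, 0.95]}
query = [0.88, 0.15, 0.02]
ranking = sorted(terms, key=lambda t: cosine(query, terms[t]), reverse=True)
score = mrr([ranking], ["spearphishing"])   # relevant term sits at rank 2
```

The point of the contextual embeddings is visible even at this scale: "spearphishing" ranks near the query despite sharing no exact keyword, while "encryption" falls to the bottom.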
Mapping Public Sentiment on Generative AI via Twitter NLP and Topic Modeling Noronha, Marcelino Caetano; Dwiasnati, Saruni; Helena P Panjaitan, Cherlina
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 4 (2025): December: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i4.183

Abstract

The rapid diffusion of Generative Artificial Intelligence (AI) has intensified public debate regarding its benefits, risks, and societal implications. This study investigates public sentiment and thematic structures surrounding Generative AI by analyzing Twitter discourse as a representation of large-scale, real-time public perception. The research addresses two main problems: how public sentiment toward Generative AI is distributed and what dominant themes shape this perception. Accordingly, the objective is to map both emotional polarity and thematic narratives embedded in social media conversations. A computational mixed-methods approach was employed using a dataset of 12,470 tweets collected on 17 December 2024. Sentiment classification was conducted using a transformer-based DistilBERT model, while semantic representations were generated with Sentence-BERT. Topic modeling was performed using BERTopic, integrating HDBSCAN clustering and class-based TF-IDF to extract coherent and interpretable topics. Human-in-the-loop validation supported the interpretive robustness of topic labeling. The findings reveal that public sentiment toward Generative AI is predominantly positive (41.8%), particularly in relation to productivity enhancement, education, and creative applications. Neutral sentiment (31.4%) reflects informational discourse, while negative sentiment (26.8%) centers on ethical concerns, privacy risks, misinformation, and AI hallucinations. Seven dominant topics were identified, with clear topic–sentiment alignment showing optimism in utility-driven themes and skepticism in ethics- and risk-related discussions. In conclusion, public perception of Generative AI is dualistic: characterized by strong enthusiasm alongside persistent caution. These results provide empirical insights for AI governance, responsible innovation, and future research on socio-technical impacts of Generative AI.
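The topic–sentiment alignment step described above can be sketched as a simple cross-tabulation of topic and sentiment labels; the labels below are invented stand-ins for BERTopic clusters and DistilBERT sentiment outputs, not the study's data.

```python
from collections import Counter, defaultdict

def topic_sentiment(rows):
    """Cross-tabulate topic labels against sentiment labels and return
    each topic's sentiment shares."""
    counts = defaultdict(Counter)
    for topic, sentiment in rows:
        counts[topic][sentiment] += 1
    return {t: {s: n / sum(c.values()) for s, n in c.items()}
            for t, c in counts.items()}

# Invented (topic, sentiment) pairs mimicking the paper's pipeline outputs
rows = [("productivity", "positive"), ("productivity", "positive"),
        ("productivity", "neutral"), ("ethics", "negative"),
        ("ethics", "negative"), ("ethics", "positive")]
shares = topic_sentiment(rows)   # productivity skews positive, ethics negative
```

Aggregations of this shape are what let the authors report optimism in utility-driven themes and skepticism in ethics-related ones.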
Explainable End-to-End Autonomous Driving Using Vision-Based Deep Learning in Safety-Critical Scenarios Sasmoko, Dani; Adi Supriyono, Lawrence; Wijanarko Adi Putra, Toni
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 4 (2025): December: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i4.185

Abstract

End-to-end autonomous driving has emerged as a promising paradigm in which deep neural networks directly map raw visual inputs to continuous control actions. Despite its effectiveness, this approach suffers from limited transparency, posing significant challenges for deployment in safety-critical driving scenarios. This study addresses the lack of interpretability in vision-based end-to-end autonomous driving systems and aims to analyze model decision-making behavior under critical conditions such as sharp steering maneuvers and abrupt control transitions. To this end, an explainable end-to-end autonomous driving framework is proposed, combining a convolutional neural network trained via imitation learning with gradient-based visual attribution techniques, including Grad-CAM. The model predicts continuous steering, throttle, and braking commands directly from front-facing camera images, while explainability mechanisms are applied to reveal input regions influencing each control decision. Model performance is evaluated using both prediction accuracy and safety-oriented behavioral metrics. Experimental results show that the proposed explainable model achieves lower control prediction errors compared to a baseline end-to-end CNN, reducing steering mean squared error from 0.034 to 0.031, throttle error from 0.021 to 0.019, and brake error from 0.018 to 0.016. Moreover, safety-oriented analysis indicates improved driving stability, with steering variance reduced from 0.087 to 0.072 and abrupt control changes decreased from 14.6 to 10.3 events. Visual explanations consistently highlight road surfaces and lane-related structures during complex maneuvers, indicating reliance on semantically meaningful cues. In conclusion, the results demonstrate that integrating explainability into end-to-end autonomous driving not only preserves predictive performance but also correlates with smoother and more stable driving behavior. 
This framework contributes to the development of transparent and trustworthy autonomous driving systems suitable for safety-critical applications.
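The safety-oriented behavioral metrics reported above (steering variance and abrupt control changes) can be computed from a control trace as follows; the traces and the 0.2 jump threshold are illustrative, not the paper's values.

```python
def steering_variance(trace):
    """Variance of the steering signal over a driving episode."""
    mean = sum(trace) / len(trace)
    return sum((s - mean) ** 2 for s in trace) / len(trace)

def abrupt_changes(trace, threshold=0.2):
    """Count frame-to-frame steering jumps above the threshold,
    a simple proxy for the 'abrupt control changes' metric."""
    return sum(1 for a, b in zip(trace, trace[1:]) if abs(b - a) > threshold)

# Invented steering traces: a jittery baseline vs. a smoother model
baseline_trace = [0.0, 0.3, -0.2, 0.4, 0.0]
explainable_trace = [0.0, 0.1, 0.1, 0.2, 0.2]
```

Lower variance and fewer jumps are exactly the improvements the abstract reports (0.087 to 0.072, and 14.6 to 10.3 events) for the explainable model.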
Machine Learning-Based Spatiotemporal Modeling for Detecting Disease Hotspots in Primary Care Data Rachmatika, Rinna; Desyani, Teti; Khoirudin
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 4 (2025): December: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i4.188

Abstract

Diseases in primary health services exhibit complex spatial-temporal dynamics due to urbanization and population mobility. Conventional surveillance approaches struggle to capture these patterns adaptively. Machine learning (ML)-based spatio-temporal modeling offers a solution, with the ability to detect disease clusters automatically and with high precision. Research Objectives: This research aims to develop a machine learning model to detect disease hotspots from primary service data in Indonesia, with a focus on improving prediction accuracy, interpretability, and relevance to health policy. Methodology: The primary service dataset for 2024 (5,343 entries) was analyzed using three ML models: Gradient Boosting Machine (GBM), Temporal Random Forest (TRF), and Multi-EigenSpot, with spatial (village) and temporal (week, month) features. Performance evaluation includes predictive (AUC, F1-score) and spatial (Moran's I, Spatio-Temporal Correlation Index) metrics. Results: The results showed that Multi-EigenSpot achieved the best performance (AUC=0.91; F1=0.86), with the detection of dominant hotspots in Sungai Asam and Beringin Villages. Moran's I value of 0.63 indicates a strong spatial autocorrelation, while STCI=0.57 indicates moderate temporal stability. Conclusions: ML-based spatio-temporal models are effective in identifying hidden disease patterns and have the potential to be integrated into national digital surveillance systems. This approach supports precision public health by providing a scientific basis for real-time location- and time-based intervention policies.
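Moran's I, the spatial-autocorrelation statistic reported above, can be computed directly from area counts and an adjacency matrix; the village counts below are invented for illustration, not the study's data.

```python
def morans_i(values, weights):
    """Global Moran's I: > 0 means similar values cluster in space,
    ~0 random, < 0 dispersed. weights[i][j] is adjacency of areas i, j."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

# Four villages on a line, neighbours adjacent; cases cluster on one side
cases = [40, 35, 5, 3]
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
i_stat = morans_i(cases, adj)   # positive: high-count villages adjoin each other
```

A value like the study's 0.63 comes from the same formula applied to real case counts and a real village-adjacency matrix.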
Enhancing Transparency in Recommender Systems: An Explainable AI Approach Using MovieLens Noe'man, Achmad; Samsinar; Wibowo, Agung
Global Science: Journal of Information Technology and Computer Science Vol. 1 No. 4 (2025): December: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v1i4.190

Abstract

Recommender systems play a critical role in shaping user decisions across digital platforms; however, the increasing complexity of recommendation algorithms has raised serious concerns regarding transparency, trust, and accountability. This study focuses on enhancing the transparency of recommender systems by integrating Explainable Artificial Intelligence (XAI) techniques within a MovieLens-based recommendation framework. The primary problem addressed is the opacity of conventional recommendation models, which limits user understanding of why certain items are recommended and may reduce trust, perceived fairness, and system acceptance. Accordingly, the main objective of this research is to design and evaluate a hybrid explainable recommender system that balances predictive accuracy with human-understandable explanations. The proposed approach combines Matrix Factorization, feature-importance-aware neural networks, and knowledge graph embeddings to construct a robust recommendation model. To enhance explainability, multiple XAI strategies are integrated, including model-agnostic methods (LIME, SHAP, and CLIME), argumentation-based explanations, and context-aware personalized explanations. A comprehensive evaluation framework is employed, incorporating algorithmic metrics (accuracy, fidelity, robustness, counterfactual consistency, and fairness) alongside human-centered evaluations measuring trust, transparency, cognitive load, and perceived usefulness. Experimental results demonstrate that the knowledge graph–enhanced hybrid model achieves superior recommendation accuracy compared to baseline approaches. Moreover, context-aware explanations consistently outperform other methods in terms of fidelity, robustness, and user-perceived transparency, while argumentation-based explanations are found to be the most persuasive. CLIME offers a strong balance between technical stability and interpretability. 
The findings indicate that no single explainability technique is universally optimal; instead, hybrid and adaptive explanation strategies are most effective. In conclusion, this study confirms that human-centered, context-adaptive XAI significantly improves transparency and user trust in recommender systems, highlighting explainability as a fundamental component rather than an optional enhancement.
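The Matrix Factorization component of the hybrid model can be sketched with plain SGD on a toy MovieLens-style ratings list; the hyperparameters and ratings below are illustrative, not the study's configuration.

```python
import random

def factorize(ratings, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Plain SGD matrix factorization: learn user and item factor vectors
    whose dot products approximate the observed ratings."""
    rng = random.Random(seed)
    users = sorted({u for u, _, _ in ratings})
    items = sorted({i for _, i, _ in ratings})
    P = {u: [rng.uniform(-0.1, 0.1) for _ in range(k)] for u in users}
    Q = {i: [rng.uniform(-0.1, 0.1) for _ in range(k)] for i in items}
    for _ in range(steps):
        for u, i, r in ratings:
            err = r - sum(a * b for a, b in zip(P[u], Q[i]))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)   # regularized SGD step
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# Toy MovieLens-style ratings (user, movie, rating)
ratings = [("u1", "Matrix", 5), ("u1", "Titanic", 1),
           ("u2", "Matrix", 4), ("u2", "Titanic", 2)]
P, Q = factorize(ratings)
predict = lambda u, i: sum(a * b for a, b in zip(P[u], Q[i]))
```

An explainable recommender in the spirit of the study would pair such learned factors with LIME/SHAP-style attributions; the factors alone cover only the accuracy side of the trade-off the abstract discusses.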
Benchmarking Machine Learning Models for Large-Scale Loan Default Prediction Using Real Data Devianto, Yudo; Saragih, Rusmin; Cahyana, Yana
Global Science: Journal of Information Technology and Computer Science Vol. 2 No. 1 (2026): March: Global Science: Journal of Information Technology and Computer Science
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.181

Abstract

This research benchmarks multiple machine learning (ML) algorithms for large-scale loan default prediction using a real-world dataset of 255,000 borrower records, where default cases represent only ~9–12% of total observations. The study addresses the persistent gap in comparative analyses of ML models that balance predictive accuracy, interpretability, and computational efficiency for credit risk assessment. Seven models were evaluated (Logistic Regression, Random Forest, XGBoost, LightGBM, CatBoost, Artificial Neural Networks (ANN), and a Stacked Ensemble) using standardized preprocessing, hybrid imbalance handling (SMOTE, class weighting, under-sampling), and comprehensive evaluation metrics (AUC, F1, Recall, Precision, PR-AUC, and Brier Score). Empirical results show Logistic Regression achieved the highest AUC of 0.732, outperforming nonlinear models under the baseline configuration, while LightGBM attained perfect recall (1.0) but low precision (0.116), indicating over-prediction of defaults. Gradient boosting models demonstrated robust calibration (Brier ≈ 0.114–0.116) and the best computational efficiency, with LightGBM showing the fastest training and lowest memory use. CatBoost exhibited strong recall but the slowest computation, and ANN underperformed on tabular data (AUC ≈ 0.56). The Stacked Ensemble delivered balanced results with AUC = 0.664 and improved overall stability. These findings confirm that boosting-based models, particularly LightGBM and CatBoost, offer superior scalability and calibration, whereas Logistic Regression remains a valuable interpretable baseline. The study concludes that effective default prediction requires integrating rebalancing, calibration, and threshold optimization to enhance recall and operational deployment reliability in large-scale credit ecosystems.
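Two of the evaluation metrics used above, the Brier score and threshold-based precision/recall, are easy to state precisely in code; the scores and labels below are invented to mimic an imbalanced slice, not drawn from the 255,000-record dataset.

```python
def brier(probs, labels):
    """Brier score: mean squared gap between predicted default
    probabilities and actual outcomes (lower is better calibrated)."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def precision_recall(probs, labels, threshold=0.5):
    """Threshold the scores, then count true/false positives and misses."""
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # imbalanced: 2 defaults in 10
probs = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.6, 0.2, 0.7, 0.4]
b = brier(probs, labels)
prec, rec = precision_recall(probs, labels)
```

The threshold dependence visible here is why the study pairs calibration (Brier) with threshold optimization: lowering the cutoff trades precision for the recall that default detection needs.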
