Contact Name
Ai Munandar
Contact Email
ijitcsa@gmail.com
Phone
+6282111152015
Journal Mail Official
ijitcsa@gmail.com
Editorial Address
International Journal of Information Technology and Computer Science Applications (IJITCSA) Sekretariat Jejaring Penelitian dan Pengabdian Masyarakat (JPPM) : Ranau Estate Blok D.3, Kel. Panggungjati, Kp. Pantogan Kec. Taktakan - Kota Serang, Provinsi Banten, e-mail : jitcsa@jejaringppm.org web : www.jejaringppm.org
Location
Kota Serang,
Banten
INDONESIA
International Journal of Information Technology and Computer Science Applications (IJITCSA)
ISSN : 2964-3139     EISSN : 2985-5330     DOI : https://doi.org/10.58776/ijitcsa.v1i2
The International Journal of Information Technology and Computer Science Applications (IJITCSA) is an information technology and computer science publication. Applications of both fields to solving real-world cases are also welcome. IJITCSA accepts research articles, systematic reviews, literature studies, and other relevant contributions. Its focus areas include information technology and the computer science fields, including artificial intelligence, data science, data mining, machine learning, and deep learning. IJITCSA is published three times a year, in January, May, and September. The first issue, in January 2023, had eight articles.

Focus and Scope
The International Journal of Information Technology and Computer Science Applications includes scholarly writing on scientific research or review, pure research, and applied research in computer science, information systems, and information technology, as well as general reviews of developments in the related theory, methods, and applied sciences. Topics include: Information Systems; System Software; Artificial Intelligence; Computer Architecture; Distributed Systems; System & Software Engineering; Genomics & Bioinformatics; Internet and Web; AI & Expert Systems; Software Process and Life Cycle; Database Systems; Software Testing & Quality Assurance; Bioinformatics; Information Technology Implementation; Computing Languages & Algorithms; E-commerce & M-commerce; Computer Networks & Communications; Computing Systems; Control Systems & Engineering; Systems Engineering; System Security; Digital Forensics; Data Mining & Machine Learning; Data Modeling.
Articles: 63 Documents
Healthcare Data Integration Through Enterprise Data Warehousing: Architecture, Conformance Pipeline, and Experimental Validation for Readmission Analytics La Duy Ngôn
International Journal of Information Technology and Computer Science Applications Vol. 4 No. 1 (2025): January - April 2026
Publisher : Jejaring Penelitian dan Pengabdian Masyarakat

DOI: 10.58776/ijitcsa.v4i1.246

Abstract

Healthcare organizations operate a fragmented digital landscape in which hospital information systems (HIS), electronic health records (EHR), laboratory systems, billing platforms, and departmental applications are optimized for transaction processing but not for integrated analysis. The resulting interoperability gaps, semantic inconsistency, duplicated records, and uneven data quality constrain enterprise reporting and limit higher-value analytics. This paper proposes an implementable enterprise data warehouse architecture, formalizes its data-quality and conformance mechanisms, and validates the design through an experimental analytics use case. The proposed framework combines an integration layer for ETL/ELT, conformed dimensions, departmental marts, governance controls, and an analytics layer for OLAP and machine learning. To demonstrate practical value, the paper evaluates the framework on a de-identified inpatient diabetes dataset comprising 101,766 encounters and 50 raw attributes. The experimental pipeline performs profiling, conformance mapping, diagnosis grouping, missing-value treatment, and dimensional modeling before training benchmark readmission models. The best ranking performance is obtained by XGBoost with an AUROC of 0.688 and an AUPRC of 0.235, while threshold tuning improves recall-oriented operational utility. The results show that healthcare warehousing should not be framed merely as centralized storage; rather, it is an architectural mechanism for interoperability, data quality control, reproducible analytics, and decision support. The manuscript concludes with implementation guidance and limitations relevant to hospitals seeking a scalable, governance-aware warehousing program.
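The recall-oriented threshold tuning mentioned in the abstract can be sketched in a few lines. The scores and labels below are invented stand-ins for illustration, not the paper's data or its XGBoost model; lowering the decision threshold flags more encounters as likely readmissions, raising recall at some cost elsewhere.

```python
# Hedged sketch: trading off precision and recall by moving the decision
# threshold of a readmission classifier. All values are hypothetical.

def precision_recall(scores, labels, threshold):
    """Precision and recall when predicting positive iff score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical predicted probabilities and true readmission labels.
scores = [0.92, 0.71, 0.64, 0.55, 0.43, 0.31, 0.22, 0.15]
labels = [1,    1,    0,    1,    1,    0,    0,    0]

p_high, r_high = precision_recall(scores, labels, 0.6)  # conservative cut-off
p_low,  r_low  = precision_recall(scores, labels, 0.4)  # recall-oriented cut-off
print(f"t=0.6: precision={p_high:.2f}, recall={r_high:.2f}")
print(f"t=0.4: precision={p_low:.2f}, recall={r_low:.2f}")
```

In an operational early-warning setting, the threshold is chosen against the cost of missing a true readmission versus the cost of a spurious alert.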
Revisiting the IBM Retail Data Warehouse: A Governed One-Column Architecture and Reproducible Open-Dataset Validation for Retail Analytics Nayananda Karunaratne; Pulasthi Medhananda
International Journal of Information Technology and Computer Science Applications Vol. 4 No. 1 (2025): January - April 2026
Publisher : Jejaring Penelitian dan Pengabdian Masyarakat

DOI: 10.58776/ijitcsa.v4i1.247

Abstract

The IBM Retail Data Warehouse (RDW) correctly recognized the importance of integrated retail data, but it remained largely descriptive, did not formalize the underlying architecture, and lacked a reproducible empirical validation. This paper reconstructs and substantially extends that early proposal into a publication-ready research article. We first synthesize the historical IBM RDW, Retail Data Warehouse Model (RDWM), Retail Services Data Model (RSDM), and Retail Business Solution Template (RBST) concepts with contemporary data warehousing, data governance, and retail analytics literature. We then propose a governed, RDW-informed logical architecture that separates ingestion, quality control, conformed dimensional modeling, analytics marts, and decision-support services. To move beyond conceptual discussion, we instantiate the architecture with an open retail dataset from the UCI Machine Learning Repository containing 541,909 transactions. After governance-oriented preprocessing, the final analytical mart contains 392,692 valid rows, 18,532 orders, 4,338 customers, 3,665 products, and 37 countries. We formulate the transformation and forecasting workflow mathematically, define an end-to-end algorithmic pipeline, and evaluate a retail revenue forecasting task using naive, seasonal naive, linear regression, ridge regression, random forest, and gradient boosting baselines. On the hold-out test window, the best model (linear regression on warehouse-engineered features) achieves an RMSE of 4,302.61 GBP and R² = 0.9766, while a raw, ungoverned pipeline yields a much weaker RMSE of 10,068.59 GBP. This corresponds to a 57.27% reduction in RMSE attributable to governance and dimensional integration. The results show that the practical value of an RDW-like architecture is not merely organizational; when implemented as a governed analytical platform, it measurably improves reproducibility, interpretability, and forecasting quality.
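The reported 57.27% figure follows directly from the two RMSE values quoted in the abstract; a quick arithmetic check:

```python
# Verify the reported RMSE reduction attributable to governance and
# dimensional integration, using only the two values from the abstract (GBP).
rmse_governed = 4302.61   # linear regression on warehouse-engineered features
rmse_raw = 10068.59       # raw, ungoverned pipeline

reduction_pct = (rmse_raw - rmse_governed) / rmse_raw * 100
print(f"RMSE reduction: {reduction_pct:.2f}%")  # 57.27%
```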
A Lakehouse-Oriented Big Data Infrastructure for Educational Analytics: Integrating Administrative and Assessment Data for Early Student Risk Prediction Bhairav Kaphle; Biswajit Shrestha
International Journal of Information Technology and Computer Science Applications Vol. 4 No. 1 (2025): January - April 2026
Publisher : Jejaring Penelitian dan Pengabdian Masyarakat

DOI: 10.58776/ijitcsa.v4i1.248

Abstract

Educational institutions increasingly depend on heterogeneous digital systems, yet many analytics initiatives remain fragmented across student information, registration, assessment, and learning platforms. This paper proposes a lakehouse-oriented big data infrastructure for educational analytics and validates it through a reproducible early-risk prediction study using the Open University Learning Analytics Dataset (OULAD). The study integrates five public OULAD tables (student information, course registration, assessment metadata, student assessment submissions, and course presentation metadata) into temporally valid feature tables aligned to the student–module–presentation level. We define a windowed feature engineering framework that constructs actionable indicators such as submission rate, weighted completion score, average submission lag, and assessment coverage gap at 30%, 50%, 70%, and 100% of the course timeline. Two supervised classifiers, logistic regression and random forest, are evaluated under a stratified 80/20 protocol. The results show that administrative data alone provides weak discrimination (AUC ≈ 0.673), whereas integrated mid-course assessment evidence substantially improves performance. At the 50% course window, the random-forest model achieves an AUC of 0.947, F1 of 0.879, and recall of 0.829; even at the 30% window the model already reaches an AUC of 0.904. These findings demonstrate that the value of educational prediction depends not only on model choice but also on data integration architecture. The paper contributes (i) a lakehouse-oriented reference architecture for higher-education analytics, (ii) a temporally constrained feature engineering strategy for early-warning systems, and (iii) an empirical ablation showing that multi-source integration yields large and operationally meaningful gains.
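The temporally constrained feature construction described above can be sketched as follows. The records and field names are invented stand-ins for OULAD-style assessment submissions, not the paper's actual pipeline; the key property is that features at a given window fraction use only assessments due before the cut-off, so no later-course information leaks into an early prediction.

```python
# Hedged sketch of windowed feature engineering for early-warning
# prediction. Data and field names are hypothetical illustrations.

course_length_days = 240

# (assessment_due_day, submitted_day) with None meaning never submitted.
submissions = [
    (20, 18),
    (60, 65),
    (100, None),
    (140, 138),
    (200, 205),
]

def window_features(subs, course_len, fraction):
    """Submission rate and average submission lag, computed only from
    assessments whose due date falls inside the window cut-off."""
    cutoff = course_len * fraction
    in_window = [(due, done) for due, done in subs if due <= cutoff]
    completed = [(due, d) for due, d in in_window if d is not None]
    submission_rate = len(completed) / len(in_window) if in_window else 0.0
    avg_lag = (sum(d - due for due, d in completed) / len(completed)
               if completed else 0.0)
    return {"submission_rate": submission_rate, "avg_submission_lag": avg_lag}

print(window_features(submissions, course_length_days, 0.30))
print(window_features(submissions, course_length_days, 0.50))
```

Each student–module–presentation row would carry these features at 30%, 50%, 70%, and 100% of the timeline, giving the classifier progressively richer evidence as the course advances.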