Contact Name
Abdul Aziz
Contact Email
abdulazizbinceceng@gmail.com
Phone
+6282180992100
Journal Mail Official
journaleastasouth@gmail.com
Editorial Address
Grand Slipi Tower, level 42 Unit G-H Jl. S Parman Kav 22-24, RT. 01 RW. 04 Kel. Palmerah Kec. Palmerah Jakarta Barat 11480
Location
Kota Adm. Jakarta Barat,
DKI Jakarta
INDONESIA
The Eastasouth Journal of Information System and Computer Science
Published by Eastasouth Institute
ISSN: 3026-6041     EISSN: 3025-566X     DOI: https://doi.org/10.58812/esiscs
Core Subject: Science
ESISCS - The Eastasouth Journal of Information System and Computer Science is a peer-reviewed, open-access journal published three times a year (April, August, December) by Eastasouth Institute. ESISCS publishes articles in the fields of enterprise systems and applications, database management systems, decision support systems, knowledge management systems, e-commerce and e-business systems, business intelligence and analytics, information system security and privacy, human-computer interaction, algorithms and data structures, artificial intelligence and machine learning, computer vision and image processing, computer networks and communications, distributed and parallel computing, software engineering and development, information retrieval and web mining, and cloud computing and big data. ESISCS accepts manuscripts reporting both quantitative and qualitative research, and publishes three paper types: 1) review papers, 2) basic research papers, and 3) case study papers. ESISCS is indexed in Crossref and other indexing services. All submissions should be formatted in accordance with the ESISCS template and submitted through the Open Journal System (OJS) only.
Articles: 102 Documents
AI-Powered Quality Assurance and MIS Analytics: Building Resilient and Intelligent Digital Economies Sarker, Shakila; Nihat, Mashur Bin Mahmud
The Eastasouth Journal of Information System and Computer Science Vol. 3 No. 02 (2025): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v3i02.767

Abstract

Artificial intelligence (AI), predictive analytics, and management information systems (MIS) are converging to remake U.S. companies into smart, adaptive ecosystems that can sustain economic resilience, cybersecurity, and software quality assurance (QA). This study synthesizes the empirical and conceptual findings of 20 peer-reviewed articles published between 2023 and 2025 to establish an integrated AI–MIS–QA Resilience Framework (AMQRF) that unifies automation, analytics, and governance in critical sectors such as IT, health, energy, and supply-chain infrastructure. The meta-synthesis reveals that AI-driven predictive QA reduces software defect rates by 25–45%, MIS-based analytics increase operational visibility by 30–35%, and AI-driven cybersecurity models improve threat-detection accuracy by up to 40%. Taken together, these gains reframe enterprise resilience as a function of interconnected digital intelligence and organizational learning. The study concludes by recommending a governance-aware architecture in which predictive QA, business analytics, and MIS co-evolve to support sustainable competitiveness and national digital security.
Bibliometric Analysis of Human‑Centered AI Research in Southeast Asia (2015–2025) Judijanto, Loso; Diwyarthi, Ni Desak Made Santi
The Eastasouth Journal of Information System and Computer Science Vol. 3 No. 02 (2025): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v3i02.799

Abstract

This study conducts a bibliometric analysis of human-centered artificial intelligence (HCAI) research in Southeast Asia from 2015 to 2025, with the objective of mapping publishing trends, conceptual frameworks, and collaborative networks in the region. The investigation, utilizing the Scopus database and visualization tools such as VOSviewer and Bibliometrix, indicates that fundamental AI concepts (artificial intelligence, machine learning, and deep learning) function as pivotal anchors in the literature. These technical themes increasingly converge with human-centered areas, including explainable AI, user-centered design, ethical technology, and healthcare applications. The analysis identifies Singapore as the preeminent center for regional and international collaboration, followed by Malaysia, Indonesia, Vietnam, and the Philippines. Institutional networks highlight significant contributions from technological universities and medical research institutions. The results demonstrate a distinct transition toward integrative and value-oriented AI research that incorporates transparency, user empowerment, and social accountability into technical advancement. This study offers a comprehensive assessment of current scholarship and identifies prospects for future research, policymaking, and international collaboration in promoting human-centered AI throughout Southeast Asia.
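
To make the method concrete: VOSviewer-style keyword maps are built from co-occurrence counts over bibliographic records. The sketch below, using toy Scopus-style author-keyword lists invented for illustration (not data from the study), shows the core counting step.

```python
from itertools import combinations
from collections import Counter

# Toy Scopus-style author-keyword lists (hypothetical records,
# not records from the study's corpus).
records = [
    ["artificial intelligence", "machine learning", "explainable ai"],
    ["deep learning", "healthcare", "artificial intelligence"],
    ["explainable ai", "user-centered design", "artificial intelligence"],
]

# Count how often each pair of keywords appears in the same record;
# tools like VOSviewer build their co-occurrence maps from this kind of matrix.
cooccurrence = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for (a, b), n in cooccurrence.most_common(5):
    print(f"{a} <-> {b}: {n}")
```
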
Analysis of the Moral Obligations of AI Developers Thru the Principle of Explainability in the Perspective of Kantian Deontological Ethics: A Qualitative Study Fauziyah, Rizma; Winarno, Agung; Subagyo, Subagyo
The Eastasouth Journal of Information System and Computer Science Vol. 3 No. 02 (2025): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v3i02.816

Abstract

The proliferation of "Black Box" Artificial Intelligence systems creates a significant ethical void regarding accountability and user autonomy, fundamentally challenging the right of individuals to understand decisions affecting their lives. This study aims to analyze the moral obligations of AI developers to implement Explainability (XAI) using the rigorous normative framework of Kantian Deontological Ethics. Employing a qualitative research design with conceptual analysis, the study utilizes secondary data from Kant's foundational texts and contemporary literature on algorithmic transparency, applying the Categorical Imperative as the primary lens. The findings conclude that the deployment of non-explainable AI constitutes a direct violation of Kant’s Formula of Humanity, as it reduces users merely to means for achieving computational goals rather than treating them as autonomous, rational agents. Furthermore, the practice fails the Universal Law test, which prohibits the universalization of opacity in decision-making processes. Consequently, the study asserts that Explainability is a non-negotiable moral duty for developers, establishing that predictive accuracy cannot ethically justify the erosion of human autonomy, thereby demanding a paradigm shift from utilitarian efficiency to deontological adherence in AI development.
AI-Powered Data Analytics and Multi-Omics Integration for Next-Generation Precision Oncology and Anticancer Drug Development Sikder, Tawfiqur Rahman; Dash, Sourav; Uddin, Borhan; Hossain, Forhad
The Eastasouth Journal of Information System and Computer Science Vol. 1 No. 02 (2023): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v1i02.838

Abstract

The recent rapid evolution of artificial intelligence (AI), big data analytics, and multi-omics technologies is changing modern precision oncology. These tools have opened new opportunities for understanding tumor heterogeneity, drug response, and biomarker discovery. Traditional cancer therapies often fail because the genomic, transcriptomic, proteomic, and metabolomic differences between patients and within tumor microenvironments are not fully understood. Recent progress in computational intelligence, integrative omics pipelines, and machine-learning-driven drug discovery holds significant potential to personalize cancer treatment, identify new anticancer compounds, and accelerate the development of new therapeutics. This study provides a detailed analysis of how AI-enabled data analytics and multi-omics integration are transforming next-generation precision oncology and anticancer drug development. It synthesizes insights from recent studies, such as big-data-facilitated plant biotechnology for bioactive anticancer compounds (Ahmed et al., 2023), a machine-learning-enabled genomic selection framework (Saimon et al., 2023), AI-based ischemic stroke biomarker discovery (Manik, 2023), cervical cancer prediction (Manik, 2022), a predictive multi-omics system for neurodegenerative disease (Manik, 2021), and chronic disease analytics (Manik et al., 2021), to describe the potential of innovative computational frameworks to overcome existing limitations. Generative AI, deep learning, hybrid ML, and systems biology stand out as pillars of precision drug discovery, immuno-oncology improvement, high-throughput compound selection, and early diagnosis of various cancers. The paper then develops a conceptual AI-driven multi-omics architecture for real-world oncology applications. It demonstrates how the genomic, transcriptomic, epigenomic, proteomic, microbiomic, and metabolomic layers can be harmonized using machine learning, federated learning, Bayesian optimization, and network-based models. By addressing both recent and foundational literature, this work uncovers gaps in current oncology pipelines, suggests new AI strategies for real-world translation into clinical oncology, and thereby establishes the potential of bioinformatics-driven solutions in anticancer drug development. The results highlight the importance of interdisciplinary research and data science approaches in providing equitable, individualized, and high-precision cancer care.
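
As a minimal sketch of the feature-level (early) integration the abstract describes, the following code concatenates hypothetical per-patient omics matrices into one feature space and fits a classifier on a synthetic drug-response label. The shapes, features, and labels are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients = 40

# Hypothetical per-layer feature matrices (genomic, transcriptomic,
# proteomic); a real pipeline would load normalized omics assays here.
genomic        = rng.normal(size=(n_patients, 50))
transcriptomic = rng.normal(size=(n_patients, 30))
proteomic      = rng.normal(size=(n_patients, 20))
response       = rng.integers(0, 2, size=n_patients)  # synthetic drug-response label

# Early (feature-level) integration: harmonize layers into one matrix.
X = np.hstack([genomic, transcriptomic, proteomic])

model = LogisticRegression(max_iter=1000).fit(X, response)
print("training accuracy:", model.score(X, response))
```
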
Achieving Financial Certainty: A Unified Ledger Integrity System for Automated, End-to-End Reconciliation Kusumba, Surender
The Eastasouth Journal of Information System and Computer Science Vol. 1 No. 01 (2023): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v1i01.842

Abstract

Modern enterprises face mounting challenges in maintaining financial data integrity across fragmented system landscapes. Traditional reconciliation processes rely heavily on manual intervention and periodic batch processing. These methods introduce operational inefficiencies and elevate the risk of financial misstatement. Accounts Payable, General Ledger, Treasury, and Standard General Ledger systems operate independently with limited integration. Data moves between these platforms through scheduled transfers that create timing mismatches and semantic inconsistencies. Finance teams spend extensive time comparing reports and investigating discrepancies during period-end closing cycles. Human error compounds these challenges as staff manually validate thousands of transactions. The lack of real-time visibility prevents early detection of errors and fraud. Organizations need transformative solutions that automate reconciliation workflows and provide continuous financial assurance. Unified Ledger Integrity Systems address these critical gaps through centralized data architectures and intelligent automation. These platforms ingest transaction data from disparate sources into a single reconciliation engine. Rules-based matching algorithms identify corresponding transactions across systems automatically. Machine learning models enhance matching accuracy over time by learning from historical patterns. Exception management workflows route unmatched transactions to appropriate team members for investigation. Continuous processing occurs throughout the business day rather than in periodic batches. This architectural shift enables finance organizations to transition from reactive auditing to proactive data quality management. Real-time exception flagging allows immediate investigation while transaction context remains fresh. Comprehensive audit trails satisfy regulatory compliance requirements and support external auditor reliance on internal controls. Organizations adopting these platforms experience substantial reductions in closing cycle times and improvements in data accuracy. Finance professionals redirect their efforts from manual validation to strategic exception analysis. The technology establishes a resilient foundation for corporate governance and enables agile decision-making based on high-confidence financial information.
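
A minimal sketch of the rules-based matching step such an engine might perform, assuming a simple match key of reference and amount with a date tolerance; the ledger records, field names, and tolerance are hypothetical, not the system described in the paper.

```python
from datetime import date, timedelta

# Hypothetical transaction records from two systems; a real engine would
# ingest these from Accounts Payable and General Ledger feeds.
ap_ledger = [
    {"id": "AP-1", "ref": "INV-100", "amount": 250.00, "date": date(2023, 3, 1)},
    {"id": "AP-2", "ref": "INV-101", "amount": 75.50,  "date": date(2023, 3, 2)},
]
gl_ledger = [
    {"id": "GL-9", "ref": "INV-100", "amount": 250.00, "date": date(2023, 3, 2)},
]

def match(ap, gl, date_tolerance=timedelta(days=3)):
    """Rules-based matching: same reference and amount, dates within tolerance."""
    matched, exceptions = [], []
    unmatched_gl = list(gl)
    for a in ap:
        hit = next((g for g in unmatched_gl
                    if g["ref"] == a["ref"]
                    and g["amount"] == a["amount"]
                    and abs(g["date"] - a["date"]) <= date_tolerance), None)
        if hit:
            unmatched_gl.remove(hit)
            matched.append((a["id"], hit["id"]))
        else:
            exceptions.append(a["id"])  # routed to an analyst exception workflow
    return matched, exceptions

print(match(ap_ledger, gl_ledger))  # ([('AP-1', 'GL-9')], ['AP-2'])
```
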
Predicting Data Contract Failures Using Machine Learning Chirumamilla, Koteswara Rao
The Eastasouth Journal of Information System and Computer Science Vol. 1 No. 01 (2023): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v1i01.843

Abstract

Data contracts have emerged as a foundational mechanism for ensuring reliable communication between producers and consumers in modern distributed data ecosystems. They specify expected schemas, semantic intentions, and quality constraints, forming the basis for trustworthy data exchange across pipelines and organizational boundaries. Despite their growing adoption, contract violations remain a persistent operational challenge. These failures frequently stem from subtle schema shifts, unexpected type variations, incomplete records, or semantic inconsistencies introduced during upstream system changes. Traditional validation approaches—often built on static rules or manual inspection—struggle to keep pace with evolving datasets, diverse integration patterns, and continuous delivery cycles. As a result, contract breaches propagate downstream, causing pipeline interruptions, test instability, and avoidable production incidents. This paper presents a machine learning–driven framework designed to anticipate data contract failures before they manifest. The approach draws on both historical and real-time metadata, capturing patterns in schema evolution, anomaly trajectories, operational log signals, and field-level drift behavior. A hybrid modeling strategy is employed, combining gradient-boosted decision trees for structured anomaly detection, temporal drift modules for sequential pattern monitoring, and embedding-based schema representations for high-dimensional contract features. By integrating these components, the system provides early warning indicators that enable teams to intervene proactively rather than react after failures disrupt operations. The framework was evaluated using datasets from financial services, e-commerce platforms, and healthcare systems—domains characterized by diverse data heterogeneity and high operational sensitivity. Across these environments, the model achieved up to 79% accuracy in predicting contract violations, reduced downstream pipeline failures by 42%, and shortened incident triage time by 37%. These results highlight the potential of ML-driven predictive validation as a practical path toward resilient, self-monitoring data infrastructures in enterprise settings.
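
As a rough illustration of the structured-anomaly component, the sketch below trains a gradient-boosted classifier on hypothetical contract-metadata features (schema-change counts, null-rate drift, type-change flags); the features and synthetic labels are invented for illustration and are not the paper's model or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 500

# Hypothetical per-deployment metadata features of the kind the paper
# describes: schema-change count, null-rate drift, type-change flag,
# days since last contract update.
X = np.column_stack([
    rng.poisson(2, n),       # schema changes in the window
    rng.random(n),           # null-rate drift score
    rng.integers(0, 2, n),   # upstream type change observed
    rng.integers(0, 90, n),  # days since contract update
])
# Synthetic label: violations loosely driven by churn (illustration only).
y = ((X[:, 0] > 3) | (X[:, 2] == 1) & (X[:, 1] > 0.7)).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
print("violation risk:", clf.predict_proba([[5, 0.9, 1, 60]])[0, 1])
```
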
Reinforcement Learning to Optimize ETL Pipelines Chirumamilla, Koteswara Rao
The Eastasouth Journal of Information System and Computer Science Vol. 1 No. 02 (2023): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v1i02.844

Abstract

Extract–Transform–Load (ETL) pipelines remain a critical component of enterprise data infrastructure, supporting analytics, reporting, and machine learning by preparing raw data for downstream consumption. As organizations scale, these pipelines must process increasingly diverse datasets while adapting to shifting workloads, irregular input patterns, and evolving business requirements. Conventional optimization approaches rely on static rules, hand-tuned configurations, or heuristic scheduling, all of which struggle to maintain efficiency when system behavior changes over time. Manual tuning becomes particularly difficult in large environments where hundreds of pipelines compete for shared compute resources and experience unpredictable variations in data volume and schema complexity. This paper presents a reinforcement learning (RL)–based framework designed to autonomously optimize ETL execution without human intervention. The system formulates ETL optimization as a sequential decision-making problem, where an RL agent learns to select transformation ordering, resource allocation strategies, caching policies, and execution priorities based on the current operational state. State representations incorporate metadata signals, historical performance trends, data quality indicators, and real-time workload statistics. Through iterative reward-driven learning, the agent gradually identifies strategies that improve throughput, reduce processing cost, and stabilize pipeline performance across heterogeneous environments. The framework was evaluated in production-like settings spanning financial services, retail analytics, and telecommunications data operations. Across these domains, the RL-driven system reduced end-to-end execution time by 33%, lowered compute utilization costs by 27%, and increased data quality throughput by 41%. These results highlight the promise of reinforcement learning as a foundation for building adaptive, self-optimizing ETL systems that respond to operational variability and reduce the need for manual intervention. The work demonstrates a viable pathway toward autonomous data engineering platforms capable of supporting large-scale enterprise workloads.
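
To illustrate the reward-driven formulation in miniature, the sketch below runs a bandit-style value update over a toy two-state, two-action ETL resourcing problem; the states, actions, and reward table are invented assumptions, far simpler than the paper's full sequential formulation.

```python
import random

# Toy formulation: states are workload levels, actions are resource tiers.
states  = ["low_load", "high_load"]
actions = ["small_cluster", "large_cluster"]

def reward(state, action):
    # Hypothetical cost/throughput trade-off: large clusters pay off only
    # under high load; small clusters are cheaper when load is low.
    table = {("low_load", "small_cluster"): 1.0,
             ("low_load", "large_cluster"): -0.5,
             ("high_load", "small_cluster"): -1.0,
             ("high_load", "large_cluster"): 1.0}
    return table[(state, action)]

q = {(s, a): 0.0 for s in states for a in actions}
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    s = random.choice(states)
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda act: q[(s, act)])
    # One-step (bandit-style) update; full RL would bootstrap on the next state.
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

for s in states:
    print(s, "->", max(actions, key=lambda act: q[(s, act)]))
```
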
A Unified Multi-Signal Correlation Architecture for Proactive Detection of Azure Cloud Platform Outages Sannareddy, Sai Bharath; Sunkari, Suresh
The Eastasouth Journal of Information System and Computer Science Vol. 3 No. 02 (2025): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v3i02.845

Abstract

Cloud platforms constitute the operational substrate for modern digital enterprises, yet their internal health telemetry remains intrinsically opaque, delayed, and non-deterministic from the perspective of tenant-facing reliability engineering. Despite the extensive instrumentation available within Microsoft Azure—including Service Health advisories, Resource Health telemetry, and platform diagnostic exports—empirical evidence continually demonstrates structural limitations that impede timely identification of regional instabilities, control-plane disruptions, propagation inconsistencies, and multi-service correlated failures. These limitations introduce latency between fault inception and observable acknowledgement, creating blind spots that severely constrain operational response windows for high-availability systems. This paper presents a novel Unified Multi-Signal Correlation Architecture (UMSCA) designed to overcome inherent deficiencies in provider-sourced telemetry by constructing a proactive, cross-signal, time-aligned reliability intelligence layer. The proposed framework integrates four heterogeneous data modalities—Azure Service Health, Azure Resource Health, Event Hub–streamed diagnostic telemetry, and distributed synthetic endpoint instrumentation—and fuses them using (i) canonical semantic normalization, (ii) probabilistic temporal alignment, (iii) inter-signal divergence detection, and (iv) multi-source reliability inference models. A large-scale enterprise simulation comprising 40 subscriptions, 18 geo-diverse Azure regions, 1,200 heterogeneous cloud resources, and over 3.2M telemetry events demonstrates that UMSCA reduces Mean Time to Detect (MTTD) by 88%, improves multi-signal correlation accuracy to 92%, lowers false-positive escalation by 52%, and estimates cross-region blast radius with up to 93% accuracy.
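
A minimal sketch of the inter-signal divergence idea, assuming two hypothetical signal feeds: synthetic-probe failures accumulating while the provider-health feed still reports healthy is the early-warning condition such a layer would flag. Window sizes, thresholds, and record formats are illustrative, not UMSCA's actual design.

```python
from collections import defaultdict

# Hypothetical time-stamped signals (epoch minutes): provider health events
# and synthetic-probe outcomes for one region.
provider_health = {10: "healthy", 11: "healthy", 12: "healthy", 13: "degraded"}
probe_results   = [(10, True), (11, False), (11, False), (12, False), (13, False)]

# Aggregate probe failures per one-minute window.
failures, totals = defaultdict(int), defaultdict(int)
for minute, ok in probe_results:
    totals[minute] += 1
    failures[minute] += 0 if ok else 1

# Inter-signal divergence: probes failing while the provider still reports
# healthy is the blind spot a correlation layer is built to surface early.
for minute in sorted(totals):
    fail_rate = failures[minute] / totals[minute]
    if fail_rate >= 0.5 and provider_health.get(minute) == "healthy":
        print(f"t={minute}: divergence (fail rate {fail_rate:.0%}, provider says healthy)")
```
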
An Integrated Production Pipeline for 2D Animation in Cultural Heritage Visualization Zaliluddin, Dadan; Prasetyo, Tri Ferga; Ibrahim, Maulana
The Eastasouth Journal of Information System and Computer Science Vol. 3 No. 02 (2025): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v3i02.846

Abstract

The visualization of cultural heritage through digital media has become an effective approach to preserving and disseminating historical narratives to a wider audience. However, the production of 2D animation for cultural heritage visualization often faces challenges related to inconsistent workflows, inefficiencies in production stages, and the lack of structured integration between storytelling and technical animation processes. This study aims to design and implement an integrated production pipeline for 2D animation that supports systematic, efficient, and reproducible development of cultural heritage visualization. The proposed pipeline is structured into three main stages: pre-production, production, and post-production, incorporating storytelling design, visual asset development, animation principles, and compositing techniques. The research adopts a design-based research approach, using a local cultural heritage case study as the implementation context. Data were collected through observation, documentation, and iterative development of animation assets, followed by qualitative evaluation of workflow effectiveness and production consistency. The results demonstrate that the integrated pipeline improves production efficiency, enhances visual coherence, and supports accurate representation of cultural narratives. The proposed framework provides a practical reference for animators, educators, and researchers in developing 2D animation-based cultural heritage visualization. This study contributes to the field of animation production systems by offering a structured pipeline model that bridges technical animation processes with cultural storytelling requirements.
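
As a small illustration of how such a staged pipeline could be represented programmatically, the sketch below models the three stages as checklists with completion tracking; the task names and structure are hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    tasks: list
    done: set = field(default_factory=set)

    def complete(self, task):
        if task in self.tasks:
            self.done.add(task)

    def finished(self):
        return set(self.tasks) == self.done

# The three stages the paper describes, with illustrative task lists.
pipeline = [
    Stage("pre-production", ["story research", "script", "storyboard"]),
    Stage("production", ["asset design", "keyframing", "in-betweening"]),
    Stage("post-production", ["compositing", "sound", "final render"]),
]

pipeline[0].complete("script")
for stage in pipeline:
    print(stage.name, "finished" if stage.finished() else "in progress")
```
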
AI-Driven DevOps Automation for CI/CD Pipeline Optimization Kakarla, Roshan; Sannareddy, Sai Bharath
The Eastasouth Journal of Information System and Computer Science Vol. 2 No. 01 (2024): The Eastasouth Journal of Information System and Computer Science (ESISCS)
Publisher : Eastasouth Institute

DOI: 10.58812/esiscs.v2i01.849

Abstract

Modern CI/CD pipelines have become cognitively overloaded, policy-fragile, and operationally inefficient as software delivery scales across microservices, multi-cloud platforms, and regulated environments. While DevOps automation has improved deployment velocity, it remains largely rule-based, reactive, and incapable of reasoning over complex pipeline behavior, failure patterns, or governance constraints. This paper introduces a systemic, AI-driven DevOps automation framework designed to optimize CI/CD pipelines through continuous learning, risk-aware decision-making, and policy-aligned control. The core contribution is a closed-loop, intelligence-driven control plane that integrates telemetry inference, pipeline behavior modeling, and constrained decision automation to optimize build reliability, deployment throughput, and operational toil while preserving human oversight and enterprise governance. Unlike existing approaches that focus on isolated optimizations or tool-level enhancements, the proposed framework treats CI/CD as a distributed socio-technical system, addressing failure modes related to scale, drift, cognitive load, and compliance. We describe the architecture, lifecycle control flow, and governance mechanisms of the proposed system, and evaluate its impact using operational metrics such as mean time to detection (MTTD), mean time to recovery (MTTR), pipeline failure recurrence, and policy deviation rates. The results demonstrate that AI-driven DevOps automation, when designed as a governed control system rather than an autonomous executor, can materially improve reliability, safety, and delivery efficiency in enterprise CI/CD environments.
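
To ground the evaluation metrics the abstract names, the sketch below computes MTTD and MTTR from a hypothetical incident log; the records and field names are invented for illustration.

```python
from datetime import datetime

# Hypothetical incident log: fault start, detection, and recovery timestamps.
incidents = [
    {"start": datetime(2024, 1, 5, 9, 0), "detected": datetime(2024, 1, 5, 9, 12),
     "recovered": datetime(2024, 1, 5, 10, 0)},
    {"start": datetime(2024, 1, 8, 14, 0), "detected": datetime(2024, 1, 8, 14, 5),
     "recovered": datetime(2024, 1, 8, 14, 45)},
]

def mean_minutes(deltas):
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# MTTD: fault inception to detection; MTTR: detection to recovery.
mttd = mean_minutes([i["detected"] - i["start"] for i in incidents])
mttr = mean_minutes([i["recovered"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")
```
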
