Contact Name
Novianita Rulandari
Contact Email
journal@idscipub.com
Phone
+6282115151339
Journal Mail Official
journal@idscipub.com
Editorial Address
Gondangdia Lama Building 25, RP. Soeroso Street No.25, Jakarta, Indonesia, 10330
Location
Kota Adm. Jakarta Pusat,
DKI Jakarta
INDONESIA
Digitus : Journal of Computer Science Applications
ISSN : -     EISSN : 3031-3244     DOI : https://doi.org/10.61978/digitus
Core Subject : Science
Digitus : Journal of Computer Science Applications (ISSN 3031-3244, online), published by Indonesian Scientific Publication, is a leading peer-reviewed open-access journal. Since its establishment, Digitus has been dedicated to publishing high-quality research articles, technical papers, conceptual works, and case studies that undergo a rigorous peer-review process, ensuring the highest standards of academic integrity. With a focus on advancing knowledge and innovation in computer science applications, Digitus highlights the practical implementation of computer science theories to solve real-world problems. The journal provides a platform for academics, researchers, practitioners, and technology professionals to share insights, discoveries, and advancements in the field of computer science. With a commitment to fostering interdisciplinary approaches and technology-driven solutions, the journal aligns itself with global challenges and contemporary technological trends.
Articles 45 Documents
Early Prediction of At-Risk Students Using Minimal Data: A Machine Learning Framework for Higher Education Hamsiah; Adiyati, Nita; Subekti, Rino
Digitus : Journal of Computer Science Applications Vol. 3 No. 2 (2025): April 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i2.953

Abstract

Early identification of academically at-risk students is essential for timely intervention and improved retention in higher education. This study investigates the effectiveness of using pre-admission and early-semester LMS data to predict student risk using machine learning models. The objective is to assess whether limited, readily available data from the first four weeks of instruction can reliably support early warning systems. A supervised learning framework was applied using the Open University Learning Analytics Dataset (OULAD), with features derived from student demographics and early LMS activity logs. Models evaluated include Logistic Regression, XGBoost, and CatBoost, with time-based validation and SMOTE employed to address class imbalance. Model performance was measured using ROC AUC, F1 score, and recall. The CatBoost model achieved the best performance, with an F1 score of 0.770 and ROC AUC of 0.750, significantly outperforming baseline models. Quiz submission behavior, login frequency, and pre-admission qualification level emerged as the most predictive features. Results also revealed a steady week-by-week improvement in model accuracy, confirming the increasing value of LMS engagement data over time. These findings affirm that early-stage student data can be used effectively to predict academic risk, enabling institutions to act before major assessments are conducted. The study emphasizes the need for institutional readiness, ethical implementation, and inclusive practices in deploying predictive tools. Future research should expand the feature space and test cross-institutional generalizability to refine early warning systems further.
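As an illustration of the pipeline this abstract describes (time-based split, SMOTE on the training fold only, CatBoost scored by F1 and ROC AUC), here is a minimal Python sketch. The file name, feature columns, label column, and presentation cutoff are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch of an early-warning pipeline, assuming a preprocessed
# OULAD extract; column names and the cutoff are hypothetical.
import pandas as pd
from imblearn.over_sampling import SMOTE
from catboost import CatBoostClassifier
from sklearn.metrics import f1_score, roc_auc_score

df = pd.read_csv("oulad_week4_features.csv")          # hypothetical file
features = ["age_band", "highest_education", "logins_w1_4", "quiz_submits_w1_4"]
X, y = df[features], df["at_risk"]                    # hypothetical label column

# Time-based split: train on earlier presentations, test on the latest one.
train = df["presentation"] < "2014J"                  # hypothetical cutoff
X_tr, y_tr = pd.get_dummies(X[train]), y[train]
X_te, y_te = pd.get_dummies(X[~train]), y[~train]

# SMOTE only on the training fold so the test set stays untouched.
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = CatBoostClassifier(iterations=300, verbose=False)
model.fit(X_tr, y_tr)

X_te = X_te.reindex(columns=X_tr.columns, fill_value=0)
proba = model.predict_proba(X_te)[:, 1]
print("F1:", f1_score(y_te, proba > 0.5), "ROC AUC:", roc_auc_score(y_te, proba))
```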
Generalizable and Energy-Efficient Deep Reinforcement Learning for Urban Delivery Robot Navigation Samroh; Munthe, Era Sari
Digitus : Journal of Computer Science Applications Vol. 3 No. 2 (2025): April 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i2.954

Abstract

The increasing demand for contactless urban logistics has driven the integration of autonomous delivery robots into real-world operations. This study investigates the application of Deep Reinforcement Learning (DRL) to enhance robot navigation in complex urban environments, focusing on three advanced models: MODSRL, SOAR-RL, and NavDP. MODSRL employs a multi-objective framework to balance safety, efficiency, and success rate. SOAR-RL is designed to handle high obstacle densities using anticipatory decision making. NavDP addresses the sim-to-real gap through domain adaptation and few-shot learning. The models were trained and evaluated in simulation environments (CARLA, nuScenes, Argoverse) and validated using real-world deployment data. Evaluation metrics included success rate, collision frequency, and energy efficiency. MODSRL achieved a 91.3% success rate with only a 4.2% collision rate, outperforming baseline methods. SOAR-RL showed robust performance in obstacle-rich scenarios but highlighted a safety-efficiency trade-off. NavDP improved real-world success rates from 50% to 80% with minimal adaptation data, demonstrating the feasibility of sim-to-real transfer. The results confirm the effectiveness of DRL in advancing autonomous delivery navigation. Integrating domain generalization, hybrid learning, and real-time adaptation strategies will be essential to support large-scale urban deployment. Future research should prioritize explainability, continual learning, and user-centric navigation policies.
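To make the multi-objective balancing concrete, here is a toy sketch of the kind of scalarized reward a framework like MODSRL optimizes over safety, efficiency, and success. The weights and term definitions are illustrative assumptions, not the paper's actual reward function.

```python
# Illustrative multi-objective navigation reward; weights are assumptions.
def navigation_reward(reached_goal: bool, collided: bool,
                      progress_m: float, energy_j: float,
                      w_goal=10.0, w_collision=-20.0,
                      w_progress=1.0, w_energy=-0.01) -> float:
    """Scalarize safety, efficiency, and success into one DRL reward."""
    r = w_progress * progress_m + w_energy * energy_j   # efficiency terms
    if reached_goal:
        r += w_goal          # success bonus
    if collided:
        r += w_collision     # safety penalty dominates small gains
    return r

# Example step: 0.5 m of progress at 30 J with no terminal event.
print(navigation_reward(False, False, progress_m=0.5, energy_j=30.0))  # 0.2
```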
Decentralized Identity in FinTech: Blockchain-Based Solutions for Fraud Prevention and Regulatory Compliance Yuni T, Veronika; Soderi, Ahmad
Digitus : Journal of Computer Science Applications Vol. 3 No. 3 (2025): July 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i3.955

Abstract

The FinTech sector is facing escalating threats from identity theft and digital fraud, with global losses exceeding US$42 billion annually. This study explores how blockchain-based identity systems, particularly Verifiable Credentials (VC), Decentralized Identifiers (DID), and selective disclosure protocols, can enhance digital security, reduce onboarding time, and ensure compliance with evolving global standards. A qualitative and comparative methodology was applied, analyzing data from regulatory bodies (FTC, FATF, NIST), industry case studies, and technical frameworks (OpenID4VC, SD-JWT, W3C). Results reveal that blockchain identity solutions reduce fraud risk by preventing synthetic identity use, while significantly improving authentication success rates through biometric and passkey-based logins. Reusable KYC models integrated with VC/DID frameworks cut onboarding durations from weeks to days, demonstrating substantial operational efficiency. Furthermore, alignment with GDPR, eIDAS 2.0, and AML/CFT standards confirms the regulatory readiness of these systems. The findings suggest that decentralized identity offers a viable, scalable alternative to traditional identity verification, enabling secure, privacy-preserving, and user-controlled authentication. Despite challenges such as integration complexity and regulatory fragmentation, the strategic advantages in security and compliance position blockchain identity systems as essential tools for the future of FinTech.
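To make the selective disclosure idea concrete, here is a simplified salted-hash sketch in the spirit of SD-JWT: the issuer commits to per-claim digests, the holder discloses only chosen claims, and the verifier recomputes the digest. This is an illustration of the principle, not the SD-JWT specification itself.

```python
# Simplified selective disclosure: salted hash commitments per claim.
import hashlib, json, secrets

def commit(claims: dict):
    """Issuer: salt and hash each claim; only digests enter the credential."""
    salted = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
    digests = {k: hashlib.sha256(json.dumps([s, v]).encode()).hexdigest()
               for k, (s, v) in salted.items()}
    return salted, digests

def verify(name, salt, value, digests) -> bool:
    """Verifier: recompute the digest of a disclosed claim and compare."""
    return hashlib.sha256(json.dumps([salt, value]).encode()).hexdigest() == digests[name]

salted, digests = commit({"name": "A. User", "dob": "1990-01-01", "kyc_tier": 2})
salt, value = salted["kyc_tier"]                 # holder discloses only this claim
print(verify("kyc_tier", salt, value, digests))  # True; name and dob stay hidden
```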
Enhancing Software Quality Through Automated Code Review Tools: An Empirical Synthesis Across CI/CD Pipelines Gunawan, Budi; Sitorus, Anwar T
Digitus : Journal of Computer Science Applications Vol. 3 No. 4 (2025): October 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i4.956

Abstract

Automated Code Review Tools (ACRT) have become increasingly integral to modern software development workflows, particularly within continuous integration and deployment (CI/CD) environments. This study aims to evaluate the effectiveness of ACRT in improving software quality, accelerating vulnerability remediation, and enhancing developer productivity. Using a combination of empirical analysis, industry case studies, and academic benchmarks, we examine how tools such as SonarQube, CodeQL, Copilot Autofix, and secret scanners impact key quality metrics, including defect density, Mean Time to Repair (MTTR), and pull request (PR) throughput. A quasi-experimental design was employed using Interrupted Time Series (ITS) and Regression Discontinuity Design (RDD) to measure longitudinal outcomes across six open-source and enterprise projects. Results indicate that defect density decreased by 15–30% following ACRT adoption, accompanied by notable improvements in security MTTR. For example, Copilot Autofix reduced XSS remediation times from 180 minutes to just 22 minutes, underscoring the tool’s potential for accelerating vulnerability management. PR throughput also increased by up to 40%. However, this efficiency gain coincided with a 20–30% decline in human code review interactions, highlighting a trade-off between automation benefits and the reduced depth of manual oversight. We conclude that ACRT tools, when integrated thoughtfully into development pipelines, can deliver measurable improvements in software quality and responsiveness. However, sustained benefits require careful tuning, contextual alerting, and a hybrid review strategy that maintains human involvement to preserve long-term maintainability.
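The segmented-regression form behind an Interrupted Time Series analysis like the one described can be sketched with statsmodels: a level-change dummy and a post-intervention slope term around the adoption point. The weekly file, column names, and adoption week below are assumptions, not the study's data.

```python
# Segmented regression (ITS) sketch for defect density around ACRT adoption.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_defects.csv")         # hypothetical: week, defect_density
t0 = 26                                        # hypothetical adoption week
df["t"] = range(len(df))
df["post"] = (df["t"] >= t0).astype(int)       # level change at adoption
df["t_post"] = (df["t"] - t0).clip(lower=0)    # slope change after adoption

its = smf.ols("defect_density ~ t + post + t_post", data=df).fit()
print(its.summary())   # 'post' = immediate drop; 't_post' = trend shift
```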
Policy in Practice: A Systematic Review of WCAG 2.2 and ADA 2024 Effects on Web and Mobile Accessibility Purwandari, Nuraini; Dewi, Ratna Kusuma; Rinaldo; Sucipto, Purwo Agus
Digitus : Journal of Computer Science Applications Vol. 3 No. 2 (2025): April 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i2.957

Abstract

Digital accessibility remains a global concern, affecting 1.3 billion people with disabilities. This study evaluates the impact of two policy changes, WCAG 2.2 and the 2024 ADA Final Rule, on digital interface compliance. A systematic review was conducted using PRISMA 2020 guidelines. Data were sourced from academic databases and regulatory documents spanning 2015–2024. Studies were selected based on their relevance to WCAG/ADA compliance. Quality appraisal was carried out using the Mixed Methods Appraisal Tool (MMAT), and findings were synthesized narratively across web and mobile contexts. WCAG 2.2 added success criteria to improve usability for users with cognitive, motor, and visual impairments. ADA 2024 requires U.S. public sector platforms to meet WCAG 2.1 AA, while the European Accessibility Act shows uneven implementation among member states. WebAIM’s 2024 audit revealed that 95.9% of websites still fail basic accessibility checks, and mobile platforms show even lower compliance. Common issues include poor contrast, missing alt text, and inadequate touch targets. Automated tools alone are insufficient without assistive technology validation. Over-reliance on ARIA, limited developer training, and inconsistent policy enforcement persist as barriers to effective implementation. Regulatory updates represent progress but must be supplemented by education, standardized testing protocols, and user-involved design practices. Sustainable accessibility requires a shift from reactive compliance to proactive inclusivity, supported by policy, pedagogy, and participatory design.
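One of the automated checks the review discusses, color contrast, follows the WCAG 2.x relative-luminance and contrast-ratio formulas, sketched below. The sample colors are illustrative.

```python
# WCAG 2.x contrast check: relative luminance and contrast ratio.
def luminance(rgb):
    """sRGB relative luminance per WCAG (channels in 0-255)."""
    def chan(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast((119, 119, 119), (255, 255, 255))   # grey #777 text on white
print(round(ratio, 2), "passes AA" if ratio >= 4.5 else "fails AA (normal text)")
# ~4.48: narrowly fails the 4.5:1 AA threshold for normal-size text.
```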
Latency-Aware Edge Architectures for Industrial IoT: Design Patterns and Deterministic Networking Integration Harriz, Muhammad Alfathan
Digitus : Journal of Computer Science Applications Vol. 3 No. 3 (2025): July 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i3.958

Abstract

This study explores the design patterns and latency budgets required for real-time performance in edge-based Industrial Internet of Things (IIoT) systems. As industrial applications increasingly demand ultra-low latency for control loops and automation tasks, cloud computing architectures fall short of meeting strict timing requirements. The research investigates architectural configurations such as on-premises edge computing, hybrid edge↔cloud frameworks, and 5G Multi-access Edge Computing (MEC), all integrated with deterministic networking technologies like Time-Sensitive Networking (TSN). The methodology includes modeling latency partitions across communication, computation, and execution layers, evaluating IIoT protocols such as OPC UA PubSub and MQTT Sparkplug B, and measuring metrics like end-to-end latency, jitter, and deadline-miss percentages under realistic workloads. Results confirm that edge architectures, when combined with TSN and real-time operating environments, can achieve latency budgets as low as approximately 1 millisecond (ms) for servo loops and between 6–12 ms for machine vision tasks. These values highlight the feasibility of meeting industrial automation requirements. The conclusion underscores the importance of matching communication technologies (wired TSN versus 5G URLLC) to environmental constraints and specific application requirements. It also emphasizes the role of hybrid architectures and standardized protocols in enabling scalable, interoperable, and deterministic IIoT systems. This work contributes a validated framework for deploying real-time industrial systems capable of meeting the performance thresholds of Industry 4.0.
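The latency-partitioning idea (communication plus computation plus execution checked against a task's end-to-end budget) reduces to a few lines, sketched below. The per-layer figures are illustrative assumptions, not measured values from the study.

```python
# End-to-end latency budget check across the three layers the study models.
BUDGETS_MS = {"servo_loop": 1.0, "machine_vision": 12.0}

def check_budget(task, comm_ms, compute_ms, exec_ms):
    """Sum per-layer latencies and test them against the task's budget."""
    total = comm_ms + compute_ms + exec_ms
    return total, total <= BUDGETS_MS[task]

# TSN-scheduled servo loop: tight per-layer slices within ~1 ms.
print(check_budget("servo_loop", comm_ms=0.3, compute_ms=0.4, exec_ms=0.2))     # (0.9, True)
# Vision task over 5G MEC: larger slices within the 6-12 ms window.
print(check_budget("machine_vision", comm_ms=4.0, compute_ms=6.0, exec_ms=1.5)) # (11.5, True)
```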
Real-Time Mobility Intelligence: Evaluating Kafka-Based Pipelines in Global Smart Transit Systems Sugianto; Arainy, Corizon Sinar
Digitus : Journal of Computer Science Applications Vol. 3 No. 4 (2025): October 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i4.959

Abstract

Real-time streaming architectures are redefining the landscape of urban transit analytics by enabling low-latency, data-driven decision making. This study evaluates and compares the real-time data processing capabilities of public transit systems in London, New York, and Singapore. The objective is to determine how architectural choices, data freshness, and machine learning integration influence key performance indicators such as latency, ETA accuracy, and anomaly detection. The methodology involves a multi-city case study, where Kafka-based pipelines integrated with Apache Flink and Spark were assessed for ingestion, processing, and service delivery. Datasets included GTFS Realtime, SIRI feeds, and contextual APIs (e.g., speed bands and crowd density). Metrics for evaluation included feed latency, mean absolute error (MAE) and root mean square error (RMSE) for ETA, and response times for anomaly detection. The results demonstrate that Singapore’s transit system outperformed its counterparts with the lowest latency (~12 s), highest ETA accuracy (MAE = 18 s; RMSE = 25 s), and superior anomaly detection via multi-sensor fusion. London and New York, while technologically robust, faced constraints due to longer feed update intervals and integration complexities. Kafka-ML’s online learning enhanced model adaptability, significantly reducing ETA prediction errors across dynamic conditions. Furthermore, stress testing revealed Singapore’s architecture as the most resilient under peak load. The study concludes that the effectiveness of real-time urban transit systems depends on harmonizing streaming infrastructure... Singapore’s architecture may serve as a reference model for other cities, provided contextual differences in implementation are recognized. Ethical considerations, including data governance and passenger privacy, are essential for sustainable implementation.
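The ETA accuracy metrics used in the comparison, MAE and RMSE over paired predicted and observed arrivals, reduce to a few lines, sketched below. The sample values are invented for illustration.

```python
# ETA error metrics over paired predicted/observed arrival times (seconds).
from math import sqrt

def eta_errors(pred_s, actual_s):
    errs = [p - a for p, a in zip(pred_s, actual_s)]
    mae = sum(abs(e) for e in errs) / len(errs)          # mean absolute error
    rmse = sqrt(sum(e * e for e in errs) / len(errs))    # root mean square error
    return mae, rmse

pred = [300, 245, 410, 95]      # hypothetical predicted arrivals
actual = [318, 240, 395, 110]   # hypothetical observed arrivals
print(eta_errors(pred, actual)) # MAE = 13.25 s, RMSE ~ 14.1 s
```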
Real-Time Traffic Engineering with In-Band Telemetry in Software-Defined Data Centers Nugroho, Aryo; Juwari; Marthalia, Lia
Digitus : Journal of Computer Science Applications Vol. 3 No. 3 (2025): July 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i3.974

Abstract

As data centers scale to accommodate dynamic workloads, real-time and fine-grained traffic engineering (TE) becomes critical. Software-Defined Networking (SDN) offers centralized control over data flows, yet its effectiveness is constrained by traditional telemetry mechanisms that lack responsiveness. In-Band Network Telemetry (INT) addresses this gap by embedding real-time path metrics directly into packets, enabling adaptive traffic control based on live network conditions. This study implements and evaluates INT in a programmable Clos fabric using P4-enabled switches. It compares three TE strategies: static ECMP, switch-assisted CONGA, and INT-informed HULA (INT-HULA). The simulation incorporates synthetic and trace-based data center workloads, including elephant flows and incast scenarios. Performance is assessed using flow completion time (FCT), queue depth, link utilization, and failure recovery speed. INT metadata sizes (32–96 bytes) are also analyzed to quantify overhead versus performance trade-offs. Results indicate that INT-HULA consistently outperforms ECMP and CONGA. It reduces FCT by up to 50%, decreases queue occupancy by a factor of three, increases link utilization by more than 25%, and shortens reroute times from 85 ms to 20 ms. These gains are achieved with manageable telemetry overhead and without requiring hardware changes. INT’s real-time visibility also improves decision making in centralized SDN controllers and supports hybrid TE architectures. In conclusion, INT fundamentally enhances SDN-based TE by enabling closed-loop, real-time optimization. Its integration with programmable data planes and potential for AI-based control loops position it as a cornerstone of next-generation data center networks.
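A toy sketch of the HULA-style decision INT enables: aggregate per-hop metadata for each candidate path and steer flows toward the least-congested one. The one-byte-per-hop queue-depth layout below is a deliberate simplification of real INT headers.

```python
# HULA-style next-hop choice from simplified INT per-hop metadata.
def path_utilization(int_metadata: bytes) -> int:
    """A path's congestion signal is its worst (max) per-hop queue depth."""
    return max(int_metadata)

def best_path(telemetry: dict) -> str:
    """Pick the least-congested path, as a HULA-like probe aggregator would."""
    return min(telemetry, key=lambda p: path_utilization(telemetry[p]))

# Queue depths reported by INT probes along three ECMP-equivalent paths.
telemetry = {
    "spine1": bytes([12, 40, 8]),
    "spine2": bytes([5, 9, 7]),
    "spine3": bytes([30, 3, 2]),
}
print(best_path(telemetry))   # spine2: worst-hop depth 9 beats 40 and 30
```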
Evaluating Deep Learning Models for Humanitarian Sentiment Classification in Crisis Tweets: A Benchmark Study Junaedi, Edi
Digitus : Journal of Computer Science Applications Vol. 3 No. 4 (2025): October 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i4.975

Abstract

Social media platforms have emerged as essential channels for real-time crisis communication, offering valuable insights into public sentiment and humanitarian needs during emergencies. This study benchmarks the performance of state-of-the-art deep learning models for classifying sentiment and humanitarian relevance in crisis-related tweets. Using publicly available datasets (CrisisMMD, HumAID, and CrisisBench), we evaluate three architectures: IDBO-CNN-BiLSTM, BERTweet, and CrisisTransformers. These models were assessed using cross-validation and standard performance metrics (accuracy, F1 score, precision, and recall). Results indicate that CrisisTransformers outperform both traditional CNN-LSTM hybrids and general-purpose transformers, achieving an accuracy of 0.861 and an F1 score of 0.847. Domain-specific pretraining significantly enhances contextual understanding, particularly in multilingual and ambiguous tweet scenarios. While transformer models offer superior classification capabilities, their computational complexity poses challenges for real-time deployment. Additionally, operational risks, such as data bias and misinformation, necessitate careful management through structured human oversight and the integration of explainable AI mechanisms. This research provides a robust comparison of NLP models for crisis applications and recommends strategies for effective deployment, including bias mitigation and fairness-aware learning. The findings contribute to building ethical and efficient NLP systems for humanitarian response.
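A minimal sketch of scoring tweets with a BERTweet checkpoint via HuggingFace Transformers follows. The three-way label head is an assumption for illustration: the vinai/bertweet-base checkpoint ships without task labels and would need fine-tuning on a dataset such as HumAID or CrisisMMD before the outputs are meaningful.

```python
# Sketch: classify crisis tweets with BERTweet; the classification head
# here is randomly initialized and must be fine-tuned before real use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/bertweet-base", num_labels=3)   # e.g. negative/neutral/positive

tweets = ["Bridge collapsed near the river, people need rescue boats now"]
batch = tok(tweets, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)   # class probabilities; fine-tune the head before deployment
```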
Balancing Performance, Cost, and Sustainability in Software Engineering Munthe, Era Sari; Marthalia, Lia
Digitus : Journal of Computer Science Applications Vol. 3 No. 3 (2025): July 2025
Publisher : Indonesian Scientific Publication

DOI: 10.61978/digitus.v3i3.1075

Abstract

The environmental impact of Information and Communication Technology (ICT) has become a global concern, especially with the increasing energy consumption of data centers, artificial intelligence, and software systems. This narrative review explores how green computing and sustainable software engineering practices can address these environmental challenges. Using a systematic search across Scopus, IEEE Xplore, Web of Science, and Google Scholar, the review identifies best practices in integrating sustainability across the software lifecycle. Key findings reveal that energy-efficient coding, optimized database systems, and green AI strategies can significantly reduce energy use and carbon emissions. Cloud and serverless architectures offer additional sustainability potential when paired with proper energy monitoring tools. The review also highlights how educational reforms and organizational governance play essential roles in promoting eco-conscious practices. However, challenges persist. These include limited awareness among practitioners, lack of standardized metrics for software sustainability, and weak cross-disciplinary collaboration. Regional disparities also influence adoption, with Europe leading due to stronger policy frameworks, while Asia and North America show mixed trends. This study concludes that integrating sustainability into software engineering requires both technical innovations and systemic reforms. Future research should focus on empirical validation of sustainability frameworks, development of standard evaluation metrics, and promotion of interdisciplinary approaches. Sustainable ICT practices are not only an environmental necessity but also a strategic imperative for the future of digital innovation.
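The back-of-envelope arithmetic behind such green-software comparisons (energy in kWh times the grid's carbon intensity) can be sketched as follows. The power draw, runtimes, and grid factor are illustrative assumptions, not figures from the review.

```python
# Energy/carbon estimate for comparing a baseline vs. an optimized workload.
def carbon_kg(avg_power_w: float, hours: float, grid_kg_per_kwh: float) -> float:
    """Energy (kWh) = W x h / 1000; emissions = energy x grid intensity."""
    return avg_power_w * hours / 1000.0 * grid_kg_per_kwh

baseline = carbon_kg(avg_power_w=250, hours=10, grid_kg_per_kwh=0.4)   # 1.0 kg
optimized = carbon_kg(avg_power_w=250, hours=6, grid_kg_per_kwh=0.4)   # 0.6 kg
print(f"saved {baseline - optimized:.2f} kg CO2e per run")             # 0.40 kg
```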