Contact Name
Adam Mudinillah
Contact Email
adammudinillah@staialhikmahpariangan.ac.id
Phone
+6285379388533
Journal Mail Official
adammudinillah@staialhikmahpariangan.ac.id
Editorial Address
Jorong Kubang Kaciak Dusun Kubang Kaciak, Kelurahan Balai Tangah, Kecamatan Lintau Buo Utara, Kabupaten Tanah Datar, Provinsi Sumatera Barat, Kodepos 27293.
Location
Kab. Tanah Datar,
Sumatera Barat
INDONESIA
Journal of Computer Science Advancements
ISSN: 3026-3379     EISSN: 3024-899X     DOI: https://doi.org/10.70177/jsca
Core Subject: Science
Journal of Computer Science Advancements is an international peer-reviewed journal dedicated to the exchange of high-quality research results in all aspects of science, engineering, and information technology. The journal publishes state-of-the-art papers on fundamental theory, experiments, and simulation, as well as applications, featuring a systematically proposed method, a sufficient review of previous work, an expanded discussion, and a concise conclusion. As part of its commitment to the advancement of science and technology, the Journal of Computer Science Advancements follows an open access policy that makes published articles freely available online without any subscription.
Articles: 5 Documents
Search results for issue "Vol. 3 No. 5 (2025)": 5 Documents
STUDENT GRADUATION PREDICTION USING DECISION TREE ALGORITHM WITH CRISP-DM METHOD (CASE STUDY: ITB AHMAD DAHLAN) Husni, Kholilah; Sestri, Elliya; Terisia, Vany
Journal of Computer Science Advancements Vol. 3 No. 5 (2025)
Publisher : Yayasan Adra Karima Hubbi

DOI: 10.70177/jsca.v3i5.2429

Abstract

On-time graduation is an important indicator of higher education effectiveness; however, delays in student graduation are still observed at ITB Ahmad Dahlan Jakarta. This study develops a student graduation prediction system using the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology and the Decision Tree algorithm based on historical academic data. The model was built through six CRISP-DM stages, including problem understanding, data preparation, modeling, and evaluation. Testing results indicate high performance with an Accuracy of 97.44%, Precision of 97.14%, Recall of 100%, and F1-Score of 98.55%. This system has the potential to support strategic decision-making to enhance academic quality through data-driven approaches.
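The reported evaluation metrics can be reproduced from a confusion matrix. The counts below are hypothetical values chosen only to be consistent with the published percentages; the paper's raw confusion matrix is not shown here.

```python
# Hypothetical confusion-matrix counts (assumed, not from the paper)
# that reproduce the reported metrics for the graduation classifier.
TP, FP, FN, TN = 34, 1, 0, 4

accuracy  = (TP + TN) / (TP + FP + FN + TN)                 # 38/39
precision = TP / (TP + FP)                                  # 34/35
recall    = TP / (TP + FN)                                  # 1.0 (no false negatives)
f1        = 2 * precision * recall / (precision + recall)   # 68/69

print(f"Accuracy={accuracy:.2%}  Precision={precision:.2%}  "
      f"Recall={recall:.2%}  F1={f1:.2%}")
```

Running this prints Accuracy=97.44%, Precision=97.14%, Recall=100.00%, F1=98.55%, matching the figures quoted in the abstract.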
PEELING THE WILLOW CHIP GOOGLE’S BREAKTHROUGH IN TAMING QUANTUM ERROR Dara, Ravi; Dara, Chenda; Sothy, Chak
Journal of Computer Science Advancements Vol. 3 No. 5 (2025)
Publisher : Yayasan Adra Karima Hubbi

DOI: 10.70177/jsca.v3i5.3324

Abstract

The realization of fault-tolerant quantum computing is currently impeded by the stochastic nature of qubit decoherence and the inherent complexity of scaling control systems. This study rigorously evaluates the architectural innovations of Google’s Willow processor, specifically investigating its efficacy in mitigating noise through surface code error correction. The primary objective is to verify the hypothesis of exponential error suppression within a superconducting transmon array, determining if the system can surpass the critical “break-even” point. Methodologically, the research employs a quantitative performance analysis, configuring physical qubits into logical units of varying code distances (d=3 to d=7) and subjecting them to sustained syndrome extraction cycles under millikelvin cryogenic conditions. Results indicate a fundamental departure from previous scaling paradoxes; logical error rates were observed to halve with every increment in code distance, definitively crossing the algorithmic break-even threshold. The data confirms that real-time decoding and optimized tunable coupler designs effectively isolate errors, preventing topological lattice corruption. In conclusion, the Willow chip provides empirical validation that increasing system size now yields higher fidelity, establishing a critical engineering baseline for the development of large-scale, utility-grade quantum computers.
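The exponential suppression described above can be illustrated with the standard surface-code scaling model. The prefactor `A` and the suppression factor `LAMBDA = 2` are assumed values for this sketch; the abstract reports only the qualitative halving per step in code distance.

```python
# Illustrative model of exponential logical-error suppression in a
# surface code: p_L(d) ~ A / LAMBDA ** ((d + 1) / 2).
# A and LAMBDA are assumed here; LAMBDA = 2 mirrors the reported
# halving of the logical error rate per code-distance increment.
A, LAMBDA = 0.1, 2.0

def logical_error_rate(d: int) -> float:
    """Estimated logical error rate at (odd) code distance d."""
    return A / LAMBDA ** ((d + 1) / 2)

for d in (3, 5, 7):
    print(f"d={d}: p_L ~ {logical_error_rate(d):.2e}")
```

Each step from distance d to d+2 halves the modeled logical error rate, which is the "exponential error suppression" behavior the paper attributes to crossing the break-even point.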
QUANTUM ADVANTAGE HAS ARRIVED: TANGIBLE IMPACTS ON DRUG DISCOVERY AND NEW MATERIALS Wei, Li; Hui, Zhou; Yang, Liu
Journal of Computer Science Advancements Vol. 3 No. 5 (2025)
Publisher : Yayasan Adra Karima Hubbi

DOI: 10.70177/jsca.v3i5.3325

Abstract

The advancement of computational chemistry is currently stalled by the exponential memory scaling required to simulate strongly correlated electron systems on classical supercomputers. This fundamental barrier significantly impedes the rational design of complex pharmaceuticals and next-generation catalytic materials. This research aims to rigorously validate the immediate utility of Noisy Intermediate-Scale Quantum (NISQ) processors, demonstrating that “Quantum Advantage” has shifted from a theoretical milestone to a practical industrial reality. We employed a comparative research design utilizing the Variational Quantum Eigensolver (VQE) algorithm on the IBM Eagle quantum processor. The study targeted the electronic structure of iron-sulfur clusters and KRAS-G12C inhibitor binding sites, benchmarking quantum outputs against classical Density Functional Theory (DFT) and Full Configuration Interaction (FCI) standards, utilizing Zero-Noise Extrapolation for error mitigation. Results indicate that quantum simulations achieved chemical accuracy (within 1.6 kcal/mol) for these complex systems, whereas classical methods failed with deviations exceeding 8 kcal/mol. The data confirms that quantum hardware can now resolve electronic correlations invisible to classical approximation. We conclude that quantum computing offers a tangible, immediate pathway to accelerate discovery cycles in drug development and material science, necessitating the integration of hybrid quantum workflows into modern R&D pipelines.
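The chemical-accuracy benchmark described above reduces to a simple threshold test against a reference energy. The deviation values below are hypothetical, chosen only to mirror the reported behavior (VQE within 1.6 kcal/mol, DFT deviating by more than 8 kcal/mol).

```python
# Chemical accuracy is conventionally 1.6 kcal/mol (~1 kcal/mol is also
# used); deviations are measured against a high-level reference such as FCI.
CHEMICAL_ACCURACY = 1.6  # kcal/mol

# Hypothetical deviations from the FCI reference (assumed for illustration).
deviations = {
    "VQE (error-mitigated)": 1.1,
    "DFT (classical baseline)": 8.3,
}

def chemically_accurate(deviation_kcal_mol: float) -> bool:
    """True if the method's energy error is within chemical accuracy."""
    return abs(deviation_kcal_mol) <= CHEMICAL_ACCURACY

for method, dev in deviations.items():
    print(f"{method}: {dev:+.1f} kcal/mol -> accurate={chemically_accurate(dev)}")
```

Under this criterion the sketched VQE result passes and the classical baseline fails, matching the comparison the abstract reports.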
THE QUBIT PARADOX: WHY MORE QUBITS ACTUALLY LOWER ERROR RATES? Fujita, Miku; Suzuki, Ren; Nishida, Daiki
Journal of Computer Science Advancements Vol. 3 No. 5 (2025)
Publisher : Yayasan Adra Karima Hubbi

DOI: 10.70177/jsca.v3i5.3329

Abstract

Increasing the number of physical qubits intuitively introduces greater cumulative noise and control complexity. This “Qubit Paradox” presents a fundamental barrier to scalability, suggesting that larger systems might become inherently less stable. This research aims to rigorously validate the threshold theorem, defining the precise boundary where topological protection overcomes physical noise accumulation. We utilized high-fidelity Monte Carlo simulations of Rotated Surface Codes, scaling from distance d=3 to d=9, under realistic circuit-level noise models including leakage and crosstalk. Decoding was executed using the Minimum Weight Perfect Matching (MWPM) algorithm to analyze logical failure rates across 10^9 error correction cycles. Results identify a critical physical error threshold of approximately 0.57%. Below this value, logical error rates exhibited exponential suppression, falling by seven orders of magnitude at distance 9. Conversely, systems operating above this threshold demonstrated error amplification with increased scale. We conclude that the paradox resolves only when individual gate fidelity surpasses the threshold, mandating that hardware optimization precede quantitative scaling. These findings establish a validated roadmap for the transition from the NISQ era to fault-tolerant architecture.
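The two regimes of the threshold theorem can be sketched with the standard surface-code scaling ansatz p_L(d) ≈ A · (p/p_th)^((d+1)/2). Only the 0.57% threshold comes from the abstract; the prefactor `A` and the sample physical error rates are assumed for this sketch.

```python
# Threshold-theorem scaling ansatz for the surface code:
#   p_L(d) ~ A * (p / p_th) ** ((d + 1) / 2)
# P_TH = 0.57% is the threshold reported in the abstract; A is assumed.
A, P_TH = 0.1, 0.0057

def logical_error(p_phys: float, d: int) -> float:
    """Modeled logical error rate at code distance d for physical rate p_phys."""
    return A * (p_phys / P_TH) ** ((d + 1) / 2)

# Below threshold (p = 0.1%): growing d suppresses logical errors.
below = [logical_error(0.001, d) for d in (3, 5, 7, 9)]
# Above threshold (p = 1%): growing d amplifies them instead.
above = [logical_error(0.010, d) for d in (3, 5, 7, 9)]

print("below threshold:", [f"{p:.1e}" for p in below])
print("above threshold:", [f"{p:.1e}" for p in above])
```

The same formula thus produces exponential suppression below the threshold and error amplification above it, which is exactly the resolution of the paradox the paper describes.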
MIMICKING THE HUMAN BRAIN: NEUROMORPHIC ARCHITECTURE SOLUTIONS FOR AI ENERGY EFFICIENCY Anh, Nguyen Tuan; Anh, Le Thi Lan; Thao, Pham Thanh
Journal of Computer Science Advancements Vol. 3 No. 5 (2025)
Publisher : Yayasan Adra Karima Hubbi

DOI: 10.70177/jsca.v3i5.3330

Abstract

The exponential proliferation of Artificial Intelligence (AI) is currently constrained by the “memory wall” and excessive power consumption inherent in traditional Von Neumann architectures. This study addresses these physical limitations by proposing a bio-inspired neuromorphic architecture that integrates memristive crossbar arrays with event-driven Spiking Neural Networks (SNNs) to mimic biological synaptic efficiency. The research employs a quantitative cross-layer simulation framework to benchmark the proposed design against industry-standard GPUs and TPUs, utilizing standard datasets to evaluate inference latency, power dissipation, and classification accuracy. Results indicate that the neuromorphic architecture achieves a reduction in energy consumption by orders of magnitude (0.12 pJ/operation) compared to baseline accelerators, with power usage scaling linearly with input sparsity. Although a minor trade-off in precision was observed due to device stochasticity, the system maintained a competitive classification accuracy of 92.4%. The study concludes that mimicking the asynchronous nature of the human brain offers a sustainable paradigm for “Green AI,” validating neuromorphic computing as a critical solution for overcoming the energy crisis in next-generation edge intelligence and autonomous systems.
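The linear scaling of power with input sparsity can be illustrated with a back-of-the-envelope energy model. Only the 0.12 pJ/operation figure comes from the abstract; the synaptic-operation counts and activity levels are assumed for this sketch.

```python
# Event-driven SNN energy model: only spiking (active) synaptic operations
# consume energy, so total energy scales linearly with input sparsity.
# 0.12 pJ/op is the figure reported in the abstract; op counts are assumed.
ENERGY_PER_OP_PJ = 0.12

def inference_energy_pj(synaptic_ops: int, activity: float) -> float:
    """Energy (pJ) for one inference at a given fraction of active ops."""
    return synaptic_ops * activity * ENERGY_PER_OP_PJ

dense  = inference_energy_pj(1_000_000, activity=1.0)  # fully active network
sparse = inference_energy_pj(1_000_000, activity=0.1)  # 10% spike activity

print(f"dense:  {dense / 1e6:.3f} uJ")
print(f"sparse: {sparse / 1e6:.4f} uJ")
```

Halving the spike activity halves the modeled energy, which captures the "power usage scaling linearly with input sparsity" claim in the abstract.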

Page 1 of 1 | Total Records: 5