The Role of Deep Learning in Cancer Detection: A Systematic Review of Architectures, Datasets, and Clinical Applicability
Abdurrahman, Muhammad Farhan; Rianto, Yan; Hamzah, Nasir; Firmansyah, Muhammad; Prawira, Nurul Adi; Nugraha, Thomas Fajar
Jurnal Teknik Informatika (Jutif) Vol. 6 No. 5 (2025): JUTIF Volume 6, Number 5, October 2025
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2025.6.5.4748

Abstract

Early cancer detection remains a significant challenge in clinical practice due to the limitations of conventional diagnostic techniques, which are often time-consuming and error-prone. This systematic review evaluates the efficacy of deep learning (DL) architectures and datasets for improving cancer detection and diagnosis. We performed a structured analysis of 40 high-impact research papers published in Q1 journals between 2014 and 2025, considering DL model performance, datasets, and clinical relevance. Results indicate that fundamental architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) consistently report high diagnostic accuracy (>90%) on radiology- and histopathology-based imaging datasets. Conversely, DL performance on non-imaging clinical data, including electronic medical records (EMRs), is more varied. Evaluation metrics such as AUC and Dice show the trade-off between classification precision and segmentation accuracy. Despite their potential, DL models face significant limitations in generalization, interpretability, and integration within real-world clinical workflows. This review highlights the need for standardized evaluation, ethical model deployment, and multi-modal data fusion to facilitate wider and more equitable clinical uptake of DL in cancer diagnostics.
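The Dice score mentioned in the abstract measures overlap between a predicted segmentation mask and the ground truth. A minimal sketch of its computation is below; the example masks are illustrative and not drawn from the reviewed studies:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: 2 shared foreground pixels, sizes 3 and 2 -> Dice = 4/5
mask_a = np.array([[1, 1, 0], [0, 1, 0]])
mask_b = np.array([[1, 0, 0], [0, 1, 0]])
print(round(dice_coefficient(mask_a, mask_b), 3))  # → 0.8
```

Unlike AUC, which summarizes ranking quality of classification scores, Dice is computed per image on hard masks, which is why the two metrics can trade off against each other.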
Robust Few-Shot Biological Pathology Classification via Optimized Contrastive MobileNetV2: A Transferable Model for Low-Resource Medical Imaging
Prawira, Nurul Adi; Firmansyah, Muhammad; Marutho, Dhendra; Ouhab, Achraf; Ilham, Ahmad
Journal of Intelligent Computing & Health Informatics Vol 7, No 1 (2026): March
Publisher : Universitas Muhammadiyah Semarang Press

DOI: 10.26714/jichi.v7i1.20179

Abstract

Artificial intelligence has revolutionized computational diagnostics; however, deploying reliable intelligent systems in extremely low-resource environments remains a critical structural challenge in health informatics. Conventional deep learning architectures, such as standard convolutional neural networks (CNNs), are inherently data-hungry, making them prone to severe overfitting and catastrophic generalization failures when applied to rare biological pathologies. To overcome this limitation, we propose an Optimized Contrastive MobileNetV2 architecture embedded within a Few-Shot Learning (FSL) framework. By reshaping the latent space representation with a contrastive loss function, the proposed model learns discriminative metric distances rather than relying on massive raw feature memorization. To rigorously validate the algorithm, we use a highly constrained dataset of merely 120 biological pathogen samples as a cross-domain proxy testbed, simulating the extreme visual complexity and data scarcity typical of rare medical diagnostic scenarios. Extensive episodic evaluations demonstrate that the proposed methodology significantly outperforms conventional baselines. Under a 10-shot learning paradigm, the contrastive architecture achieved a macro-averaged accuracy of 89.2% and an F1-score of 89.3%, remaining statistically robust against stochastic variations (p < 0.001). Furthermore, the integration of depthwise separable convolutions restricts the model complexity to approximately 3.4 × 10^6 parameters. Crucially, empirical evaluations confirm that this framework occupies merely 13.5 MB of storage and achieves an ultra-low inference latency of 12.5 ms per image. Ultimately, this study establishes a highly transferable, computationally efficient model ready for integration into intelligent clinical decision support systems and remote edge-computing health architectures.
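The contrastive objective described in the abstract pulls same-class embeddings together and pushes different-class embeddings apart. A minimal sketch of a margin-based pairwise contrastive loss is below; the paper's exact formulation and hyperparameters are not given here, so this is only an illustrative form:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Margin-based pairwise contrastive loss on two embedding vectors.
    Same-class pairs are penalized by squared distance; different-class
    pairs are penalized only when closer than the margin."""
    d = np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Identical same-class embeddings incur zero loss
print(contrastive_loss(np.zeros(4), np.zeros(4), same_class=True))  # → 0.0
```

Training on such pairwise distances is what lets a few-shot classifier compare a query image to a handful of labeled support examples instead of memorizing per-class features.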