Found 3 Documents

Detection of location-specific intra-cranial brain tumors Usharani, Shola; Lakshmanan, Rama Parvathy; Rajakumaran, Gayathri; Basu, Aritra; Nandam, Anjana Devi; Depuru, Sivakumar
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 1: February 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i1.pp428-438

Abstract

Mutations or abnormalities in genes can occasionally cause cells to grow uncontrollably, resulting in a tumor, which is very dangerous. These are among the most prevalent causes of cancer, arising from significant damage to genes in a specific cell during a person's lifetime. Brain tumor incidence is rising rapidly: cases in the US are projected to grow from 27,000 in 2020 to 31,000 in 2023, an annual growth rate of about 1.5%, with many cases detected only in a late phase. There is therefore an urgent need for a tool that can detect tumors rapidly and efficiently. While most research papers on brain tumor detection focus on detection and classification of tumors, the presented research aims first to detect the tumor in pre-labelled images using machine-learning object-detection models. After successful detection, the study team plans to determine the tumor's precise coordinates and display the tumor and its location in the image.
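The localization step described above, turning a detector's output into explicit tumor coordinates, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the detection model emits a per-pixel probability map, and the hypothetical `locate_tumor` helper thresholds it to recover a bounding box and centre point.

```python
import numpy as np

def locate_tumor(prob_map: np.ndarray, threshold: float = 0.5):
    """Return the bounding box (x_min, y_min, x_max, y_max) and centre of
    the region where the detector's probability exceeds `threshold`,
    or None if no tumor-like region is found."""
    ys, xs = np.where(prob_map >= threshold)
    if xs.size == 0:
        return None
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    centre = ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
    return box, centre

# Toy probability map with one high-confidence "tumor" patch.
pm = np.zeros((100, 100))
pm[40:60, 30:50] = 0.9
box, centre = locate_tumor(pm)
```

The returned box and centre are what would be drawn onto the picture to display the tumor and its location.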
A Novel Deep Learning Framework for Enhanced Glaucoma Detection Using Attention-Gated U-Net, Deep Wavelet Scattering, and Vision Transformers V, Krishnamoorthy; S, Sivanantham; V, Akshaya; S, Nivedha; Depuru, Sivakumar; M, Manikandan
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 2 (2025): April
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i2.706

Abstract

Globally, glaucoma is a major cause of permanent blindness, and preserving eyesight depends on early detection. This work offers a novel deep-learning framework for enhanced glaucoma prediction: a denoising generative adversarial network preprocesses the input image; an Attention-Gated U-Net with dilated convolutions then segments the optic cup and optic disc; a Deep Wavelet Scattering Network (DWSN) extracts features; and a Vision Transformer (ViT) performs the final glaucoma classification. The attention-gated U-Net with dilated convolutions improves the accuracy of optic disc and cup boundaries by 7% compared to conventional U-Net methods. The DWSN achieves a 5% improvement in feature discrimination over conventional CNNs by capturing multiscale texture and structural information. Lastly, the ViT, based on transfer learning, is used for classification, achieving 94.6% accuracy, 93.8% sensitivity, and 95.2% specificity. The suggested approach outperformed CNN-based models by about 4% on all criteria, and the system achieved an F1 score of 0.95 and an AUC (area under the curve) of 0.96 when tested on publicly accessible glaucoma datasets.
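One common downstream use of the optic cup and disc masks that the segmentation stage produces is the vertical cup-to-disc ratio (CDR), a standard clinical indicator of glaucoma. The sketch below is an assumption-laden illustration of that step, not the paper's classifier input; the `vertical_cdr` helper and the binary-mask representation are hypothetical.

```python
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio computed from binary segmentation masks:
    the ratio of the cup's vertical extent to the disc's vertical extent."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else 0.0

# Toy masks: disc spans 20 rows, cup spans 10 rows -> CDR 0.5.
disc = np.zeros((64, 64), dtype=bool)
disc[10:30, 20:44] = True
cup = np.zeros((64, 64), dtype=bool)
cup[15:25, 26:38] = True
cdr = vertical_cdr(cup, disc)
```

A higher CDR (often above roughly 0.6) is typically considered suspicious for glaucoma, which is why accurate cup/disc boundaries matter for the later classification stage.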
AMIN-CNN: Enhancing Brain Tumor Segmentation through Modality-Aware Normalization and Deep Learning Depuru, Sivakumar; Kumar, M. Sunil
Journal of Electronics, Electromedical Engineering, and Medical Informatics Vol 7 No 3 (2025): July
Publisher : Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA

DOI: 10.35882/jeeemi.v7i3.934

Abstract

Accurate and reliable brain tumor segmentation is essential for early diagnosis and treatment, which helps to increase patient survival rates. However, the inherent variability in tumor shape, size, and intensity across different MRI modalities makes automated segmentation a challenging task. Traditional deep-learning approaches, such as U-Net and its variants, provide robust results but often struggle with modality-specific inconsistencies and generalization across diverse datasets. This research presents AMIN-CNN, an adaptive multimodal invariant normalization scheme incorporated into a novel 3D convolutional neural network, to improve brain tumor segmentation across various MRI modalities. Through adaptive normalization, AMIN-CNN accommodates modality-specific differences more effectively than a basic CNN or U-Net, leading to improved integration of multimodal MRI input data. The model maintains strong learning performance with minimal overfitting beyond epoch 50, which regularization techniques can further reduce. AMIN-CNN stands out with the best Dice scores (about 0.92 WT, 0.87 ET, and 0.89 TC), precision (0.3), and an accuracy of 93.2%, and it decreases false positives. Its lower sensitivity reflects that AMIN-CNN finds smaller but more correct tumor regions, making it more precise. Compared with traditional methods, AMIN-CNN demonstrates competitive or better segmentation results while maintaining computational efficiency, and it shows strong robustness, with a Hausdorff distance of 20 compared to 100 for other models. According to these test results, AMIN-CNN is the most effective and clinically accurate method among the compared architectures, mainly due to its high precision and its ability to measure tumors accurately.
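The adaptive normalization idea described above, bringing each MRI modality onto a comparable intensity scale before the volumes are stacked as network channels, can be sketched as a per-modality z-score over foreground voxels. This is a simplified stand-in under stated assumptions, not AMIN-CNN's actual normalization; the `modality_aware_normalize` helper and the dict-of-volumes layout are hypothetical.

```python
import numpy as np

def modality_aware_normalize(volumes: dict) -> dict:
    """Normalize each MRI modality independently (zero mean, unit variance
    over non-background voxels), so intensity ranges are comparable across
    e.g. T1, T1ce, T2, and FLAIR before stacking them as CNN channels."""
    out = {}
    for name, vol in volumes.items():
        vol = vol.astype(np.float64)
        fg = vol > 0                     # treat zero voxels as background
        mu, sigma = vol[fg].mean(), vol[fg].std()
        norm = np.zeros_like(vol)
        norm[fg] = (vol[fg] - mu) / (sigma + 1e-8)
        out[name] = norm
    return out

# Two toy modalities on very different intensity scales.
rng = np.random.default_rng(0)
vols = {"T1": rng.uniform(50, 200, (8, 8, 8)),
        "FLAIR": rng.uniform(5, 20, (8, 8, 8))}
normed = modality_aware_normalize(vols)
```

After this step both modalities have roughly zero mean and unit variance over the brain region, so neither dominates the network's input simply because of its acquisition scale.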