Found 5 Documents

Diabetes Mellitus Disease Analysis using Support Vector Machines and K-Nearest Neighbor Methods Nusantara Habibi, Ahmad Rizky; Sufiyandi, Ilham; Murni, Murni; Jayed, A K M; Nakib, Arman Mohammad; Syukur, Abdul; Furizal, Furizal
Indonesian Journal of Modern Science and Technology Vol. 1 No. 1 (2025): January
Publisher : CV. Abhinaya Indo Group

DOI: 10.64021/ijmst.1.1.22-27.2025

Abstract

Diabetes Mellitus (DM) is a chronic disease characterized by high blood sugar levels that can cause serious complications if left untreated. This study analyzes the effectiveness of the Support Vector Machines (SVM) and K-Nearest Neighbor (KNN) methods in classifying diabetes mellitus patient data. The methodology includes collecting a diabetes dataset, preprocessing the data, and applying the SVM and KNN algorithms for classification. The performance of both methods is assessed using evaluation metrics such as accuracy, precision, recall, and F1-score. The experimental results show that SVM classifies the diabetes data better than KNN, with higher accuracy and a lower error rate, indicating that SVM is more suitable for early detection of diabetes mellitus on the dataset used in this study.
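As a concrete illustration of the evaluation metrics named in the abstract, here is a minimal sketch (toy labels, not the study's dataset) that computes accuracy, precision, recall, and F1-score for a binary diabetic/non-diabetic prediction:

```python
# Toy binary labels (1 = diabetic-positive); hypothetical data, not the study's.
def classification_metrics(y_true, y_pred):
    # Tally the four cells of the confusion matrix.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
```

With these toy labels (3 true positives, 1 false positive, 1 false negative, 3 true negatives) all four metrics come out to 0.75, which happens whenever false positives and false negatives are balanced.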
Understanding Time Series Forecasting: A Fundamental Study Furizal, Furizal; Ma’arif, Alfian; Kariyamin, Kariyamin; Firdaus, Asno Azzawagama; Wijaya, Setiawan Ardi; Nakib, Arman Mohammad; Ningrum, Ariska Fitriyana
Buletin Ilmiah Sarjana Teknik Elektro Vol. 7 No. 3 (2025): September
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/biste.v7i3.13318

Abstract

Time series forecasting plays a vital role in economics, finance, engineering, and many other fields because of its predictive power based on past data. Understanding its basic principles enables wiser decisions and better planning. Despite this importance, some researchers and practitioners struggle to apply time series forecasting techniques effectively, especially with complex data and when selecting a method for a particular problem. This study explains time series forecasting comprehensively yet simply, integrating its main stages, components, preprocessing steps, popular forecasting models, and validation methods so that newcomers to the field can follow it. It covers the key components of time series data, namely trend, seasonality, cyclical components, and irregular components, as well as the importance of proper data preprocessing, model selection, and validation for achieving good forecasting accuracy. The study offers useful material for both new and experienced researchers by providing guidance on forecasting techniques and approaches that help improve the quality of decision making.
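The trend/seasonal/irregular decomposition described in the abstract can be sketched on a toy additive series (an assumed period-3 seasonal pattern on a linear trend, not data from the study): estimate the trend with a centered moving average, then subtract it to expose the seasonal component.

```python
import statistics

def moving_average(series, window):
    # Centered moving average; endpoints without a full window are None.
    half = window // 2
    return [statistics.mean(series[i - half:i + half + 1])
            if half <= i < len(series) - half else None
            for i in range(len(series))]

# Toy additive series: linear trend plus a zero-sum period-3 seasonal pattern.
season = [2, -1, -1]
series = [i + season[i % 3] for i in range(12)]

trend = moving_average(series, 3)                    # trend estimate
detrended = [s - t for s, t in zip(series, trend)    # remove the trend;
             if t is not None]                       # the seasonal part remains
```

Because the window length matches the seasonal period and the pattern sums to zero, the moving average here recovers the linear trend exactly and the detrended residual repeats the seasonal pattern.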
Semi-Supervised Learning for Retinal Disease Detection: A BIOMISA Study Nakib, Arman Mohammad; Haque, Shahed Jahidul
Scientific Journal of Engineering Research Vol. 1 No. 2 (2025): April
Publisher : PT. Teknologi Futuristik Indonesia

DOI: 10.64539/sjer.v1i2.2025.14

Abstract

Timely and accurate identification of Age-related Macular Degeneration (AMD), Central Serous Retinopathy (CSR), and Macular Edema (ME) is crucial for preserving vision, and OCT imaging improves detection when paired with automated models. Most studies in this domain rely on supervised learning, which requires large labeled datasets. That approach faces three key obstacles: limited labeling quality in medical data, the high cost of expert annotation, and imbalanced distributions of medical conditions, all of which restrict practical deployment. This study evaluates how semi-supervised learning (SSL) techniques analyze retinal disease images from the BIOMISA Macula database, producing diagnostic output for AMD, CSR, and ME as well as Normal retinas. Unlike fully supervised methods, SSL exploits both labeled and unlabeled data, which reduces manual annotation effort while improving generalization. In the experiments, SSL delivers better results than traditional supervised learning through its ability to handle class imbalance and large medical image collections. The findings establish SSL as an attractive option in medical settings with scarce labeled data, and the study offers insights into applying SSL to retinal disease diagnosis in clinical environments. Future work will design improved deep learning algorithms for greater scalability and more cost-effective ophthalmic diagnostics.
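Self-training is one common semi-supervised strategy; the loop below sketches it on hypothetical 1-D data with a simple threshold model (not the paper's actual model): train on labeled points, pseudo-label unlabeled points the model is confident about, and retrain on the enlarged set.

```python
# Hypothetical 1-D self-training sketch: the "model" is a midpoint threshold
# between the two class means; confidence is distance from that threshold.
def fit_threshold(xs, ys):
    mean0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    mean1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (mean0 + mean1) / 2

labeled_x = [1.0, 2.0, 8.0, 9.0]
labeled_y = [0, 0, 1, 1]
unlabeled_x = [1.5, 8.5, 5.2]

threshold = fit_threshold(labeled_x, labeled_y)   # fit on labeled data only
margin = 2.0                                      # confidence band
for x in unlabeled_x:
    if abs(x - threshold) >= margin:              # pseudo-label only confident points
        labeled_x.append(x)
        labeled_y.append(1 if x > threshold else 0)
threshold = fit_threshold(labeled_x, labeled_y)   # refit on labeled + pseudo-labeled
```

The ambiguous point near the boundary (5.2) is left unlabeled, which is what keeps self-training from amplifying its own mistakes.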
Effectiveness of Fourier, Wiener, Bilateral, and CLAHE Denoising Methods for CT Scan Image Noise Reduction Kobra, Mst Jannatul; Nakib, Arman Mohammad; Mweetwa, Peter; Rahman, Md Owahedur
Scientific Journal of Engineering Research Vol. 1 No. 3 (2025): July
Publisher : PT. Teknologi Futuristik Indonesia

DOI: 10.64539/sjer.v1i3.2025.27

Abstract

Effective noise reduction in CT scan images remains crucial for accurate diagnosis and sound clinical decisions. This research quantitatively analyzes the effectiveness of four popular noise reduction methods, Fourier-based denoising, Wiener filtering, bilateral filtering, and Contrast Limited Adaptive Histogram Equalization (CLAHE), applied to more than 500 CT scan images. The methods were assessed using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), with Mean Squared Error (MSE) as an additional metric. Bilateral filtering emerged as the best technique, with a PSNR of 50.37 dB, an SSIM of 0.9940, and an MSE of 0.5967. Fourier-based denoising succeeded in removing high-frequency noise but yielded a PSNR of 25.89 dB, an SSIM of 0.8138, and an MSE of 167.4976, indicating loss of crucial image information. Wiener filtering offered balanced performance at 40.87 dB PSNR, 0.9809 SSIM, and 5.3270 MSE, outperforming Fourier denoising in SSIM while showing a higher MSE than bilateral filtering. CLAHE produced the poorest denoising outcomes, with the lowest PSNR of 21.51 dB, an SSIM of 0.5707, and the highest MSE of 459.1894, while also creating undesirable artifacts. This research stands out by evaluating all four denoising techniques on a large dataset, enabling a more precise analysis than prior work. The results identify bilateral filtering as the most reliable technique for CT scan image noise reduction while preserving picture quality, making it a suitable choice for clinical use. The study contributes to medical imaging research on quality enhancement, directly benefiting clinical diagnostics and therapeutic planning.
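For reference, the two simplest metrics above, MSE and PSNR, can be computed in a few lines; the sketch below uses toy 2×2 8-bit images, not the paper's CT scans (SSIM is omitted since it requires windowed local statistics).

```python
import math

def mse(a, b):
    # Mean squared error over two equally-sized images (nested lists).
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, max_val=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * math.log10(max_val ** 2 / err)

clean = [[100, 100], [100, 100]]   # toy reference image
noisy = [[102, 98], [101, 99]]     # toy degraded image
```

Lower MSE means less pixel-wise error, and because PSNR is inversely tied to MSE on a log scale, the two always rank methods consistently; SSIM can disagree with both, which is why the study reports all three.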
Hybrid K-means, Random Forest, and Simulated Annealing for Optimizing Underwater Image Segmentation Kobra, Mst Jannatul; Rahman, Md Owahedur; Nakib, Arman Mohammad
Scientific Journal of Engineering Research Vol. 1 No. 4 (2025): October (Article in Process)
Publisher : PT. Teknologi Futuristik Indonesia

DOI: 10.64539/sjer.v1i4.2025.46

Abstract

Underwater image segmentation is particularly difficult because the data collected by underwater sensors and cameras are highly complex and voluminous, with poor visibility, distorted color, and overlapping features. Existing solutions such as K-means clustering and Random Forest classification either cannot partition complex underwater images with high accuracy or cannot scale to large datasets, and the possibility of dynamically optimizing the number of clusters has not been fully explored. To fill these gaps, this paper proposes a hybrid solution that combines K-means clustering, Random Forest classification, and Simulated Annealing optimization into a complete end-to-end system that maximizes segmentation efficiency and accuracy. K-means clustering first partitions images by pixel intensity, Random Forest refines the segmentation using features such as texture, color, and shape, and Simulated Annealing dynamically determines the number of clusters that minimizes segmentation error. The proposed method achieved a segmentation accuracy of 95%, 30 percentage points above the 65% accuracy of the baseline K-means segmentation, with an optimal cluster number of 10 and a mean error of 7839.22. This hybrid system offers a robust, scalable approach to large-scale underwater image processing, with applications in marine biology, environmental research, and autonomous underwater exploration.
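The first stage of the pipeline described above, intensity-based K-means, can be sketched on hypothetical 1-D pixel data (the Random Forest refinement and the Simulated Annealing search over cluster counts are omitted):

```python
# Toy 1-D K-means on pixel intensities; hypothetical values, not the paper's images.
def kmeans_1d(values, k, iters=20):
    # Initialize centers spread across the sorted value range (requires k >= 2).
    svals = sorted(values)
    centers = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest center.
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Recompute each center as the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

pixels = [10, 12, 11, 200, 210, 205, 90, 95]  # dark, bright, and mid regions
centers = sorted(kmeans_1d(pixels, 3))
```

On this toy data the three centers converge to the dark, mid, and bright intensity groups; in the full pipeline, Simulated Annealing would repeat this with different values of k and keep the one with the lowest segmentation error.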