Found 3 Documents
Optimizing diabetes prediction using machine learning: a random forest approach
Maenge, Aone; Sigwele, Tshiamo; Bhende, Cliford; Mokgethi, Chandapiwa; Kuthadi, Venumadhav; Omogbehin, Blessing
International Journal of Advances in Applied Sciences Vol 14, No 2: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijaas.v14.i2.pp454-468

Abstract

Diabetes, a leading cause of global mortality, is responsible for millions of deaths annually due to complications such as heart disease, kidney failure, and stroke. Projections indicate that 700 million people will be affected by diabetes by 2045, placing immense strain on global healthcare systems. Early detection and accurate prediction of diabetes are essential in mitigating complications and reducing mortality rates. However, existing diabetes prediction frameworks face challenges, including imbalanced datasets, overfitting, inadequate feature selection, insufficient hyperparameter tuning, and a lack of comprehensive evaluation metrics. To address these challenges, the proposed random forest diabetes prediction (Random DIP) framework integrates advanced techniques such as hyperparameter tuning, balanced training, and optimized feature selection using randomized search cross-validation (RandomizedSearchCV). This framework significantly improves predictive accuracy and ensures reliable clinical applicability. Random DIP achieves an accuracy of 99.4% (outperforming related works by 7.23%), an area under the curve (AUC) of 99.6% (surpassing comparable frameworks by 7.32%), a recall of 100% (exceeding existing models by 9.65%), a precision of 97.8%, and an F1-score of 98.9% (an improvement of 6.69%). These metrics demonstrate Random DIP's excellent capacity to identify diabetes cases while minimizing false negatives (FNs) and providing reliable predictions for clinical use. Future work will focus on integrating real-time clinical data and expanding the framework to accommodate multi-disease prediction for broader healthcare applications.
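The tuning step this abstract describes — a random forest whose hyperparameters are searched with RandomizedSearchCV under balanced class weighting — can be sketched with scikit-learn. This is a minimal illustration only: the synthetic dataset, parameter ranges, and scoring choice below are assumptions, not the paper's actual Random DIP configuration.

```python
# Sketch: tuning a class-balanced RandomForestClassifier with
# RandomizedSearchCV. Dataset and search space are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic, mildly imbalanced binary dataset standing in for a
# diabetes dataset (hypothetical features, not the paper's data).
X, y = make_classification(
    n_samples=1000, n_features=8, weights=[0.65, 0.35], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Candidate hyperparameter distributions to sample from.
param_dist = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
    "max_features": ["sqrt", "log2"],
}

# class_weight="balanced" counteracts the class imbalance during training.
search = RandomizedSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=0),
    param_distributions=param_dist,
    n_iter=10,
    cv=5,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X_tr, y_tr)

acc = accuracy_score(y_te, search.predict(X_te))
print(search.best_params_, round(acc, 3))
```

Scoring on ROC AUC rather than plain accuracy during the search is one common choice for imbalanced data; the paper's own scoring criterion is not stated in the abstract.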
A review on ischemic heart disease prediction frameworks using machine learning
Bhende, Kabo Clifford; Sigwele, Tshiamo; Mokgethi, Chandapiwa; Maenge, Aone; Kuthadi, Venu Madhav
International Journal of Advances in Applied Sciences Vol 14, No 2: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijaas.v14.i2.pp361-372

Abstract

Ischemic heart disease (IHD) is a leading cause of mortality worldwide, calling for advanced predictive models that enable timely intervention. Current literature reviews on machine learning (ML)-based IHD prediction frameworks often focus on predictive accuracy but lack depth in areas like dataset diversity, model interpretability, and privacy considerations. Existing IHD prediction frameworks face limitations, including reliance on small, homogenous datasets, limited critical analysis, and issues with model transparency, reducing their clinical utility. This review addresses these gaps through a systematic, comparative analysis of popular ML models, such as random forest (RF) and support vector machines (SVM), noting their strengths and limitations. Key contributions include a qualitative examination of prevalent tools, datasets, and evaluation metrics; identification of gaps in dataset diversity and interpretability; and recommendations for improving model transparency and data privacy. Major findings reveal a trend toward ensemble models for accuracy but highlight the need for explainable artificial intelligence (AI) to support clinical decisions. Future directions include using federated learning to enhance data privacy, integrating unstructured data for comprehensive prediction, and advancing explainable AI to build trust among healthcare providers. By addressing these areas, this review aims to guide future research toward developing robust, transparent ML frameworks that can be more effectively deployed in clinical settings.
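The kind of head-to-head RF vs. SVM comparison this review performs qualitatively can be sketched quantitatively with cross-validation in scikit-learn. The synthetic dataset below is an assumption standing in for an IHD cohort; the models and 5-fold setup are generic, not the specific configurations of the reviewed studies.

```python
# Sketch: comparing random forest and SVM classifiers with 5-fold
# cross-validation on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical tabular features (not a real IHD dataset).
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    # SVMs are scale-sensitive, so standardize features first.
    "SVM": make_pipeline(StandardScaler(), SVC(random_state=0)),
}

results = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    results[name] = scores.mean()
    print(name, round(results[name], 3))
```

Reporting a mean cross-validated score rather than a single train/test split is one way to make such comparisons less sensitive to the small, homogeneous datasets the review flags.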
Securing cloud data with machine learning: trends, gaps, and performance metrics
Ifeoluwa Omogbehin, Blessing; Sigwele, Tshiamo; Semong, Thabo; Maenge, Aone; Nedev, Zhivko; Hlomani, Hlomani
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 15, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp44-55

Abstract

The increasing reliance on cloud computing has raised significant concerns about the security of data access control, as traditional models are insufficient in managing the dynamic and large-scale nature of cloud environments. This review evaluates machine learning (ML)-based approaches to improve cloud data security, with a particular focus on advancements in anomaly detection and insider threat prevention. Deep learning (DL) models emerge as the most dominant, utilized by 47% of the studies due to their superior ability to process large datasets and adapt to real-time environments. Random forest models are also prominent, being adopted in 20% of the studies for their strong performance in anomaly detection and categorization. TensorFlow stands out as the most widely used tool, featuring in nearly 37% of the reviewed works, while datasets like Amazon Access and computer emergency response team (CERT) are employed in 20% and 13% of the research, respectively. Anomaly detection and prevention are critical priorities, accounting for 41.2% of the research objectives. However, gaps remain, with 21.7% of the studies noting adversarial vulnerabilities and 13% identifying limitations in dataset diversity. The review recommends further development of ML models to address these challenges, expanding dataset diversity, and improving real-time monitoring techniques to enhance cloud data security.
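Anomaly detection, the dominant objective in the reviewed studies, can be sketched with scikit-learn's IsolationForest, a tree-ensemble anomaly detector used here as a stand-in for the ensemble approaches the review discusses. The synthetic "access-log" features and the contamination setting are illustrative assumptions.

```python
# Sketch: flagging anomalous access records with IsolationForest.
# The feature matrix is synthetic, standing in for numeric features
# derived from cloud access logs (e.g. request rate, resource count).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # routine access patterns
outliers = rng.uniform(low=6.0, high=8.0, size=(10, 4))  # anomalous accesses
X = np.vstack([normal, outliers])

# contamination = expected fraction of anomalies in the data.
detector = IsolationForest(contamination=10 / 510, random_state=0)
labels = detector.fit_predict(X)  # -1 = anomaly, 1 = normal

n_flagged = int((labels == -1).sum())
print(n_flagged)
```

In a deployed setting the detector would be fit on historical logs and scored on new records; here everything is fit and scored in one batch purely for illustration.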