Research Background: Dementia is a growing global public health challenge driven by population ageing and increased life expectancy. Clinical Decision Support Systems (CDSS) have emerged as important tools to assist clinicians in early diagnosis, risk stratification, prognosis estimation, and personalized care planning in dementia management. Recent advances in predictive analytics and artificial intelligence (AI), particularly machine learning and deep learning models, have substantially enhanced the analytical capabilities of CDSS. However, the integration of these technologies into clinical practice remains limited owing to concerns about interpretability, generalizability, and ethical accountability. This study reviews the development of CDSS for dementia management that integrate predictive analytics with Explainable Artificial Intelligence (XAI).

Methods: A systematic literature review was conducted of peer-reviewed publications from major academic databases published between 2017 and 2025. The analysis focuses on algorithmic approaches, data sources, validation strategies, and explainability techniques applied in contemporary dementia CDSS.

Key Findings: The findings indicate that predictive models achieve high accuracy in detecting early cognitive impairment and predicting disease progression. Nevertheless, their clinical implementation is often constrained by the "black-box" nature of many AI models and by limited external validation. Explainable AI methods such as SHAP, LIME, and attention-based networks are increasingly used to improve transparency and clinician trust.

Contribution: This study contributes an integrative perspective that emphasizes balancing predictive performance with interpretability, ethical governance, and clinical usability.

Conclusion: Integrating predictive analytics with XAI is essential for developing trustworthy and clinically applicable CDSS in dementia care.
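To illustrate the kind of attribution that SHAP-style methods provide, the sketch below computes exact Shapley values for a toy linear dementia-risk score; for a linear model with independent features, the Shapley value of feature i reduces to w_i (x_i − E[x_i]). All feature names, weights, and values here are hypothetical illustrations, not clinically validated parameters.

```python
# Toy SHAP-style attribution for a linear risk model (hypothetical weights/features).
# For f(x) = b + sum_i w_i * x_i with independent features, the exact Shapley
# value of feature i is w_i * (x_i - mean_i), and attributions sum to
# f(x) - f(baseline) ("local accuracy").

FEATURES = ["age", "mmse_score", "hippocampal_volume"]  # assumed inputs
WEIGHTS = {"age": 0.02, "mmse_score": -0.05, "hippocampal_volume": -0.8}
BASELINE_MEANS = {"age": 72.0, "mmse_score": 26.0, "hippocampal_volume": 3.1}
BIAS = 0.5

def predict(x):
    """Linear dementia-risk score (toy model for illustration only)."""
    return BIAS + sum(WEIGHTS[f] * x[f] for f in FEATURES)

def shap_values_linear(x):
    """Exact Shapley attributions for the linear model above."""
    return {f: WEIGHTS[f] * (x[f] - BASELINE_MEANS[f]) for f in FEATURES}

patient = {"age": 80.0, "mmse_score": 22.0, "hippocampal_volume": 2.7}
phi = shap_values_linear(patient)
base_value = predict(BASELINE_MEANS)

# Local accuracy: baseline prediction plus attributions equals the prediction.
assert abs(base_value + sum(phi.values()) - predict(patient)) < 1e-9
```

In practice, libraries such as `shap` estimate these attributions for non-linear models as well; the value for clinicians is that each prediction comes with a per-feature contribution that can be inspected and discussed.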
Copyright © 2026