Financial and tax fraud remains a major challenge in emerging economies where digital transformation outpaces regulatory oversight. This study presents an explainable hybrid machine learning framework designed to enhance fraud analytics and tax governance in Indonesia. The model integrates unsupervised anomaly detection (Isolation Forest, DBSCAN) and supervised learning (Random Forest, Logistic Regression) to identify irregularities in financial transactions. Model explainability is achieved through SHAP (SHapley Additive exPlanations), enabling transparency in high-risk classifications. The proposed Streamlit-based dashboard supports real-time data visualization and interactive model evaluation by policymakers. Experimental results demonstrate 99% overall accuracy with strong interpretability, underscoring the framework's value in bridging machine learning and public-sector decision-making. The findings contribute to the growing field of explainable AI for digital governance, offering a scalable and ethical solution to fraud detection in developing economies.
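To make the hybrid architecture concrete, the sketch below chains one algorithm from each stage named in the abstract: an Isolation Forest anomaly score is appended as a feature before a Random Forest classifier, and SHAP explains the resulting fraud predictions. This is a minimal illustrative sketch, not the study's actual pipeline; the file name transactions.csv, the is_fraud label column, and all hyperparameters are hypothetical placeholders.

```python
# Illustrative two-stage pipeline: unsupervised anomaly scoring feeds a
# supervised classifier, and SHAP attributes each high-risk prediction
# to individual features. All data names and settings are placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical transaction data: numeric features plus a binary fraud label.
df = pd.read_csv("transactions.csv")          # placeholder path
X = df.drop(columns=["is_fraud"])
y = df["is_fraud"]

# Stage 1: unsupervised anomaly scoring over all transactions.
iso = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
# score_samples is lower for more abnormal points, so negate it so that
# higher values mean "more anomalous".
X = X.assign(anomaly_score=-iso.fit(X).score_samples(X))

# Stage 2: supervised classification on the augmented feature set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)

# Stage 3: SHAP values show which features drive the fraud-class predictions.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_test)
# Depending on the shap version, binary classifiers return either a list of
# per-class arrays or a single 3-D array; select the fraud class (index 1).
sv_fraud = sv[..., 1] if isinstance(sv, np.ndarray) and sv.ndim == 3 else sv[1]
shap.summary_plot(sv_fraud, X_test)
```

In a Streamlit dashboard such as the one described above, the summary plot (or per-transaction force plots) would be rendered interactively so that policymakers can inspect why a given transaction was flagged as high risk.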