Winny, Purbaratri
Unknown Affiliation

Published: 1 Document
Articles


Transparent AI for Welfare Programs: Explainable Fraud Detection Using Publicly Available Administrative Data Sutrisno, Sutrisno; Winny, Purbaratri
Global Science: Journal of Information Technology and Computer Science, Vol. 2 No. 1 (March 2026)
Publisher : International Forum of Researchers and Lecturers

DOI: 10.70062/globalscience.v2i1.184

Abstract

This study examines the application of Transparent Artificial Intelligence (AI) for fraud detection in public welfare programs using publicly available administrative data. Persistent challenges in welfare governance, such as misallocation, fraud, and data inaccuracy, necessitate analytical frameworks that are both effective and explainable. The research aims to design and evaluate an interpretable anomaly detection system capable of identifying irregularities in welfare distribution while maintaining transparency and accountability. Methodologically, the study employs two unsupervised models, Isolation Forest (IF) and Local Outlier Factor (LOF), to detect anomalies in sub-district-level welfare data, incorporating features such as population size, number of beneficiaries, and coverage ratio. An Explainable AI (XAI) framework integrating surrogate Random Forests, Permutation Feature Importance (PFI), and local linear surrogates (LIME-like) is applied to ensure interpretability of both global and local model behaviors. Findings reveal that receivers per 1000 population and percentage coverage are the dominant determinants of anomaly scores. Fifteen administrative units were flagged for potential inconsistencies, suggesting over- or under-reporting of beneficiaries. Cross-validation between the IF and LOF models confirmed consistency in identifying anomalous regions. The integrated XAI explanations enhance transparency, enabling policymakers and auditors to trace the rationale behind detected anomalies. In conclusion, the proposed Transparent AI framework demonstrates that combining anomaly detection with interpretability tools can strengthen accountability and fairness in welfare administration. It offers a reproducible, ethical, and data-driven approach to social program monitoring, reinforcing public trust and supporting responsible AI governance.
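The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sub-district data are synthetic, the feature names (population, beneficiaries, receivers per 1000, percentage coverage) follow the abstract, and the explanation step uses a surrogate Random Forest with permutation feature importance as a stand-in for the paper's full XAI framework.

```python
# Hypothetical sketch: Isolation Forest + LOF anomaly detection on
# sub-district welfare features, cross-checked against each other, then
# explained globally via a surrogate Random Forest with permutation
# feature importance. All data below are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestRegressor
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 200  # number of synthetic sub-districts
population = rng.integers(5_000, 50_000, n)
receivers = (population * rng.uniform(0.02, 0.10, n)).astype(int)
receivers[:5] = (population[:5] * 0.4).astype(int)  # inject over-reporting

names = ["population", "receivers", "receivers_per_1000", "pct_coverage"]
X = np.column_stack([
    population,
    receivers,
    receivers / population * 1000,  # receivers per 1000 population
    receivers / population * 100,   # percentage coverage
])

# Detector 1: Isolation Forest (higher score = more anomalous)
iso = IsolationForest(random_state=0).fit(X)
iso_score = -iso.score_samples(X)

# Detector 2: Local Outlier Factor on standardized features
Xs = StandardScaler().fit_transform(X)
lof = LocalOutlierFactor(n_neighbors=20).fit(Xs)
lof_score = -lof.negative_outlier_factor_

# Cross-check: do both detectors flag the same top-15 units?
k = 15
iso_top = set(np.argsort(iso_score)[-k:])
lof_top = set(np.argsort(lof_score)[-k:])
overlap = iso_top & lof_top

# Global explanation: a surrogate Random Forest regresses the IF score,
# then permutation importance ranks which features drive it.
surrogate = RandomForestRegressor(random_state=0).fit(X, iso_score)
pfi = permutation_importance(surrogate, X, iso_score, random_state=0)
for i in np.argsort(pfi.importances_mean)[::-1]:
    print(f"{names[i]:<20} {pfi.importances_mean[i]:.3f}")
```

On data like these, the ratio features (receivers per 1000 and percentage coverage) tend to dominate the permutation ranking, mirroring the abstract's finding; the overlap set plays the role of the cross-validated anomalous regions.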