This study examines the application of Transparent Artificial Intelligence (AI) to fraud detection in public welfare programs using publicly available administrative data. Persistent challenges in welfare governance, such as misallocation, fraud, and data inaccuracy, necessitate analytical frameworks that are both effective and explainable. The research aims to design and evaluate an interpretable anomaly detection system capable of identifying irregularities in welfare distribution while maintaining transparency and accountability. Methodologically, the study employs two unsupervised models, Isolation Forest (IF) and Local Outlier Factor (LOF), to detect anomalies in sub-district-level welfare data, incorporating features such as population size, number of beneficiaries, and coverage ratio. An Explainable AI (XAI) framework integrating surrogate Random Forests, Permutation Feature Importance (PFI), and LIME-like local linear surrogates is applied to interpret both global and local model behavior. Findings reveal that receivers per 1,000 population and percentage coverage are the dominant determinants of anomaly scores. Fifteen administrative units were flagged for potential inconsistencies, suggesting over- or under-reporting of beneficiaries. Cross-comparison of the IF and LOF results confirmed consistency in the regions identified as anomalous. The integrated XAI explanations enhance transparency, enabling policymakers and auditors to trace the rationale behind detected anomalies. In conclusion, the proposed Transparent AI framework demonstrates that combining anomaly detection with interpretability tools can strengthen accountability and fairness in welfare administration. It offers a reproducible, ethical, and data-driven approach to social program monitoring, reinforcing public trust and supporting responsible AI governance.
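
To make the described pipeline concrete, the following is a minimal sketch of one way the IF/LOF detection, surrogate-Random-Forest PFI, and LIME-like local surrogate steps could fit together in Python with scikit-learn. The synthetic data, column names, and all hyperparameters are illustrative assumptions, not the study's actual data or configuration.

```python
# Minimal sketch of the described pipeline, assuming sub-district data in a
# pandas DataFrame. All data, column names, and hyperparameters below are
# hypothetical placeholders, not the study's actual configuration.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest, RandomForestRegressor
from sklearn.neighbors import LocalOutlierFactor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# Hypothetical stand-in for sub-district administrative records.
n = 300
population = rng.integers(1_000, 50_000, size=n)
receivers = (population * rng.uniform(0.02, 0.15, size=n)).astype(int)
df = pd.DataFrame({
    "population": population,
    "num_beneficiaries": receivers,
    "receivers_per_1000": 1_000 * receivers / population,
    "pct_coverage": 100 * receivers / population,
})
X = df.values

# --- Step 1: unsupervised anomaly detection with two independent models ---
iso = IsolationForest(n_estimators=200, contamination=0.05, random_state=0)
iso_labels = iso.fit_predict(X)             # -1 = anomaly, 1 = normal
iso_scores = -iso.score_samples(X)          # higher = more anomalous

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
lof_labels = lof.fit_predict(X)             # -1 = anomaly, 1 = normal
lof_scores = -lof.negative_outlier_factor_  # higher = more anomalous

# Cross-comparison: keep only units flagged by BOTH detectors.
flagged = df.index[(iso_labels == -1) & (lof_labels == -1)]
print(f"Units flagged by both IF and LOF: {len(flagged)}")

# --- Step 2: global explanation via a surrogate Random Forest + PFI ---
# The surrogate regresses the IF anomaly score on the input features;
# permutation importance then ranks which features drive that score.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X, iso_scores)
pfi = permutation_importance(surrogate, X, iso_scores,
                             n_repeats=10, random_state=0)
for name, imp in sorted(zip(df.columns, pfi.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.4f}")

# --- Step 3: LIME-like local explanation for one flagged unit ---
# Perturb around the flagged point and fit a local linear surrogate to the
# detector's scores; the coefficients approximate local feature influence.
if len(flagged) > 0:
    i = flagged[0]
    perturbed = X[i] + rng.normal(scale=X.std(axis=0) * 0.1,
                                  size=(500, X.shape[1]))
    local_scores = -iso.score_samples(perturbed)
    lin = Ridge().fit(perturbed, local_scores)
    for name, coef in zip(df.columns, lin.coef_):
        print(f"local weight {name}: {coef:+.4f}")
```

In this sketch, requiring agreement between IF and LOF before flagging a unit mirrors the cross-comparison reported in the abstract, and explaining the anomaly score (rather than raw labels) through surrogates is one common way to make otherwise opaque detectors auditable.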