The growing adoption of Artificial Intelligence (AI) in government has intensified the need for transparent, accountable, and trustworthy decision-making systems. This study conducts a systematic literature review to examine how Explainable AI (XAI) is applied within the public sector, identify the dominant techniques used, and analyze their benefits and challenges. Following the PRISMA guidelines, studies were collected from major academic databases, including Scopus, Web of Science, IEEE Xplore, SpringerLink, the ACM Digital Library, and Google Scholar. The findings reveal that XAI development in government contexts has grown significantly over the past decade, with SHAP, LIME, decision trees, counterfactual explanations, and rule-based models emerging as the most frequently used methods. These techniques support public-sector decision-making by enhancing transparency, strengthening accountability, reducing bias, improving auditability, and fostering public trust. However, persistent challenges remain, including technical complexity, trade-offs between accuracy and interpretability, limited AI literacy among officials, the lack of standardized frameworks, and legal and ethical risks. The review highlights the need for domain-specific XAI guidelines, user-centered explanation tools, and integrated evaluation frameworks. This research contributes a comprehensive synthesis of current XAI applications in government and outlines a future research agenda to support the development of responsible, explainable, and ethically aligned AI for public administration.
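To make the feature-attribution methods named above concrete, the following is a minimal illustrative sketch of how SHAP values might be computed for a hypothetical public-benefits eligibility classifier. The dataset, feature names, and model choice are assumptions for demonstration only and are not drawn from the reviewed studies.

```python
# Illustrative sketch only: a hypothetical public-benefits eligibility
# classifier explained post hoc with SHAP. All features and data are
# synthetic assumptions, not taken from the reviewed literature.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Synthetic applicant records (hypothetical features).
X = pd.DataFrame({
    "household_income": rng.normal(30_000, 8_000, n),
    "household_size": rng.integers(1, 7, n),
    "months_unemployed": rng.integers(0, 24, n),
})
# Synthetic eligibility label loosely tied to the features.
y = ((X["household_income"] < 28_000) & (X["months_unemployed"] > 3)).astype(int)

# Fit an opaque model, then attach a post-hoc tree explainer.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

# Per-applicant attributions: how much each feature pushed the
# prediction toward or away from "eligible" for the first 5 cases.
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values)
```

Per-case attributions of this kind are one way such techniques can support the auditability and transparency goals discussed in the review, since each automated recommendation can be traced back to the input features that influenced it.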