The rapid adoption of machine learning (ML) in the public sector has increased the need for transparent, accountable, and trustworthy algorithmic decision-making, particularly in high-stakes domains such as social welfare, healthcare, security, and public administration. However, existing approaches to explainable machine learning (XML) remain fragmented, focusing primarily on technical explanation techniques without integrating the institutional, ethical, and user-centered requirements of government environments. This research aims to develop a unified theoretical and practical framework that operationalizes explainability across the entire ML lifecycle for critical public-sector applications. The study adopts a qualitative, multi-stage research design that combines theoretical synthesis, framework construction, and empirical validation through expert assessment and case-based evaluation. The results demonstrate that explainability is a multidimensional construct that extends beyond algorithmic transparency to include contextual risk assessment, adaptive explanation delivery, and governance mechanisms such as auditability, human oversight, and documentation standards. The proposed framework integrates four interconnected layers (context analysis, model design and transparency, explanation delivery, and oversight and governance), providing a structured pathway for implementing explainable ML systems that meet public-sector standards of fairness, legitimacy, and accountability. Expert feedback and case evaluations confirm that the framework enhances interpretability, reduces misinterpretation risks, and supports more informed decision-making among stakeholders. This research contributes to the advancement of responsible AI in government by offering a comprehensive model that bridges technical methods with policy and practice, paving the way for more transparent and trustworthy ML adoption in public-sector services.
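To make the four-layer structure concrete, the sketch below shows one hypothetical way the framework could be encoded as a lifecycle checklist for an ML project. It is a minimal illustration only: the class names, check items, and their groupings are assumptions introduced here for clarity and are not prescribed by the study.

```python
# Illustrative sketch only: a hypothetical encoding of the four framework
# layers (context analysis, model design and transparency, explanation
# delivery, oversight and governance) as a simple lifecycle checklist.
# All check items below are invented examples, not part of the framework text.
from dataclasses import dataclass, field


@dataclass
class LayerAssessment:
    layer: str                              # name of the framework layer
    checks: dict = field(default_factory=dict)  # check name -> completed?

    def complete(self) -> bool:
        """A layer is complete when every check has been satisfied."""
        return all(self.checks.values())


def lifecycle_checklist() -> list:
    """Build an (initially unfinished) assessment for each of the four layers."""
    return [
        LayerAssessment("context analysis",
                        {"stakeholders_identified": False,
                         "contextual_risk_assessed": False}),
        LayerAssessment("model design and transparency",
                        {"interpretable_model_considered": False,
                         "data_and_features_documented": False}),
        LayerAssessment("explanation delivery",
                        {"audience_tailored_explanations": False,
                         "misinterpretation_risk_reviewed": False}),
        LayerAssessment("oversight and governance",
                        {"audit_trail_enabled": False,
                         "human_oversight_defined": False}),
    ]


if __name__ == "__main__":
    for layer in lifecycle_checklist():
        print(f"{layer.layer}: complete={layer.complete()}")
```

In practice, such a checklist would be populated and reviewed iteratively across the ML lifecycle, with the governance layer serving as the final gate before deployment; the specific checks would be derived from the contextual risk assessment rather than fixed in advance.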