High-stakes decision-making systems, such as those used in healthcare, finance, and criminal justice, demand not only high predictive accuracy but also transparency to ensure trust, accountability, and ethical compliance. Explainable Artificial Intelligence (XAI) has emerged as a pivotal approach to addressing the black-box nature of complex machine learning models, offering interpretable insights into model predictions. This study presents a comparative analysis of leading XAI techniques, including SHAP, LIME, counterfactual explanations, and rule-based surrogates, across three real-world high-stakes domains. Using standardized evaluation metrics (fidelity, stability, usability, and computational efficiency), we examine the trade-offs between explanation quality and system performance. The results show that SHAP consistently provides the highest-fidelity explanations but incurs the greatest computational cost, whereas LIME offers faster, though sometimes less stable, explanations. Counterfactual methods excel in user interpretability but struggle to generate plausible scenarios for complex datasets. Our findings indicate that no single XAI method is universally optimal; rather, the choice of method should align with domain-specific requirements and the criticality of the decisions involved. This comparative study contributes to the discourse on responsible AI deployment by providing actionable insights for practitioners, policymakers, and researchers seeking to integrate XAI into high-stakes environments.
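To make the cost and stability trade-off mentioned above concrete, the following is a minimal illustrative sketch, not the paper's actual benchmark code: it times a SHAP and a LIME explanation for a single instance and probes LIME's run-to-run stability with a simple top-feature overlap check. The synthetic dataset, the random forest model, and the overlap-based stability probe are all assumptions chosen for demonstration only.

```python
"""Illustrative SHAP vs. LIME comparison sketch (assumed setup, not the study's pipeline)."""
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for a high-stakes tabular dataset (assumption for this sketch).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
instance = X[:1]  # one instance to explain

# --- SHAP: attributions for the tree ensemble, timed to illustrate computational cost ---
t0 = time.perf_counter()
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(instance)  # output shape depends on the shap version
shap_time = time.perf_counter() - t0

# --- LIME: local surrogate explanation; its perturbation sampling is stochastic ---
feature_names = [f"f{i}" for i in range(X.shape[1])]
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")

def lime_top_features():
    """Return the names of the top-5 features LIME selects for this instance."""
    exp = lime_explainer.explain_instance(instance[0], model.predict_proba, num_features=5)
    return [name for name, _ in exp.as_list()]

t0 = time.perf_counter()
run_1 = lime_top_features()
lime_time = time.perf_counter() - t0
run_2 = lime_top_features()  # second run with fresh perturbations

# Crude stability probe (assumption): fraction of top-5 features shared across two LIME runs.
overlap = len(set(run_1) & set(run_2)) / 5.0

print(f"SHAP time: {shap_time:.3f}s | LIME time: {lime_time:.3f}s | "
      f"LIME top-feature overlap across runs: {overlap:.0%}")
```

Running a sketch of this kind typically shows the pattern the abstract summarizes: the tree-based SHAP explainer pays a larger one-off cost, while repeated LIME calls are cheaper but can rank different features across runs because of their stochastic sampling.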