Explainable Artificial Intelligence (XAI) has emerged as a critical enabler for deploying AI-driven medical imaging systems in settings where transparency, trust, and accountability are paramount. However, most current taxonomies of XAI methods categorize techniques by algorithmic family (e.g., saliency maps, attribution methods), a grouping that often fails to reflect the practical requirements of clinical tasks. This paper proposes a novel task-centric taxonomy of XAI in medical imaging that aligns explanation techniques with four key clinical tasks: classification, detection, segmentation, and prognostic assessment. For each task, we analyze how different XAI methods enhance model interpretability, assess their suitability for clinical decision-making, and discuss their limitations in real-world applications. Our taxonomy aims to provide a practical framework for researchers and practitioners to select XAI strategies tailored to the specific demands of medical imaging workflows. Furthermore, we highlight current gaps in task-specific explainability and propose future research directions towards clinically meaningful, task-driven XAI solutions. This work serves as a step towards bridging the divide between technical XAI developments and the functional needs of clinical practice.