The increasing volume of waste driven by urbanization and population growth poses significant challenges to waste management systems, particularly at the sorting stage. Deep learning approaches, especially Convolutional Neural Networks (CNNs), have been widely employed for waste image classification owing to their ability to automatically extract complex visual features. However, a major limitation of these approaches is their lack of interpretability, which can hinder user trust and real-world adoption. This study proposes an explainable deep learning framework for organic and inorganic waste image classification that integrates the MobileNetV2 architecture with Explainable Artificial Intelligence (XAI) methods, namely Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP). MobileNetV2 is used as the feature extractor because of its computational efficiency and suitability for deployment on resource-constrained devices. The dataset combines a public benchmark dataset with field-acquired waste images and is processed using a transfer learning approach. Model performance is evaluated using accuracy, precision, recall, and F1-score. Experimental results show that the proposed model achieves a validation accuracy of 90.25% with balanced performance across both classes. Furthermore, interpretability analysis with Grad-CAM and SHAP reveals that the model focuses on semantically relevant visual features and provides explainable feature contributions. These findings confirm that integrating lightweight CNN architectures with XAI techniques can yield waste classification systems that are accurate, transparent, and accountable.