Recommender systems play a critical role in shaping user decisions across digital platforms; however, the increasing complexity of recommendation algorithms has raised serious concerns regarding transparency, trust, and accountability. This study focuses on enhancing the transparency of recommender systems by integrating Explainable Artificial Intelligence (XAI) techniques within a MovieLens-based recommendation framework. The primary problem addressed is the opacity of conventional recommendation models, which limits user understanding of why certain items are recommended and may reduce trust, perceived fairness, and system acceptance. Accordingly, the main objective of this research is to design and evaluate a hybrid explainable recommender system that balances predictive accuracy with human-understandable explanations. The proposed approach combines Matrix Factorization, feature-importance-aware neural networks, and knowledge graph embeddings to construct a robust recommendation model. To enhance explainability, multiple XAI strategies are integrated, including model-agnostic methods (LIME, SHAP, and CLIME), argumentation-based explanations, and context-aware personalized explanations. A comprehensive evaluation framework is employed, incorporating algorithmic metrics (accuracy, fidelity, robustness, counterfactual consistency, and fairness) alongside human-centered evaluations measuring trust, transparency, cognitive load, and perceived usefulness. Experimental results demonstrate that the knowledge graph–enhanced hybrid model achieves superior recommendation accuracy compared to baseline approaches. Moreover, context-aware explanations consistently outperform other methods in terms of fidelity, robustness, and user-perceived transparency, while argumentation-based explanations are found to be the most persuasive. CLIME offers a strong balance between technical stability and interpretability. The findings indicate that no single explainability technique is universally optimal; instead, hybrid and adaptive explanation strategies are most effective. In conclusion, this study confirms that human-centered, context-adaptive XAI significantly improves transparency and user trust in recommender systems, highlighting explainability as a fundamental component rather than an optional enhancement.
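The abstract above summarizes the architecture only; as an illustrative sketch (not the authors' actual pipeline), the Python fragment below shows two of the ingredients it names in miniature: a matrix-factorization recommender trained by stochastic gradient descent on MovieLens-style (user, item, rating) triples, and a LIME-style local surrogate that attributes a predicted score to item genre features. The toy ratings, genre indicators, stand-in black-box scorer, and all hyperparameters are invented here purely for illustration.

```python
import numpy as np

# Toy MovieLens-style ratings: (user_id, item_id, rating). Values are invented.
ratings = np.array([
    [0, 0, 5.0], [0, 1, 3.0], [1, 0, 4.0],
    [1, 2, 2.0], [2, 1, 4.5], [2, 2, 1.0],
])
n_users, n_items, k = 3, 3, 4

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors

# Matrix factorization via SGD: minimize squared error with L2 regularization.
lr, reg = 0.05, 0.02
for _ in range(200):
    for u, i, r in ratings:
        u, i = int(u), int(i)
        p_u, q_i = P[u].copy(), Q[i].copy()
        err = r - p_u @ q_i
        P[u] += lr * (err * q_i - reg * p_u)
        Q[i] += lr * (err * p_u - reg * q_i)

def predict(u, i):
    """Predicted rating of item i for user u from the learned factors."""
    return float(P[u] @ Q[i])

# Hypothetical binary genre features per item (Action, Comedy, Drama) -- invented.
item_genres = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]], dtype=float)
genre_names = ["Action", "Comedy", "Drama"]

def explain(u, i, n_samples=500):
    """LIME-style explanation: fit a weighted linear surrogate around item i's genres."""
    x0 = item_genres[i]
    # Perturb the item by randomly switching its genre features off.
    Z = rng.integers(0, 2, size=(n_samples, len(x0))) * x0
    # Stand-in black box: as genres are removed, pull the MF score toward the
    # user's mean rating. (Invented here so the sketch stays self-contained.)
    user_mean = ratings[ratings[:, 0] == u][:, 2].mean()
    keep_frac = Z.sum(axis=1) / max(x0.sum(), 1.0)
    y = keep_frac * predict(u, i) + (1.0 - keep_frac) * user_mean
    # Proximity kernel: perturbations closer to the original item weigh more.
    sw = np.sqrt(np.exp(-np.sum((Z - x0) ** 2, axis=1)))
    A = np.hstack([Z, np.ones((n_samples, 1))])  # genre features + intercept
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return {name: float(c) for name, c in zip(genre_names, coef[:-1])}

print("predicted rating:", round(predict(0, 2), 2))
print("genre attributions:", explain(0, 2))
```

The surrogate's coefficients play the role of LIME weights: they indicate how much each (hypothetical) genre feature pushes this particular prediction up or down, which is the kind of per-recommendation explanation the study evaluates for fidelity, robustness, and user-perceived transparency.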