The rapid spread of misinformation across digital platforms has raised significant concerns about its societal impact, particularly in the political, health, and social domains. Deep learning models in Natural Language Processing (NLP) achieve high performance in detecting misinformation, but their lack of interpretability remains a major obstacle to trust, transparency, and accountability. As black-box models, they rarely provide insight into how their predictions are made, limiting their acceptance in sensitive real-world applications. This study investigates the integration of Explainable Artificial Intelligence (XAI) techniques to enhance the interpretability of deep learning models used for misinformation detection. The primary objective is to evaluate how different XAI methods can explain and interpret the decisions of NLP-based misinformation classifiers. A comparative analysis was conducted using deep learning models such as BERT and LSTM on benchmark datasets, including FakeNewsNet and LIAR. XAI methods, namely SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and attention visualization, were applied to analyze model behavior and feature importance. The findings show that while deep learning models achieve high accuracy in misinformation detection, XAI methods substantially improve transparency by highlighting the words and phrases that most influence model decisions. SHAP and LIME proved particularly effective at producing human-understandable explanations, aiding both developers and end users. In conclusion, incorporating XAI into NLP-based misinformation detection frameworks enhances interpretability without sacrificing performance, paving the way for more responsible and trustworthy AI deployment in combating online misinformation.
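As a concrete illustration of how such post-hoc explanations can be produced, the minimal sketch below applies LIME to a transformer-based text classifier. It is not the study's actual pipeline: the model checkpoint, the "real"/"fake" label names, and the example sentence are illustrative assumptions (in practice a BERT model fine-tuned on FakeNewsNet or LIAR would replace the generic checkpoint).

```python
# Minimal sketch: LIME explanation for a transformer-based misinformation classifier.
# Assumptions: "bert-base-uncased" stands in for a fine-tuned fake-news checkpoint,
# and the binary labels are named "real" and "fake".
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; swap in a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_proba(texts):
    """Return class probabilities for a list of texts, in the shape LIME expects."""
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["real", "fake"])
explanation = explainer.explain_instance(
    "Scientists confirm chocolate cures all known diseases.",  # illustrative input
    predict_proba,
    num_features=8,    # number of top words to include in the explanation
    num_samples=500,   # perturbed samples used to fit the local surrogate model
)
# (word, weight) pairs indicating each token's influence on the predicted class
print(explanation.as_list())
```

The same classifier function can be reused with SHAP's model-agnostic explainers, which is what makes these perturbation-based methods convenient to layer on top of an existing detection model without retraining it.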