International Journal of Research and Applied Technology (INJURATECH)
Vol. 4 No. 2 (2024)

Explainable AI (XAI) for Fake News Detection: A Review of Interpretability in Deep Learning Models for Misinformation Classification

Munawaroh, Silvi



Article Info

Publish Date
08 Dec 2024

Abstract

This study provides a comprehensive review of Explainable AI (XAI) applications in fake news detection, addressing the critical "black-box" nature of deep learning models used for misinformation classification. We systematically analyze various interpretability techniques, categorized into ante-hoc and post-hoc methods, applied to neural architectures such as CNNs, RNNs, and Transformers. The study evaluates how these techniques extract linguistic, social context, and visual features to justify classification outcomes. The findings reveal that while attention mechanisms and gradient-based explanations improve transparency, there remains a significant trade-off between model complexity and explanatory clarity. The discussion highlights the challenges of "explanation consistency" and the susceptibility of interpretability tools to adversarial attacks. We conclude that integrating XAI is essential for fostering user trust and regulatory compliance. Future research should prioritize human-centric evaluations to ensure that AI-generated explanations are cognitively accessible to non-expert end-users.
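To make the "gradient-based explanations" mentioned above concrete, the following is a minimal sketch of gradient-times-input attribution on a toy bag-of-words classifier. Everything here (the vocabulary, the weights, the function names) is invented for illustration; real fake-news detectors use deep models and attribution libraries, not a hand-written linear scorer.

```python
# Hedged sketch: input-gradient saliency for a hypothetical bag-of-words
# "fake news" classifier. Vocabulary and weights are made up for
# illustration only.
import numpy as np

vocab = ["shocking", "sources", "miracle", "confirmed", "secret"]
# Hypothetical learned weights: positive values push toward "fake".
w = np.array([2.0, -1.5, 2.5, -2.0, 1.8])
b = -0.1

def predict_proba(x):
    """Sigmoid over a linear score: P(fake | token counts x)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def saliency(x):
    """Gradient-times-input attribution per token.

    For p = sigmoid(w.x + b), dp/dx_i = p(1-p) * w_i, so the
    attribution for token i is p(1-p) * w_i * x_i.
    """
    p = predict_proba(x)
    grad = p * (1.0 - p) * w   # gradient of the output w.r.t. x
    return grad * x            # gradient x input

x = np.array([1.0, 0.0, 2.0, 0.0, 1.0])  # token counts in one document
attr = saliency(x)
for tok, a in zip(vocab, attr):
    print(f"{tok:10s} {a:+.4f}")
```

Tokens absent from the document receive zero attribution, and present tokens are scored by how strongly they push the prediction toward "fake" — the same intuition behind saliency maps for CNN, RNN, or Transformer classifiers, where the gradient is taken with respect to token embeddings instead.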

Copyright © 2024






Journal Info

Abbrev

injuratech

Publisher

Subject

Civil Engineering, Building, Construction & Architecture; Computer Science & IT; Control & Systems Engineering; Electrical & Electronics Engineering; Engineering

Description

INJURATECH covers all topics under the fields of Computer Science, Information Systems, and Applied Technology. Scope: Computer Based Education; Information System; Database Systems; E-commerce and E-governance; Data Mining; Decision Support System; Management Information System; Social Media Analytics; Data ...