The rapid spread of hoaxes in the digital era poses a significant threat to social, political, and economic stability worldwide. Artificial Intelligence (AI) has emerged as a promising solution for detecting and mitigating hoaxes through advanced techniques such as natural language processing, machine learning, and deep learning. This study employs a literature review method to examine the role of AI in identifying, verifying, and limiting the dissemination of false information across digital platforms. Data were synthesized from relevant academic articles, reports, and previous studies to provide a comprehensive understanding of current approaches and challenges. The findings reveal that AI technologies enhance the efficiency and accuracy of automatic hoax detection, enabling real-time analysis of large-scale digital content. However, challenges persist, particularly data scarcity, algorithmic bias, and the difficulty of interpreting cross-linguistic and cross-cultural contexts. Addressing these limitations requires collaborative effort among governments, technology developers, academic researchers, and civil society. Strengthened regulatory frameworks, cross-sector partnerships, and improved digital literacy are essential to ensure the sustainable and ethical application of AI in combating hoaxes. This study contributes to the growing body of knowledge by highlighting the dual role of AI as both a technical and a socio-ethical tool in countering misinformation. It offers practical insights for policymakers, educators, and technology practitioners on integrating AI-based detection systems with broader strategies for digital literacy and regulatory governance. By presenting an integrated framework, the study underscores how AI can be leveraged responsibly to safeguard information integrity in the digital age.
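As an illustrative aside (not part of the study itself), the kind of automatic text classification underlying many AI-based hoax detectors can be sketched with a minimal multinomial Naive Bayes model. The headlines, labels, and word lists below are invented purely for demonstration; real systems use far larger corpora and modern deep-learning models rather than this toy approach.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy corpus of labeled headlines (invented for illustration).
TRAIN = [
    ("miracle cure doctors hate revealed", "hoax"),
    ("shocking secret the government hides", "hoax"),
    ("you won a free prize click now", "hoax"),
    ("central bank announces new interest rate", "real"),
    ("university publishes climate study results", "real"),
    ("city council approves annual budget", "real"),
]

def train(samples):
    """Count per-class word frequencies for a multinomial Naive Bayes model."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Return the class with the highest log-probability (Laplace smoothing)."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # Log prior for the class plus log likelihood of each word.
        score = math.log(class_counts[label] / total_docs)
        n_words = sum(word_counts[label].values())
        for word in text.split():
            score += math.log(
                (word_counts[label][word] + 1) / (n_words + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, cc, vocab = train(TRAIN)
print(classify("shocking miracle cure revealed", wc, cc, vocab))  # → hoax
```

The sketch only illustrates why scale and data coverage matter, as the abstract notes: with so few training examples, any wording outside the toy vocabulary degrades the prediction, which mirrors the data-scarcity and cross-linguistic challenges discussed above.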
Copyright © 2025