This study examined the impact of AI-generated misinformation (deepfakes) on vaccine advocacy, investigated the sources of deepfakes, and identified the factors contributing to their spread. The elaboration likelihood model (ELM) was employed to explain how people process and respond to AI-generated misinformation. A library research method was used, involving the collection and analysis of existing data from various secondary sources. The study revealed that social media platforms, anti-vaccine groups, malicious actors, and influencers are the primary sources of deepfakes, and that emotional appeal, personalization, weak media literacy, and confirmation bias contribute to the spread of misinformation. It was concluded that the proliferation of deepfakes has significantly eroded public trust in vaccines and health authorities, highlighting the need for a multifaceted approach to combating misinformation. It is therefore recommended that social media platforms implement robust verification mechanisms, that public health authorities develop fact-based messaging addressing emotional concerns, and that the public be educated in media literacy skills.
Copyright © 2026