Dengue outbreaks have become a common occurrence in South Asian countries, including Bangladesh, causing widespread concern among people from all walks of life. Misinformation about dengue proliferates, mainly through social media platforms. This study investigated the performance of generative AI in detecting dengue-related misinformation. Two widely used generative AI systems, ChatGPT and Google Bard, were presented with widely circulated claims about dengue and asked to determine whether each was accurate or false. The false claims were identified through content analysis of stories about the dengue outbreak, particularly those circulating on social media platforms. The responses of the generative AI systems were then cross-checked against fact-checkers and public health sources such as the WHO and CDC to determine whether they were correct. In total, the two systems were evaluated on ten frequently disseminated misconceptions about dengue, particularly those spread on social networking sites. Judged against public health statements (e.g., WHO, CDC) and fact-checker assessments, both ChatGPT and Google Bard demonstrated promising results in detecting misinformation and presenting factual information. Although generative AI systems have inherent limitations and may not always excel at handling complex real-world circumstances, they have shown promise in providing consistent answers in the public health sector, where dengue-related misinformation has become commonplace in developing nations such as Bangladesh. Further studies in this field are needed to realize the full potential of AI chatbots in this sector.
Copyright © 2024