Purpose - This research analyzes the impact of the growth of AI-generated content on the accuracy and reliability of online information. Specifically, it examines the challenges of detecting AI-generated content, considering the limitations of detection tools such as ZeroGPT and OpenAI's Text Classifier, and explores how these challenges may influence public trust in online information.

Methodology - The study employs a mixed-methods approach, combining quantitative data collection through surveys with qualitative case-study analysis of AI-generated content controversies, such as AI-written articles published by CNET and Microsoft. The data were analyzed using Structural Equation Modeling (SEM) to evaluate the relationship between AI usage and user trust.

Findings - The results indicate that although the relationship between AI usage and public trust is positive, the effect is not statistically significant. Issues such as model collapse and AI inbreeding make it harder to maintain content accuracy, which in turn reduces the trustworthiness of AI-generated information.

Novelty - This research contributes to the growing body of knowledge on AI-generated content by focusing on its impact on public trust, a relatively underexplored area. The study also introduces "model collapse" and "AI inbreeding" as critical factors that may undermine the reliability of AI-generated information.

Research Implications - The findings have practical implications for media industries and AI developers. Enhancing AI algorithms to improve content accuracy and reliability, combined with stronger human oversight, could help mitigate the risks associated with AI-generated content and restore public trust in online information. The study also calls for more advanced detection tools and ethical guidelines to govern the use of AI in information dissemination.
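The SEM analysis described in the Methodology can be illustrated with a minimal sketch. The example below is hypothetical: the column names usage1-usage3 and trust1-trust3 and the file survey_responses.csv are assumed survey items and data, not the study's actual variables, and it uses the open-source semopy package rather than whatever software the authors employed.

```python
# Minimal SEM sketch (hypothetical): a latent AI-usage factor predicting a latent trust factor.
# Column names usage1..usage3 and trust1..trust3 are assumed survey items,
# not variables from the original study.
import pandas as pd
from semopy import Model

# Load survey responses (assumed CSV layout: one row per respondent).
data = pd.read_csv("survey_responses.csv")

# Measurement model: each latent factor is indicated by three survey items.
# Structural model: trust is regressed on ai_usage.
description = """
ai_usage =~ usage1 + usage2 + usage3
trust    =~ trust1 + trust2 + trust3
trust ~ ai_usage
"""

model = Model(description)
model.fit(data)

# Path coefficients, standard errors, and p-values; a positive estimate with a
# p-value above 0.05 on the trust ~ ai_usage path would correspond to the
# "positive but not statistically significant" finding reported above.
print(model.inspect())
```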
Copyright © 2024