Purpose - This research investigates "model collapse" in Generative Adversarial Networks (GANs) that occurs when AI models are trained on AI-generated content. The study examines how collapse degrades the quality of AI outputs, explores the concepts of "Model Autophagy Disorder" (MAD) and "Habsburg AI," and discusses the broader ethical and social impacts of AI self-consumption.
Methodology - The study uses a mixed-methods approach, combining simulation experiments with qualitative interviews. GAN models were trained on AI-generated data to induce model collapse, and several mitigation techniques were evaluated. Expert interviews provided insight into the ethical considerations and future directions for generative AI development.
Findings - Model collapse significantly degrades both the performance and the diversity of AI outputs when models are trained on synthetic data. Although some mitigation techniques show promise, none fully prevents the collapse. The MAD and Habsburg AI framings offer a deeper understanding of the risks of AI self-consumption and its broader implications for AI-driven systems.
Novelty - The introduction of the terms "Model Autophagy Disorder" and "Habsburg AI" adds a distinctive perspective to the discourse on AI sustainability. The study is among the first to examine the technical and ethical challenges posed by AI self-consumption and its long-term effects on AI-generated content.
Research Implications - The findings underscore the need for stricter guidelines on the use of AI-generated content in training data, for hybrid training methods, and for ongoing ethical scrutiny to ensure the quality, reliability, and sustainability of AI-driven systems.
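The self-consuming training loop the methodology describes can be illustrated with a toy model. The sketch below is not the study's GAN setup; it substitutes a simple Gaussian "generator" that is repeatedly refit to its own samples (an assumed stand-in for illustration). Because each generation estimates its parameters from a finite sample of the previous generation's output, the fitted variance tends to shrink over time, mirroring the loss of output diversity that characterizes model collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 50       # small sample per generation: estimation error compounds
N_GENERATIONS = 300  # number of self-consuming training rounds

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

stds = []
for generation in range(N_GENERATIONS):
    # "Train" the toy generator: fit mean and std to the current data.
    mu, sigma = data.mean(), data.std()
    stds.append(sigma)
    # Self-consumption: the next training set is sampled from the model
    # itself, so finite-sample bias in sigma accumulates across rounds.
    data = rng.normal(loc=mu, scale=sigma, size=N_SAMPLES)

print(f"std at generation 0:   {stds[0]:.3f}")
print(f"std at generation {N_GENERATIONS - 1}: {stds[-1]:.3f}")
```

Running the loop shows the fitted standard deviation drifting well below its starting value, a minimal analogue of the diversity collapse the study reports; real GAN pipelines exhibit the same qualitative effect through far more complex dynamics.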
Copyright © 2024