The rapid advancement of artificial intelligence (AI) has introduced a new paradigm in informatics known as generative AI. A key driving force behind this innovation is the application of deep learning algorithms, which can emulate human cognitive patterns to automatically generate text, images, audio, and video. This study analyzes how deep learning algorithms, particularly Generative Adversarial Networks (GANs), Transformer-based models (such as GPT), and diffusion models, are utilized in developing generative AI systems for automated content creation. The research employs a literature review of recent studies, a comparative analysis of generative models, and a performance evaluation based on quality, creativity, and computational efficiency. The findings reveal that Transformer-based models exhibit greater adaptability in understanding semantic context and produce more realistic content than traditional GAN models. However, challenges such as overfitting, data bias, and high computational resource demands remain major obstacles to large-scale implementation. The study concludes that optimizing deep learning algorithms, supported by ethical considerations and careful data management, will be crucial to developing generative AI that is both effective and responsible within the modern informatics ecosystem.
Copyright © 2025