This paper develops an Artificial Intelligence (AI) assisted animation storyboard design framework that combines Stable Diffusion 1.5 (SD-1.5) with a Visual Geometry Group 1-Convolutional Neural Network (VGG-1CNN) and Generative Pre-trained Transformer 3.5 (GPT-3.5) to automatically produce game character images and narrative-focused storyboards. The proposed system accepts combined text and sketch prompts and generates storyboard frames that preserve visual coherence and stylistic continuity. Image generation is guided by three diffusion-control components: a Contrastive Language-Image Pretraining (CLIP) text encoder, the VGG-1CNN, and a Variational Autoencoder (VAE). The pipeline first encodes the textual description into latent-space codes, which then guide image synthesis. The input sketch is processed with Canny edge detection to produce edge maps that improve image refinement. Applying the VGGNet architecture to vector representations of the generated images further improves visual precision and prompt compliance. Image quality is enhanced by an iterative, scheduler-based denoising process that refines the latent representations over multiple successive steps. GPT-3.5 provides a written narrative for each story frame while preserving narrational continuity. Finally, a decoder-based upscaling stage produces high-resolution, visually appealing storyboard frames that properly integrate visual elements with textual content. The resulting model delivers an efficient, automated pre-production animation pipeline that reduces manual effort while preserving artistic and narrative quality.
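The pipeline stages described above (prompt encoding, edge-map conditioning, scheduler-based iterative denoising, and decoder-based upscaling) can be illustrated with a minimal, self-contained toy sketch. This is an illustration only: the real system uses SD-1.5, CLIP, a VAE, and GPT-3.5, whereas every function here (`encode_prompt`, `edge_map`, `denoise`, `decode_upscale`) is a hypothetical NumPy stand-in for the corresponding neural component.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_prompt(prompt: str, dim: int = 16) -> np.ndarray:
    """Stand-in for CLIP text encoding: hash characters into a unit vector."""
    v = np.zeros(dim)
    for i, ch in enumerate(prompt):
        v[i % dim] += ord(ch)
    return v / (np.linalg.norm(v) + 1e-8)

def edge_map(sketch: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Stand-in for a Canny edge map: gradient-magnitude thresholding."""
    gx = np.abs(np.diff(sketch, axis=1, prepend=sketch[:, :1]))
    gy = np.abs(np.diff(sketch, axis=0, prepend=sketch[:1, :]))
    return ((gx + gy) > threshold).astype(float)

def denoise(latent: np.ndarray, cond: np.ndarray, steps: int = 10) -> np.ndarray:
    """Toy scheduler loop: each step removes part of the estimated noise,
    nudging the latent toward the conditioning signal."""
    for t in range(steps, 0, -1):
        alpha = t / steps                  # decaying step size, like a schedule
        noise_est = latent - cond          # toy "noise prediction"
        latent = latent - (alpha / steps) * noise_est
    return latent

def decode_upscale(latent: np.ndarray, factor: int = 2) -> np.ndarray:
    """Stand-in for VAE decoding + upscaling: nearest-neighbour repeat."""
    return np.repeat(np.repeat(latent, factor, axis=0), factor, axis=1)

# --- usage: one storyboard frame through the toy pipeline ---
prompt_vec = encode_prompt("a knight character, cel-shaded")
sketch = rng.random((8, 8))
edges = edge_map(sketch)
cond = edges * prompt_vec[:8]              # crude fusion of text and sketch guidance
latent = rng.standard_normal((8, 8))
refined = denoise(latent, cond)
frame = decode_upscale(refined)
print(frame.shape)  # → (16, 16)
```

Each denoising step is a small convex move toward the conditioning signal, so the latent's distance to `cond` strictly decreases over iterations, mirroring how a diffusion scheduler progressively removes noise under guidance.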