This study develops a Generative Adversarial Network (GAN)-based model that restores partially degraded facial images by reconstructing missing regions while preserving the structural integrity of the face. The model adopts an encoder-decoder architecture augmented with skip connections and residual blocks to improve restoration accuracy. Training uses 1,000 paired images, comprising 500 original and 500 occluded facial images, with 200 images allocated for testing. Over 50 epochs, generator loss fell steadily from 0.80 to 0.67 while discriminator loss stabilized near 0.70. Qualitative evaluation shows that the model reconstructs facial features such as the eyes, nose, and mouth with high visual fidelity, though minor artifacts persist in regions with complex textures. These findings demonstrate the effectiveness of GAN-based approaches to facial image restoration and suggest that alternative network architectures and more diverse training data could yield further gains. The proposed model shows promise for applications in digital forensics and historical image recovery.
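The two architectural ingredients named above can be illustrated in a minimal sketch. The sizes, weight shapes, and the stand-in `conv_like` map below are hypothetical (the abstract does not give layer dimensions); the sketch only shows the tensor mechanics of a residual block (identity shortcut, y = x + F(x)) and of an encoder-to-decoder skip connection (channel-wise concatenation), assuming the common U-Net-style formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_like(x, w):
    # Stand-in for a 1x1 convolution: a channel-mixing linear map that
    # preserves spatial dimensions, enough to illustrate the shapes involved.
    return np.einsum('chw,oc->ohw', x, w)

def residual_block(x, w1, w2):
    # y = x + F(x): the identity shortcut lets gradients bypass F,
    # which is why residual blocks ease training of deep generators.
    h = np.maximum(conv_like(x, w1), 0.0)   # ReLU nonlinearity
    return x + conv_like(h, w2)

# Hypothetical sizes for illustration: 64 channels, 32x32 feature map.
c, hgt, wid = 64, 32, 32
x  = rng.standard_normal((c, hgt, wid))
w1 = rng.standard_normal((c, c)) * 0.01
w2 = rng.standard_normal((c, c)) * 0.01
y  = residual_block(x, w1, w2)
assert y.shape == x.shape                   # residual path preserves shape

# A skip connection in an encoder-decoder concatenates an encoder feature
# map with the matching decoder feature map along the channel axis, so the
# decoder sees both coarse context and fine encoder detail.
decoder_feat = rng.standard_normal((c, hgt, wid))
skip = np.concatenate([x, decoder_feat], axis=0)
assert skip.shape == (2 * c, hgt, wid)
```

The shape assertions make the two mechanisms explicit: a residual block leaves the tensor shape unchanged, while a skip connection doubles the channel count at the decoder stage that consumes it.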
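The reported discriminator plateau near 0.70 is worth a sanity check. Assuming the standard binary cross-entropy adversarial loss (the abstract does not state the loss formulation), a discriminator at the theoretical GAN equilibrium outputs 0.5 for both real and generated samples, giving a per-sample loss of ln 2 ≈ 0.693, which matches the observed stabilization value:

```python
import math

def bce(prediction, target):
    # Binary cross-entropy for a single prediction in (0, 1).
    return -(target * math.log(prediction) + (1 - target) * math.log(1 - prediction))

# At equilibrium the discriminator cannot distinguish real from generated
# samples and outputs 0.5 for both classes; its loss, averaged over one
# real (label 1) and one fake (label 0) example, is ln 2.
d_loss_real = bce(0.5, 1.0)
d_loss_fake = bce(0.5, 0.0)
d_loss = 0.5 * (d_loss_real + d_loss_fake)
print(round(d_loss, 3))  # 0.693
```

Under this assumption, a discriminator loss settling at roughly 0.70 indicates the adversarial game reached a rough balance rather than one network overpowering the other.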