Monet2Photo: Reverse Style Transfer using CycleGAN with Impressionism-to-Reality Domain
Wijaya, George Kerry; Chang, Shining Sunny; Saputra, Jonathan; Chloe, Annabelle; Johan, Monika Evelin
Journal of Applied Data Sciences Vol 7, No 2: May 2026
Publisher : Bright Publisher

DOI: 10.47738/jads.v7i2.1222

Abstract

The application of artificial intelligence to artistic image transformation has primarily focused on converting real-world photographs into stylized artworks. In contrast, the inverse task of reconstructing photorealistic images from paintings remains relatively underexplored and presents substantial technical challenges. This study investigates the feasibility and limitations of reverse style transfer by translating impressionist paintings into realistic photographic images, using the works of Claude Monet as a representative case. The main contribution of this research lies in providing a critical examination of reverse image translation under extreme domain gaps, rather than in proposing aesthetic enhancements. An unpaired image-to-image translation framework based on CycleGAN is employed to learn mappings between the painting and photographic domains without relying on paired data. The methodology is conceptually grounded in adversarial learning combined with cycle-consistency constraints, which encourage structural preservation while the model attempts to reconstruct plausible visual features. The experimental setup uses a dataset of 300 Monet paintings and 7,028 real photographs, with targeted data augmentation applied to the painting domain to address the data imbalance. Prior to model training, exploratory data analysis is conducted to characterize domain discrepancies through visual and statistical comparisons, including color distribution analysis, grayscale intensity patterns, texture descriptors, and dimensionality reduction. Model performance is evaluated through controlled experiments using distribution-based distance measures and qualitative visual inspection. The results indicate that while the model preserves coarse spatial layouts and generates diverse outputs without memorization, it struggles to recover the high-fidelity textures, illumination, and contrast required for photorealistic reconstruction. These findings highlight the inherent limitations of classical CycleGAN architectures for reverse style transfer and suggest the need for more expressive models and stronger constraints in future research on art-to-reality image translation.
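The cycle-consistency constraint mentioned in the abstract can be illustrated with a minimal sketch. The generators `G` (painting to photo) and `F` (photo to painting) below are toy pixel transforms standing in for the paper's trained networks, and the L1 cycle loss follows the standard CycleGAN formulation; none of the specific functions here come from the paper itself.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators (assumption: the paper's
# actual generators are convolutional networks, not pixel scalings).
def G(x):  # painting -> photo (toy: brighten)
    return np.clip(x * 1.1, 0.0, 1.0)

def F(y):  # photo -> painting (toy: darken)
    return np.clip(y / 1.1, 0.0, 1.0)

def cycle_consistency_loss(x, y):
    """L1 cycle loss: mean |F(G(x)) - x| + mean |G(F(y)) - y|.

    Penalizes round trips that fail to reconstruct the input, which is
    what encourages structural preservation in unpaired translation.
    """
    forward = np.mean(np.abs(F(G(x)) - x))   # painting -> photo -> painting
    backward = np.mean(np.abs(G(F(y)) - y))  # photo -> painting -> photo
    return forward + backward

painting = np.random.default_rng(0).random((64, 64, 3))
photo = np.random.default_rng(1).random((64, 64, 3))
loss = cycle_consistency_loss(painting, photo)
```

In training, this term is added to the adversarial losses of both discriminators; the sketch only shows the reconstruction penalty itself.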
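The abstract's distribution-based evaluation and grayscale-intensity comparison can likewise be sketched. The code below computes a 1-D Wasserstein distance between normalized intensity histograms of two images; the beta-distributed toy "paintings" and "photos", the luminance proxy, and the bin count are all illustrative assumptions, not the paper's actual measure or data.

```python
import numpy as np

def intensity_histogram(img, bins=32):
    """Normalized grayscale intensity histogram of an RGB image in [0, 1]."""
    gray = img.mean(axis=-1)  # simple luminance proxy (assumption)
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def wasserstein_1d(p, q):
    """Wasserstein-1 distance between two histograms over the same bins:
    the L1 distance between their cumulative distributions, scaled by
    the bin width."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum() / len(p)

rng = np.random.default_rng(42)
monet_like = rng.beta(2, 5, size=(64, 64, 3))  # darker, muted intensities
photo_like = rng.beta(5, 2, size=(64, 64, 3))  # brighter intensities

cross = wasserstein_1d(intensity_histogram(monet_like),
                       intensity_histogram(photo_like))
self_dist = wasserstein_1d(intensity_histogram(photo_like),
                           intensity_histogram(photo_like))
```

A large cross-domain distance relative to the within-domain baseline is the kind of statistical gap the exploratory analysis characterizes before training.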