Purpose: Traditionally, 2D anime production relies on the expertise of experienced animators and is labor-intensive and time-consuming. Generative adversarial networks (GANs) have been developed over the years to create high-quality anime. However, existing GANs still have caveats, such as artifacts, high-frequency noise, color and semantic structure mismatches, blurring, and texture issues. Additionally, research on AI-generated anime images in a particular style is still lacking. Thus, this study aimed to develop a double-tail generative adversarial network (DTGAN) with adaptive style transfer to generate high-quality anime background images aligned with Makoto Shinkai's anime style.

Methods: A dataset of real-world and anime images was collected and preprocessed. Training was performed, and inference was carried out to generate background images in Makoto Shinkai's anime style using DTGAN with adaptive style transfer. The generated images were evaluated by visual comparison and by quantitative analysis using the Fréchet Inception Distance (FID) and the peak signal-to-noise ratio (PSNR).

Result: Compared with other methods, the images generated by DTGAN with adaptive style transfer achieved the lowest FID and the highest PSNR, at 38.7 and 19.4 dB, respectively. Visual comparison against other methods and real anime images by Makoto Shinkai showed that the DTGAN images best matched Shinkai's style in terms of color, background preservation, photorealistic style, and light contrast.

Novelty: These findings suggest that DTGAN with adaptive style transfer using adaptive instance normalization (AdaIN) and linearly adaptive denormalization (LADE) outperforms other methods, highlighting its practical value for 2D anime production.
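
For reference, the AdaIN operation named above aligns the channel-wise mean and standard deviation of content features $x$ with those of style features $y$; the sketch below follows the standard formulation of Huang and Belongie (2017), which this work is assumed to adopt (the paper's exact variant is not specified in the abstract):

$$\mathrm{AdaIN}(x, y) = \sigma(y)\left(\frac{x - \mu(x)}{\sigma(x)}\right) + \mu(y)$$

The PSNR figure reported above follows the standard definition over the mean squared error (MSE) between a generated image and its reference, where $\mathit{MAX}_I$ is the maximum pixel value (255 for 8-bit images):

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathit{MAX}_I^{2}}{\mathrm{MSE}}\right)$$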