Deep neural networks perform well on clean image classification tasks but often fail under common corruptions and distribution shifts. This paper introduces DistortionMix, a lightweight hybrid distortion-based augmentation technique designed to improve model robustness. It randomly applies contrast variation, Gaussian noise, or impulse noise to training images, enhancing data diversity and encouraging resilient feature learning. We evaluate DistortionMix on CIFAR-10 (clean) and CIFAR-10-C (corrupted), which includes 19 corruption types at five severity levels. A variety of architectures, e.g., ResNet, DenseNet, EfficientNet, MobileNet, VGG, AlexNet, GoogLeNet, and ViT, are fine-tuned with and without DistortionMix. Experimental results show that DistortionMix improves accuracy on corrupted data by up to 13.8% while maintaining or slightly improving clean accuracy. Among all models, ViT-Base (timm) achieves the highest robustness, reaching 89.4% on severe corruptions and 97.43% on clean data. These findings highlight DistortionMix as a simple yet effective strategy for enhancing out-of-distribution generalization. Future work includes extending distortion types, developing adaptive augmentation policies, and evaluating performance on real-world corrupted datasets. Source code: github.com/HusniFadhilah/DistortionMix.
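The augmentation described above — randomly applying one of contrast variation, Gaussian noise, or impulse noise per training image — can be sketched as follows. This is a hypothetical minimal implementation in NumPy; the function name `distortion_mix` and all distortion parameters (contrast range, noise standard deviation, impulse probability) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def distortion_mix(image, rng=None):
    """Apply one randomly chosen distortion to a float image in [0, 1].

    Hypothetical sketch of the hybrid scheme: contrast variation,
    Gaussian noise, or impulse noise, picked uniformly at random.
    Parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    choice = rng.integers(3)
    if choice == 0:
        # Contrast variation: scale pixel deviations from the image mean.
        factor = rng.uniform(0.5, 1.5)
        image = (image - image.mean()) * factor + image.mean()
    elif choice == 1:
        # Gaussian noise: additive zero-mean noise.
        image = image + rng.normal(0.0, 0.08, size=image.shape)
    else:
        # Impulse (salt-and-pepper) noise: force random pixels to 0 or 1.
        mask = rng.random(image.shape)
        image = np.where(mask < 0.025, 0.0, image)
        image = np.where(mask > 0.975, 1.0, image)
    # Clip back to the valid intensity range.
    return np.clip(image, 0.0, 1.0)
```

In a training pipeline, such a function would be applied per sample before normalization, so each epoch sees a different random mixture of distortions.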
Copyright © 2026