This work investigates the impact of noise on model performance by training a neural network on a digit dataset at varying Signal-to-Noise Ratios (SNR) to assess its resilience and generalization ability. The experimental setup involved training the model on datasets with noise levels ranging from clean images to highly distorted ones (SNR 5%–25%) and analyzing accuracy, mini-batch loss, and training time. Results indicate that while the model achieves high accuracy (96.88%) at a mild noise level (SNR 5%), performance declines significantly as noise increases, with accuracy dropping to 78.91% at SNR 25%. The analysis of mini-batch loss and training time reveals that noise slows convergence and increases computational cost. The confusion matrix further confirms that while the model effectively distinguishes between classes, noise-induced misclassifications become more frequent as the noise level rises. These findings emphasize the importance of noise-reduction techniques and data preprocessing for improving model robustness in real-world applications.
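The abstract does not specify the noise model or framework used to generate the corrupted training sets. As a minimal sketch, assuming zero-mean Gaussian noise whose strength is set by a noise-percentage parameter (the helper `add_noise` and the scaling rule are hypothetical, not taken from the paper), corrupted digit images at the five studied levels might be produced as follows:

```python
import numpy as np

def add_noise(images, noise_percent, rng=None):
    """Corrupt images with zero-mean Gaussian noise whose standard deviation
    is a given percentage of the clean signal's standard deviation
    (hypothetical noise model, not the paper's exact procedure)."""
    rng = np.random.default_rng() if rng is None else rng
    images = images.astype(np.float32)
    sigma = (noise_percent / 100.0) * images.std()
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    # Keep pixel values in the usual 8-bit range
    return np.clip(noisy, 0.0, 255.0)

# Example: corrupt a placeholder batch of 8x8 digit images at each studied level
clean = np.random.default_rng(0).uniform(0, 255, size=(64, 8, 8))
for level in (5, 10, 15, 20, 25):
    noisy = add_noise(clean, level)
    print(f"noise {level:>2}% -> mean absolute perturbation "
          f"{np.abs(noisy - clean).mean():.2f}")
```

Under this assumption, each noise level yields a progressively more distorted copy of the training set, on which the same network architecture would then be trained and evaluated.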