This author has published in the following journal:
Zeta - Math Journal
Fajar, Moh.
Unknown Affiliation

Published: 1 document
A Robustness Study of Multi-Layer Perceptrons and Logistic Regression to Data Perturbation: MNIST Dataset
Thahiruddin, Muhammad; Khotijah, Siti; Fajar, Moh.; Farras, Adib El
Zeta - Math Journal Vol 10 No 1 (2025): May
Publisher: Universitas Islam Madura

DOI: 10.31102/zeta.2025.10.1.39-50

Abstract

This study systematically evaluates the robustness of Multi-Layer Perceptron (MLP) and Logistic Regression (LR) models against data perturbations using the MNIST handwritten digit dataset. While MLPs and LR are foundational in machine learning, their comparative resilience to diverse perturbations (noise, geometric distortions, and adversarial attacks) remains underexplored, despite the implications for real-world applications with imperfect data. We test four perturbation categories: Gaussian noise (σ = 0.1 to 1.0), salt-and-pepper noise (p = 0.1 to 0.5), rotational distortions (5° to 30°), and adversarial attacks (FGSM with ϵ = 0.005 to 0.30). Both models were trained on 60,000 MNIST samples and tested on 10,000 perturbed images. Results demonstrate that MLPs exhibit superior robustness under moderate noise and rotations, achieving a baseline accuracy of 97.07% (vs. LR's 92.63%). For Gaussian noise (σ = 0.5), the MLP retained 35.35% accuracy compared to LR's 23.91%. However, adversarial attacks (FGSM, ϵ = 0.30) reduced MLP accuracy to 0.20%, revealing critical vulnerabilities. Statistical analysis (paired t-test, p < 0.05) confirmed significant performance differences across perturbation levels. A linear regression (R² = 0.98) further quantified the MLP's predictable accuracy decline with increasing Gaussian noise intensity. These findings underscore the MLP's suitability for noise-prone environments but highlight an urgent need for adversarial defense mechanisms. Practitioners are advised to prioritize MLPs for tasks with moderate distortions, while future work should integrate robustness enhancements such as adversarial training.
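The perturbation categories named in the abstract can be sketched in code. The following is a minimal NumPy/SciPy illustration, not the authors' implementation: the 28×28 image shape, pixel range [0, 1], parameter defaults, and the use of a plain softmax (multinomial LR) classifier for the FGSM gradient are all assumptions for the sake of a self-contained example.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def gaussian_noise(img, sigma=0.5):
    """Add zero-mean Gaussian noise with standard deviation sigma, clipped to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_and_pepper(img, p=0.1):
    """Corrupt a fraction p of pixels: half set to 0 (pepper), half to 1 (salt)."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = 0.0                      # pepper
    out[(mask >= p / 2) & (mask < p)] = 1.0      # salt
    return out

def rotational_distortion(img, angle=15.0):
    """Rotate by `angle` degrees while keeping the original frame size."""
    return np.clip(rotate(img, angle, reshape=False, order=1), 0.0, 1.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y_onehot, W, b, eps=0.30):
    """FGSM for a softmax classifier: x_adv = clip(x + eps * sign(grad_x CE)).

    For softmax regression the input gradient of cross-entropy has the
    closed form W.T @ (softmax(W @ x + b) - y).
    """
    p = softmax(W @ x + b)
    grad = W.T @ (p - y_onehot)
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Illustrative usage on a random "image" (a real pipeline would use MNIST).
img = rng.random((28, 28))
for perturbed in (gaussian_noise(img), salt_and_pepper(img), rotational_distortion(img)):
    assert perturbed.shape == img.shape
    assert perturbed.min() >= 0.0 and perturbed.max() <= 1.0

W, b = rng.normal(size=(10, 784)) * 0.01, np.zeros(10)
x, y = img.ravel(), np.eye(10)[3]
x_adv = fgsm(x, y, W, b, eps=0.30)
assert np.max(np.abs(x_adv - x)) <= 0.30 + 1e-9  # perturbation bounded by eps
```

Note that because FGSM perturbs every pixel by exactly ±ϵ in the gradient-sign direction, even a small ϵ can move an input far in L∞-robustness terms, which is consistent with the severe accuracy collapse the study reports at ϵ = 0.30.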