Ramadhani, Maya
Unknown Affiliation

Published: 1 document
Articles

Evaluation of the Effect of Regularization on Neural Networks for Regression Prediction: A Case Study of MLP, CNN, and FNN Models
Susandri; Zamsuri, Ahmad; Nasution, Nurliana; Ramadhani, Maya
INOVTEK Polbeng - Seri Informatika Vol. 10 No. 3 (2025): November
Publisher: P3M Politeknik Negeri Bengkalis

DOI: 10.35314/m2rcsf96

Abstract

Regularization is an important technique in deep learning for improving generalization and reducing overfitting. This study evaluated the effect of regularization on the performance of neural network models in a regression prediction task using earthquake data. We compared Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Feedforward Neural Network (FNN) architectures with L2 and Dropout regularization. The experimental results show that the MLP without regularization achieved the best performance (RMSE: 0.500, MAE: 0.380, R²: 0.625), although it was prone to overfitting. The CNN performed poorly on tabular data, while the FNN showed marginal improvement with deeper layers. The novelty of this study lies in its comparative evaluation of regularization strategies across multiple architectures for earthquake regression prediction, with practical implications for early warning systems.
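
To make the comparison concrete, below is a minimal sketch, assuming TensorFlow/Keras, of how an MLP regressor with and without L2 and Dropout regularization might be set up for a tabular regression task like the one described. The helper name build_mlp, the layer widths (64, 32), the L2 factor (1e-4), and the dropout rate (0.2) are illustrative assumptions, not the paper's reported settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_mlp(n_features: int, regularized: bool = True) -> tf.keras.Model:
    # Optional L2 weight penalty on the Dense kernels; None disables it.
    l2 = regularizers.l2(1e-4) if regularized else None
    stack = [
        tf.keras.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu", kernel_regularizer=l2),
    ]
    if regularized:
        # Randomly zero 20% of activations during training only.
        stack.append(layers.Dropout(0.2))
    stack += [
        layers.Dense(32, activation="relu", kernel_regularizer=l2),
        layers.Dense(1),  # single continuous output for regression
    ]
    model = tf.keras.Sequential(stack)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Hypothetical usage with a tabular feature matrix X_train and target y_train:
# baseline    = build_mlp(X_train.shape[1], regularized=False)
# regularized = build_mlp(X_train.shape[1], regularized=True)
# baseline.fit(X_train, y_train, validation_split=0.2, epochs=100, verbose=0)
```

Comparing the two variants' validation RMSE, MAE, and R² would mirror the kind of evaluation the abstract reports; a widening gap between training and validation loss in the unregularized model is the overfitting signature the authors note.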