Financial institutions face challenges in credit risk assessment due to fragmented data and strict privacy regulations, which hinder predictive modeling and increase financial risk. Federated Learning (FL) enables privacy-preserving collaborative modeling without sharing raw data. This study evaluates five FL aggregation methods—Federated Averaging (FedAvg), Weighted Average, Median Aggregation, Federated Proximal (FedProx), and Stochastic Controlled Averaging (SCAFFOLD)—using logistic regression on the Credit Approval dataset (690 records, five clients) with non-IID label and feature distributions. Local models were trained and aggregated over 50 communication rounds. Median Aggregation outperformed the other methods, achieving an F1-score of 97.85% and a recall of 80.6% (vs. 72.3% for the others), demonstrating robustness against data skewness. However, global model performance (85.22% for FedAvg, Weighted Average, FedProx, and SCAFFOLD; 85.80% for Median) remained static across rounds, indicating limited global improvement because local models converged quickly under non-IID conditions. The high communication cost of 50 rounds highlights a trade-off between accuracy and efficiency, motivating optimized strategies such as adaptive regularization or client sampling. This study advances theoretical understanding of FL under heterogeneity and provides practical guidance for secure, regulation-compliant credit risk modeling in financial institutions. Future work should explore larger datasets, multi-round convergence behavior, and privacy mechanisms such as differential privacy to mitigate risks such as model inversion attacks while ensuring regulatory compliance.
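To illustrate why Median Aggregation resists skewed client updates while averaging-based methods do not, the sketch below contrasts FedAvg (a dataset-size-weighted mean of client parameters) with coordinate-wise median aggregation. This is a minimal toy example with made-up parameter vectors, not the study's actual implementation; the function names and data are illustrative assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)              # shape: (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

def median_aggregate(client_weights):
    """Median aggregation: coordinate-wise median across clients,
    robust to outlying updates from skewed (non-IID) clients."""
    return np.median(np.stack(client_weights), axis=0)

# Toy example: three equally sized clients, one with a skewed update.
clients = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([5.0, -3.0])]
sizes = [100, 100, 100]

print(fedavg(clients, sizes))       # pulled toward the outlying client
print(median_aggregate(clients))    # outlier suppressed coordinate-wise
```

With equal client sizes, FedAvg reduces to a plain mean and is dragged toward the skewed update, whereas the coordinate-wise median ignores it, which mirrors the robustness the study attributes to Median Aggregation under non-IID data.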
Copyright © 2025