Federated learning enables collaborative model training across distributed clients without sharing raw data, yet it remains susceptible to privacy attacks such as membership inference. This study enhances the privacy of federated learning by integrating differential privacy and systematically evaluating its effects on model utility and adversarial robustness. A synthetic multimodal dataset was developed by combining demographic attributes from the UCI Adult dataset, mobility indicators from Google COVID-19 Mobility Reports, and semantic descriptors from LAION-400M, creating a high-dimensional, bias-reduced benchmark for privacy-preserving experimentation. Differentially private stochastic gradient descent (DP-SGD) was applied under multiple privacy budgets and ablation settings to isolate the individual contributions of gradient clipping and noise injection. Experimental results show that model accuracy increases with larger privacy budgets, while membership inference attack accuracy remains close to random guessing, indicating a strong defense. Gradient clipping proved essential for training stability, whereas excessive noise measurably degraded learning utility. The proposed framework establishes reproducible benchmarks for tuning differential privacy parameters in federated environments and demonstrates that robust privacy guarantees can be achieved without substantial loss of performance. It thereby provides practical guidance for deploying trustworthy, privacy-preserving machine learning systems in domains such as healthcare, finance, and mobility.
Copyright © 2026