Hothefa Jassim
Department of Mathematics and Computer Science, Modern College of Business and Science, Bowshar, Muscat

Published: 1 document
Evaluating Differential Privacy Mechanisms in Machine Learning with Emphasis on Utility and Robustness
Rashmi Dwivedi; Basant Kumar; Vivek Mishra; Hothefa Jassim; Ozlem Kilickaya
Emerging Science Journal Vol. 10 No. 2 (2026): April
Publisher : Ital Publication

DOI: 10.28991/ESJ-2026-010-02-07

Abstract

Federated learning enables collaborative model training across distributed clients without sharing raw data, yet it remains susceptible to inference threats such as membership inference attacks. This study aims to enhance the privacy of federated learning by integrating differential privacy and systematically evaluating its effects on model utility and adversarial robustness. A synthetic multimodal dataset was developed by combining demographic attributes from the UCI Adult dataset, mobility indicators from Google COVID-19 Mobility Reports, and semantic descriptors from LAION-400M, creating a high-dimensional and bias-reduced benchmark for privacy-preserving experimentation. Differentially private stochastic gradient descent (DP-SGD) was applied under multiple privacy budgets and ablation settings to isolate the individual contributions of gradient clipping and noise injection. Experimental results reveal that model accuracy increases with larger privacy budgets, while membership inference attack accuracy remains close to random guessing, confirming strong defense capability. Gradient clipping proved essential for training stability, whereas excessive noise caused measurable degradation in learning utility. The proposed framework establishes reproducible benchmarks for tuning differential privacy parameters in federated environments and demonstrates that robust privacy guarantees can be achieved without substantial loss of performance, providing practical guidance for deploying trustworthy, privacy-preserving machine learning systems across domains such as healthcare, finance, and mobility.
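The two mechanisms the abstract ablates, per-example gradient clipping and Gaussian noise injection, are the core of DP-SGD. A minimal illustrative sketch of one DP-SGD step on a logistic-regression loss is shown below; it is not the paper's implementation, and all hyperparameter values (`clip_norm`, `noise_multiplier`, learning rate) are placeholder choices for illustration only.

```python
# Minimal sketch of one DP-SGD step: clip each example's gradient to a fixed
# L2 norm, average, then add Gaussian noise scaled to the clipping bound.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step on a logistic-regression loss.

    Hyperparameter values are illustrative, not taken from the paper.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    # Per-example gradients of the logistic loss: (sigmoid(x.w) - y) * x
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X            # shape (n, d)
    # Clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # Average, then inject Gaussian noise calibrated to the clipping bound
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm / len(X), size=w.shape)
    return w - lr * noisy_grad

# Toy usage: a few private steps on synthetic data
rng = np.random.default_rng(42)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)   # label depends only on the first feature
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```

Setting `noise_multiplier=0` or removing the clipping step reproduces the kind of ablation the study describes, isolating each mechanism's effect on utility.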