Murthy, Gururaj Prakash
Unknown Affiliation

Published: 1 document
Gradient descent optimization based weighted federated learning for privacy-preserving framework
Murthy, Gururaj Prakash; Chavan, Chandrashekhar Pomu
IAES International Journal of Artificial Intelligence (IJ-AI), Vol. 15, No. 1: February 2026
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v15.i1.pp878-887

Abstract

Federated learning (FL) is a distributed machine learning (ML) paradigm that has gained significant attention in recent years, particularly in the internet of things (IoT) domain. FL saves communication bandwidth compared to centralized ML processes by eliminating the need to transmit raw client data to a central server, thereby enhancing data privacy. Nevertheless, participant privacy can still be compromised through inference attacks and similar threats. Additionally, the quality of the data provided by clients can differ significantly, and excessive inclusion of low-quality data during training may degrade the overall performance of the global model. Hence, this research introduces a gradient descent optimization assisted weighted federated learning (GDO-WFL) method for privacy preservation. The proposed GDO-WFL approach strengthens privacy preservation by reducing exposure to inference attacks and optimizes gradient updates for secure learning. By weighting client contributions according to data quality, the undesirable effect of low-quality data is minimized, helping to maintain the robustness and accuracy of the global model. Experimental results show that the proposed GDO-WFL approach achieves overall accuracies of 99.3% and 91.5% on the MNIST and CIFAR-10 datasets, respectively, outperforming the existing FedlabX method.
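The quality-weighted aggregation idea described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, the normalization scheme, and the choice of quality scores are assumptions for exposition, not the authors' GDO-WFL implementation.

```python
import numpy as np

def quality_weighted_aggregate(client_updates, quality_scores):
    """Aggregate client model updates, weighting each by a data-quality score.

    client_updates: list of 1-D numpy arrays (flattened model parameters)
    quality_scores: list of non-negative floats (higher = better data quality)
    """
    weights = np.asarray(quality_scores, dtype=float)
    weights = weights / weights.sum()          # normalize weights to sum to 1
    stacked = np.stack(client_updates)         # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Example: three clients; the third has low-quality data and is down-weighted
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([10.0, 10.0])]
scores = [1.0, 1.0, 0.0]                       # zero weight excludes client 3
global_update = quality_weighted_aggregate(updates, scores)
print(global_update)  # [2. 3.]
```

Down-weighting or excluding low-quality clients in this way limits their influence on the global model, which is the effect the abstract attributes to the weighting step.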