Edge computing has transformed data processing by moving computation closer to the data source, enabling real-time analysis and decision-making. The decentralized nature of edge devices, however, raises privacy and confidentiality concerns, particularly when machine learning algorithms are applied to sensitive data. This research examines privacy-preserving machine learning methods for edge computing, evaluating federated learning, homomorphic encryption, differential privacy, and secure aggregation as techniques for protecting data during machine learning at the network edge. A thorough study of these methods reveals the difficulty of balancing privacy guarantees, computational efficiency, and model accuracy. Federated learning shows promise for collaborative model training without sharing raw data, although communication overhead and convergence speed remain open problems. A hypothetical healthcare use case illustrates how federated learning can train collaborative models across many edge devices while protecting patient data; the case study underscores the need for sophisticated optimizations to overcome edge device resource constraints and to satisfy regulatory requirements. The research concludes that federated learning algorithms, privacy-preserving protocols, and ethical practices all require further development. Future directions include improving algorithms for heterogeneous edge environments, addressing ethical questions of data ownership and consent, and increasing the transparency of model decision-making. This paper presents essential insights into privacy-preserving machine learning in edge computing and advocates robust techniques suited to diverse edge environments, emphasizing that technological advances, ethical frameworks, and regulatory compliance are all necessary for secure, privacy-aware machine learning in decentralized edge settings.
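To make the federated setting concrete, the following is a minimal sketch of FedAvg-style training in Python. It simulates a few edge clients that train a logistic regression model locally and share only their updated weights with a server, which averages them. The synthetic data, helper names (`local_step`, `fed_avg`), and hyperparameters are illustrative assumptions, not drawn from the case study.

```python
# Minimal FedAvg sketch: each simulated edge client trains locally on its own
# data and shares only model weights; the server averages them. All names and
# data here are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression via gradient descent.
    Raw data (X, y) never leaves the client; only updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid activation
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average of client models weighted by data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three edge devices, each holding a small private dataset.
dim, global_w = 4, np.zeros(4)
clients = []
for _ in range(3):
    n = int(rng.integers(30, 60))
    X = rng.normal(size=(n, dim))
    y = (X @ rng.normal(size=dim) + rng.normal(scale=0.1, size=n) > 0).astype(float)
    clients.append((X, y))

# Communication rounds: broadcast the global model, train locally, aggregate.
for rnd in range(10):
    updates = [local_step(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print("global model weights after 10 rounds:", np.round(global_w, 3))
```

In a deployment matching the methods surveyed above, the shared updates would additionally be clipped and perturbed with calibrated noise for differential privacy, and combined through a secure aggregation protocol so the server never sees any individual client's update in the clear; this sketch omits those layers for brevity.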