AI is increasingly deployed in high-stakes domains such as healthcare, finance, education, and public governance, where systems must uphold fairness, accountability, transparency, and privacy. This paper highlights the critical role of Supervised Fine-Tuning (SFT) in aligning large AI models with ethical principles and regulatory frameworks such as the GDPR and the EU AI Act. Taking an interdisciplinary approach that combines regulatory analysis, technical research, and case studies, the paper proposes integrating privacy-preserving techniques (differential privacy, secure multiparty computation, and federated learning) with SFT during deployment. It also advocates incorporating Human-in-the-Loop (HITL) oversight and Explainable AI (XAI) to ensure ongoing supervision and interpretability. SFT is thus positioned not only as a technical method but also as a core enabler of responsible AI governance and public trust.
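To make the proposed pairing of differential privacy with SFT concrete, the sketch below shows a DP-SGD-style fine-tuning step in PyTorch: per-example gradients are clipped and Gaussian noise is added before the averaged update. This is a minimal illustration under assumed names and hyperparameters (dp_finetune_step, clip_norm, noise_mult), not the paper's implementation; a production system would use a vetted library with formal privacy accounting, such as Opacus.

    # Minimal DP-SGD-style supervised fine-tuning step (illustrative sketch).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    def dp_finetune_step(model, loss_fn, batch,
                         clip_norm=1.0, noise_mult=1.0, lr=1e-3):
        """One step: clip each per-example gradient to clip_norm,
        sum, add Gaussian noise, then apply a plain SGD update."""
        xs, ys = batch
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]

        for x, y in zip(xs, ys):  # per-example gradients
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
            scale = min(1.0, clip_norm / max(total_norm, 1e-12))  # clip to C
            for s, g in zip(summed, grads):
                s.add_(g * scale)

        n = len(xs)
        with torch.no_grad():
            for p, s in zip(params, summed):
                noise = torch.randn_like(s) * noise_mult * clip_norm
                p.add_(-(lr / n) * (s + noise))  # noisy averaged update

    # Toy usage on a linear model and synthetic data (stand-ins for an
    # actual pretrained model and fine-tuning corpus).
    model = nn.Linear(4, 2)
    data = DataLoader(TensorDataset(torch.randn(32, 4),
                                    torch.randint(0, 2, (32,))),
                      batch_size=8)
    for batch in data:
        dp_finetune_step(model, nn.CrossEntropyLoss(), batch)

The per-example loop is the simplest correct way to obtain clipped individual gradients; the noise_mult and clip_norm values shown are placeholders, as the actual privacy budget depends on the accounting method chosen.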