Found 3 Documents

Ethical and Responsible AI: Governance Frameworks and Policy Implications for Multi-Agent Systems
Tejaskumar Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology, Vol. 3 No. 1 (2024): March
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v3i1.1962

Abstract

Multi-agent systems (MAS), in which two or more semi-autonomous entities act collectively, have become increasingly relevant as AI is deployed at scale. The intersection of MAS and AI has opened new forms of socioeconomic exchange, creating a need for technological governance and, most challenging of all, ethical governance. Because these systems comprise networks of decision-making agents operating in decentralized environments, they raise demanding requirements for accountability, transparency, explainability, and ethical alignment. Their growing societal role calls for substantial policy intervention and an interdisciplinary governance framework. Drawing on multiple specialties, this research examines responsible AI, normative governance frameworks, and multi-agent coordination. The paper analyzes the ethical dilemmas specific to MAS, surveys existing international governance configurations, and proposes a more adaptive regulatory approach grounded in an understanding of what it means to coordinate intelligent agents. Bringing together perspectives from ethics, law, computer science, and policy studies, it sketches a path toward an AI ecosystem that is sustainable, trustworthy, and ethically grounded.
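
The governance properties this abstract names, accountability and transparency among decentralized agents, can be made concrete with a toy sketch. The snippet below shows one minimal accountability mechanism: every agent decision is appended to a shared, append-only audit log. All names here (Agent, AuditLog, decide) are illustrative assumptions, not constructs from the paper.

```python
import json
import time
from dataclasses import dataclass

@dataclass
class Decision:
    agent_id: str
    observation: float
    action: str
    rationale: str

class AuditLog:
    """Append-only record so agent decisions can be traced after the fact."""
    def __init__(self):
        self.entries = []

    def record(self, d: Decision):
        self.entries.append({"ts": time.time(), **d.__dict__})

class Agent:
    """Toy decentralized agent: decides locally, but always logs why."""
    def __init__(self, agent_id: str, threshold: float):
        self.agent_id = agent_id
        self.threshold = threshold

    def decide(self, observation: float, log: AuditLog) -> str:
        action = "approve" if observation >= self.threshold else "escalate"
        log.record(Decision(self.agent_id, observation, action,
                            f"observation vs threshold {self.threshold}"))
        return action

log = AuditLog()
agents = [Agent("a1", 0.5), Agent("a2", 0.8)]
for obs in (0.3, 0.7, 0.9):
    for agent in agents:
        agent.decide(obs, log)
print(json.dumps(log.entries, indent=2))  # full trace of who decided what, and why
```

The design choice worth noting is that logging is not optional for the agent: the decision method requires the log, which is one simple way transparency requirements can be enforced structurally rather than by convention.
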
Ethical and Responsible AI in the Age of Adversarial Diffusion Models: Challenges, Risks, and Mitigation Strategies
Tejaskumar Pujari; Anshul Goel; Deepak Kejriwal
International Journal Science and Technology, Vol. 1 No. 3 (2022): November
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v1i3.1963

Abstract

The rapid advance of diffusion models in generative AI has reshaped many fields, particularly image synthesis, video generation, and creative data augmentation. These promising developments, however, raise ethical questions because diffusion-based models are dual-use: misused, they can produce deepfaked videos, novel forms of misinformation, and material for Internet-scale cyberattacks, aggravating societal vulnerabilities. This paper explores and analyzes the ethical risks and adversarial threats posed by diffusion-based AI technologies. We lay out a basis for responsible AI, the notion of fair, accountable, transparent, and robust (FATR) systems, and discuss ongoing efforts to mitigate these risks through watermarking, model alignment, and regulatory mechanisms. Drawing on ethical, cybersecurity, military-policy, and governance perspectives, we present a conceptual model that captures the principal ethical considerations in the development and deployment of diffusion models. Advancing human-centered values will require proactive collaboration among researchers, policymakers, and civil society as generative AI grows more powerful.
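
Watermarking, one of the mitigations the abstract lists, can be illustrated with a deliberately simple sketch. The code below embeds a fixed bit pattern in the least significant bits of an image array and then verifies it; deployed provenance systems use far more robust frequency-domain or learned watermarks, so treat this only as a toy illustration of the embed/verify idea. The PATTERN tag is an assumed placeholder, not a scheme from the paper.

```python
import numpy as np

# Assumed 8-bit provenance tag, tiled across all pixels.
PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray, pattern: np.ndarray = PATTERN) -> np.ndarray:
    """Write the pattern into each pixel's least significant bit."""
    flat = image.flatten()
    bits = np.resize(pattern, flat.size)           # tile pattern over all pixels
    return ((flat & 0xFE) | bits).reshape(image.shape)

def verify(image: np.ndarray, pattern: np.ndarray = PATTERN) -> float:
    """Return the fraction of LSBs matching the pattern (~0.5 by chance)."""
    flat = image.flatten()
    bits = np.resize(pattern, flat.size)
    return float(np.mean((flat & 1) == bits))

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(img)
print(f"unmarked match rate: {verify(img):.2f}")     # near 0.5
print(f"marked match rate:   {verify(marked):.2f}")  # 1.00
```

The verification score, rather than a yes/no flag, reflects how real detectors work: a threshold on a match statistic, chosen to trade off false positives against robustness to image edits.
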
Ensuring Responsible AI: The Role of Supervised Fine-Tuning (SFT) in Upholding Integrity and Privacy Regulations
Tejaskumar Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology, Vol. 3 No. 3 (2024): November
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v3i3.1968

Abstract

AI is increasingly used in high-stakes fields such as healthcare, finance, education, and public governance, requiring systems that uphold fairness, accountability, transparency, and privacy. This paper highlights the critical role of Supervised Fine-Tuning (SFT) in aligning large AI models with ethical principles and regulatory frameworks such as the GDPR and the EU AI Act. Taking an interdisciplinary approach that combines regulatory analysis, technical research, and case studies, it proposes integrating privacy-preserving techniques (differential privacy, secure multiparty computation, and federated learning) with SFT during deployment. The research also advocates incorporating Human-in-the-Loop (HITL) oversight and Explainable AI (XAI) to ensure ongoing accountability and interpretability. SFT is positioned not only as a technical method but as a core enabler of responsible AI governance and public trust.
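
The pairing of fine-tuning with differential privacy that the abstract proposes rests on a well-known mechanism: clip each example's gradient, then add calibrated Gaussian noise before the update (the core of DP-SGD). The sketch below applies that mechanism to a toy logistic-regression "fine-tune"; it is a minimal illustration under assumed hyperparameters (CLIP_NORM, NOISE_STD), not the authors' SFT pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                   # toy "fine-tuning" data
y = (X @ rng.normal(size=8) > 0).astype(float)  # toy labels

w = np.zeros(8)
CLIP_NORM, NOISE_STD, LR, BATCH = 1.0, 0.8, 0.1, 32  # assumed hyperparameters

for step in range(200):
    idx = rng.choice(len(X), BATCH, replace=False)
    grads = []
    for i in idx:
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))     # per-example prediction
        g = (p - y[i]) * X[i]                    # per-example gradient
        g *= min(1.0, CLIP_NORM / (np.linalg.norm(g) + 1e-12))  # clip norm
        grads.append(g)
    # Average clipped gradients, then add Gaussian noise scaled to the
    # clipping bound: this is what bounds any one example's influence.
    noisy = np.mean(grads, axis=0) + rng.normal(
        scale=NOISE_STD * CLIP_NORM / BATCH, size=w.shape)
    w -= LR * noisy

acc = np.mean(((X @ w) > 0) == y.astype(bool))
print(f"train accuracy under DP-style updates: {acc:.2f}")
```

The privacy guarantee comes from the combination of clipping and noise: clipping caps each record's contribution, and the noise masks whatever remains, at a quantifiable cost in accuracy that must be tuned per application.
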