Articles

Found 4 Documents

Enhancing Cybersecurity in Edge AI through Model Distillation and Quantization: A Robust and Efficient Approach
Mangesh Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology, Vol. 1 No. 3 (2022): November
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v1i3.1957

Abstract

The rapid proliferation of Edge AI has introduced significant cybersecurity challenges, including adversarial attacks, model theft, and data privacy concerns. Traditional deep learning models deployed on edge devices often suffer from high computational complexity and memory requirements, making them vulnerable to exploitation. This paper explores the integration of model distillation and quantization techniques to enhance the security and efficiency of Edge AI systems. Model distillation reduces model complexity by transferring knowledge from a large, cumbersome model (teacher) to a compact, efficient one (student), thereby improving resilience against adversarial manipulations. Quantization further optimizes the student model by reducing bit precision, minimizing attack surfaces while maintaining performance. We present a comprehensive analysis of how these techniques mitigate cybersecurity threats such as model inversion, membership inference, and evasion attacks. Additionally, we evaluate trade-offs between model accuracy, latency, and robustness in resource-constrained edge environments. Experimental results on benchmark datasets demonstrate that distilled and quantized models achieve comparable accuracy to their full-precision counterparts while significantly reducing vulnerability to cyber threats. Our findings highlight the potential of distillation and quantization as key enablers for secure, lightweight, and high-performance Edge AI deployments.
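
For readers unfamiliar with the two techniques this abstract combines, the sketch below shows a standard teacher-student distillation loss followed by post-training dynamic quantization in PyTorch. It is a minimal illustration under assumed architectures, temperature, and loss weighting, not the configuration used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        # Soft-target KL term transfers the teacher's knowledge; the hard
        # cross-entropy term keeps the student anchored to the true labels.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Hypothetical teacher (large) and student (compact) models.
    teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10)).eval()
    student = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    def train_step(x, labels):
        with torch.no_grad():
            teacher_logits = teacher(x)  # frozen teacher provides soft targets
        loss = distillation_loss(student(x), teacher_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Post-training dynamic quantization converts the student's linear layers to
    # int8, shrinking the memory footprint for edge deployment.
    quantized_student = torch.ao.quantization.quantize_dynamic(
        student, {nn.Linear}, dtype=torch.qint8
    )

The distilled student is smaller and smoother to begin with, and quantization then reduces precision; the paper's claim is that this combination also narrows the attack surface relative to a full-precision model.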
AI-Driven Disinformation Campaigns: Detecting Synthetic Propaganda in Encrypted Messaging via Graph Neural Networks
Anil Kumar Pakina; Ashwin Sharma; Deepak Kejriwal
International Journal Science and Technology, Vol. 4 No. 1 (2025): March
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v4i1.1960

Abstract

The rapid rise of generative AI has fueled more sophisticated disinformation campaigns, particularly on encrypted messaging platforms like WhatsApp, Signal, and Telegram. While these platforms protect user privacy through end-to-end encryption, they pose significant challenges to traditional content moderation. Adversaries exploit this privacy to disseminate undetectable synthetic propaganda, influencing public opinion and destabilizing democratic processes without leaving a trace. This research proposes a privacy-preserving detection framework using Graph Neural Networks (GNNs) that focuses on non-content-based signals—such as user interactions, message propagation patterns, temporal behavior, and metadata. GNNs effectively capture relational and structural patterns in encrypted environments, allowing for the detection of coordinated inauthentic behavior without breaching user privacy. Experiments on a large-scale simulated dataset of encrypted messaging scenarios showed that the GNN-based framework achieved 94.2% accuracy and a 92.8% F1-score, outperforming traditional methods like random forests and LSTMs. It was particularly effective in identifying stealthy, low-frequency disinformation campaigns typically missed by conventional anomaly detectors. Positioned at the intersection of AI security, privacy, and disinformation detection, this study introduces a scalable and ethical solution for safeguarding digital spaces. It also initiates dialogue on the legal and ethical implications of behavioral surveillance in encrypted platforms and aligns with broader conversations on responsible AI, digital rights, and democratic resilience.
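
As a concrete illustration of the kind of model the abstract describes, the sketch below builds a small two-layer graph convolutional network over an account-interaction graph whose node features are behavioral (timing, forwarding activity) rather than message content. It uses PyTorch Geometric; the architecture, feature set, and toy labels are assumptions for illustration, not the authors' framework.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class BehaviorGCN(torch.nn.Module):
        # Two-layer GCN over an account-interaction graph; node features are
        # behavioral signals (e.g., posting cadence, forwarding rate), not content.
        def __init__(self, num_features, hidden=64, num_classes=2):
            super().__init__()
            self.conv1 = GCNConv(num_features, hidden)
            self.conv2 = GCNConv(hidden, num_classes)

        def forward(self, x, edge_index):
            h = F.relu(self.conv1(x, edge_index))           # aggregate neighbor behavior
            h = F.dropout(h, p=0.5, training=self.training)
            return self.conv2(h, edge_index)                # per-account logits

    # Toy graph: 4 accounts, 8 behavioral features each; an edge means "forwarded to".
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 3]], dtype=torch.long)
    y = torch.tensor([1, 1, 0, 0])  # hypothetical labels: 1 = coordinated, 0 = organic

    model = BehaviorGCN(num_features=8)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    loss = F.cross_entropy(model(x, edge_index), y)
    loss.backward()
    optimizer.step()

Because the inputs are graph structure and metadata only, such a classifier can in principle run without access to message plaintext, which is the privacy-preserving property the paper emphasizes.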
Ethical and Responsible AI: Governance Frameworks and Policy Implications for Multi-Agent Systems
Tejaskumar Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology, Vol. 3 No. 1 (2024): March
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v3i1.1962

Abstract

Multi-agent systems (MAS) have become increasingly relevant as collective activities are carried out by two or more autonomous entities. The intersection of MAS and AI has fostered new forms of socioeconomic exchange, creating a need for technological governance and, most challenging of all, ethical governance. These systems involve networks of decision-making agents operating in decentralized environments, which raises strong demands for accountability, transparency, explainability, and ethical alignment. Their growing role in society calls for substantial policy intervention and an interdisciplinary governance framework. Drawing on multiple fields, this research examines responsible AI, normative governance frameworks, and multi-agent coordination. The paper analyzes the ethical dilemmas that arise in MAS, reviews existing international governance configurations, and proposes a more adaptive regulatory approach grounded in the practical challenges of coordinating intelligent agents. Bringing together perspectives from ethics, law, computer science, and policy studies, it sketches a path toward an AI environment that is sustainable, trustworthy, and ethically grounded.
Ensuring Responsible AI: The Role of Supervised Fine-Tuning (SFT) in Upholding Integrity and Privacy Regulations
Tejaskumar Pujari; Anshul Goel; Ashwin Sharma
International Journal Science and Technology, Vol. 3 No. 3 (2024): November
Publisher: Asosiasi Dosen Muda Indonesia

DOI: 10.56127/ijst.v3i3.1968

Abstract

AI is increasingly used in high-stakes fields such as healthcare, finance, education, and public governance, requiring systems that uphold fairness, accountability, transparency, and privacy. This paper highlights the critical role of Supervised Fine-Tuning (SFT) in aligning large AI models with ethical principles and regulatory frameworks like the GDPR and EU AI Act. The interdisciplinary approach combines regulatory analysis, technical research, and case studies. It proposes integrating privacy-preserving techniques—differential privacy, secure multiparty computation, and federated learning—with SFT during deployment. The research also advocates incorporating Human-in-the-Loop (HITL) and Explainable AI (XAI) to ensure ongoing oversight and interpretability. SFT is positioned not only as a technical method but as a core enabler of responsible AI governance and public trust.
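
One common way to realize the combination of SFT with differential privacy that the abstract proposes is DP-SGD-style fine-tuning: clip each example's gradient and add calibrated Gaussian noise before updating. The following is a simplified, self-contained sketch of that idea (a production setup would typically use a dedicated library such as Opacus and track the privacy budget); the model, hyperparameters, and data are illustrative assumptions, not the paper's method.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 2)          # stand-in for a pretrained model being fine-tuned
    loss_fn = nn.CrossEntropyLoss()
    clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.05   # illustrative hyperparameters

    def dp_sft_step(batch_x, batch_y):
        # Per-example gradient clipping bounds each record's influence (sensitivity);
        # Gaussian noise on the summed gradient provides the privacy protection.
        grads = [torch.zeros_like(p) for p in model.parameters()]
        for x, y in zip(batch_x, batch_y):
            model.zero_grad()
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            loss.backward()
            per_ex = [p.grad.detach().clone() for p in model.parameters()]
            total_norm = torch.sqrt(sum(g.pow(2).sum() for g in per_ex))
            scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
            for acc, g in zip(grads, per_ex):
                acc += g * scale
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                noise = torch.randn_like(g) * noise_multiplier * clip_norm
                p -= lr * (g + noise) / len(batch_x)   # noisy averaged update

    dp_sft_step(torch.randn(8, 16), torch.randint(0, 2, (8,)))

The same fine-tuning loop could sit behind a Human-in-the-Loop review step or be distributed across clients in a federated setting, which is how the paper positions SFT as a governance mechanism rather than only a training technique.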